Feb 13 15:11:05.917455 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:11:05.917478 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 13:51:50 -00 2025
Feb 13 15:11:05.917487 kernel: KASLR enabled
Feb 13 15:11:05.917493 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:11:05.917499 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 
Feb 13 15:11:05.917504 kernel: random: crng init done
Feb 13 15:11:05.917511 kernel: secureboot: Secure boot disabled
Feb 13 15:11:05.917517 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:11:05.917523 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:11:05.917530 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS  BXPC     00000001      01000013)
Feb 13 15:11:05.917536 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917542 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917547 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917553 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917560 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917568 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917575 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917581 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917587 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:11:05.917593 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:11:05.917599 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:11:05.917605 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:11:05.917611 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 15:11:05.917617 kernel: Zone ranges:
Feb 13 15:11:05.917623 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:11:05.917631 kernel:   DMA32    empty
Feb 13 15:11:05.917637 kernel:   Normal   empty
Feb 13 15:11:05.917643 kernel: Movable zone start for each node
Feb 13 15:11:05.917649 kernel: Early memory node ranges
Feb 13 15:11:05.917655 kernel:   node   0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 15:11:05.917661 kernel:   node   0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 15:11:05.917667 kernel:   node   0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 15:11:05.917673 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:11:05.917679 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:11:05.917685 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:11:05.917692 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:11:05.917698 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:11:05.917705 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:11:05.917725 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:11:05.917731 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:11:05.917740 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:11:05.917747 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:11:05.917754 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:11:05.917762 kernel: psci: Trusted OS migration not required
Feb 13 15:11:05.917768 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:11:05.917775 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:11:05.917781 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:11:05.917788 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:11:05.917795 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Feb 13 15:11:05.917801 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:11:05.917808 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:11:05.917814 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:11:05.917820 kernel: CPU features: detected: Spectre-v4
Feb 13 15:11:05.917828 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:11:05.917835 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:11:05.917842 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:11:05.917849 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:11:05.917855 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:11:05.917861 kernel: alternatives: applying boot alternatives
Feb 13 15:11:05.917869 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:11:05.917875 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:11:05.917882 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:11:05.917888 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:11:05.917895 kernel: Fallback order for Node 0: 0 
Feb 13 15:11:05.917903 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Feb 13 15:11:05.917909 kernel: Policy zone: DMA
Feb 13 15:11:05.917915 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:11:05.917930 kernel: software IO TLB: area num 4.
Feb 13 15:11:05.917937 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:11:05.917944 kernel: Memory: 2387536K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184752K reserved, 0K cma-reserved)
Feb 13 15:11:05.917950 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:11:05.917957 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:11:05.917964 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:11:05.917970 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:11:05.917977 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:11:05.917983 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:11:05.917992 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:11:05.917999 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:11:05.918005 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:11:05.918012 kernel: GICv3: 256 SPIs implemented
Feb 13 15:11:05.918018 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:11:05.918024 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:11:05.918031 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:11:05.918045 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:11:05.918052 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:11:05.918058 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:11:05.918065 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:11:05.918073 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:11:05.918080 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:11:05.918087 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:11:05.918093 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:11:05.918099 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:11:05.918106 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:11:05.918113 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:11:05.918119 kernel: arm-pv: using stolen time PV
Feb 13 15:11:05.918126 kernel: Console: colour dummy device 80x25
Feb 13 15:11:05.918133 kernel: ACPI: Core revision 20230628
Feb 13 15:11:05.918151 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:11:05.918160 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:11:05.918167 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:11:05.918173 kernel: landlock: Up and running.
Feb 13 15:11:05.918180 kernel: SELinux:  Initializing.
Feb 13 15:11:05.918186 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:11:05.918193 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:11:05.918200 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:11:05.918206 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:11:05.918213 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:11:05.918221 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:11:05.918228 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:11:05.918234 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:11:05.918241 kernel: Remapping and enabling EFI services.
Feb 13 15:11:05.918247 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:11:05.918254 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:11:05.918261 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:11:05.918267 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:11:05.918274 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:11:05.918282 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:11:05.918289 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:11:05.918300 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:11:05.918308 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:11:05.918315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:11:05.918322 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:11:05.918329 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:11:05.918336 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:11:05.918343 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:11:05.918351 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:11:05.918358 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:11:05.918365 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:11:05.918372 kernel: SMP: Total of 4 processors activated.
Feb 13 15:11:05.918379 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:11:05.918386 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:11:05.918393 kernel: CPU features: detected: Common not Private translations
Feb 13 15:11:05.918400 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:11:05.918408 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:11:05.918415 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:11:05.918422 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:11:05.918429 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:11:05.918436 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:11:05.918443 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:11:05.918450 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:11:05.918457 kernel: alternatives: applying system-wide alternatives
Feb 13 15:11:05.918464 kernel: devtmpfs: initialized
Feb 13 15:11:05.918471 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:11:05.918479 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:11:05.918486 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:11:05.918493 kernel: SMBIOS 3.0.0 present.
Feb 13 15:11:05.918500 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:11:05.918507 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:11:05.918514 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:11:05.918521 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:11:05.918528 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:11:05.918535 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:11:05.918544 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:11:05.918551 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:11:05.918557 kernel: cpuidle: using governor menu
Feb 13 15:11:05.918564 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:11:05.918571 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:11:05.918578 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:11:05.918585 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:11:05.918592 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:11:05.918599 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:11:05.918607 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 15:11:05.918614 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:11:05.918624 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:11:05.918632 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:11:05.918638 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:11:05.918645 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:11:05.918652 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:11:05.918659 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:11:05.918667 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:11:05.918674 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:11:05.918681 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:11:05.918688 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:11:05.918695 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:11:05.918702 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:11:05.918709 kernel: ACPI: Interpreter enabled
Feb 13 15:11:05.918716 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:11:05.918723 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:11:05.918730 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:11:05.918738 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:11:05.918745 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:11:05.918889 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:11:05.918974 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:11:05.919042 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:11:05.919107 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:11:05.919193 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:11:05.919207 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Feb 13 15:11:05.919214 kernel: PCI host bridge to bus 0000:00
Feb 13 15:11:05.919286 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:11:05.919344 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 13 15:11:05.919402 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:11:05.919460 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:11:05.919538 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:11:05.919617 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:11:05.919684 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Feb 13 15:11:05.919749 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:11:05.919814 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:11:05.919879 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:11:05.919953 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:11:05.920018 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Feb 13 15:11:05.920080 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:11:05.920164 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 13 15:11:05.920228 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:11:05.920237 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:11:05.920245 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:11:05.920252 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:11:05.920259 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:11:05.920268 kernel: iommu: Default domain type: Translated
Feb 13 15:11:05.920275 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:11:05.920282 kernel: efivars: Registered efivars operations
Feb 13 15:11:05.920289 kernel: vgaarb: loaded
Feb 13 15:11:05.920296 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:11:05.920303 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:11:05.920310 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:11:05.920317 kernel: pnp: PnP ACPI init
Feb 13 15:11:05.920389 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:11:05.920400 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:11:05.920408 kernel: NET: Registered PF_INET protocol family
Feb 13 15:11:05.920415 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:11:05.920422 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:11:05.920429 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:11:05.920436 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:11:05.920443 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:11:05.920450 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:11:05.920457 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:11:05.920465 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:11:05.920472 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:11:05.920479 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:11:05.920486 kernel: kvm [1]: HYP mode not available
Feb 13 15:11:05.920493 kernel: Initialise system trusted keyrings
Feb 13 15:11:05.920500 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:11:05.920507 kernel: Key type asymmetric registered
Feb 13 15:11:05.920514 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:11:05.920521 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:11:05.920530 kernel: io scheduler mq-deadline registered
Feb 13 15:11:05.920537 kernel: io scheduler kyber registered
Feb 13 15:11:05.920544 kernel: io scheduler bfq registered
Feb 13 15:11:05.920551 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:11:05.920558 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:11:05.920566 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:11:05.920631 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:11:05.920640 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:11:05.920648 kernel: thunder_xcv, ver 1.0
Feb 13 15:11:05.920656 kernel: thunder_bgx, ver 1.0
Feb 13 15:11:05.920663 kernel: nicpf, ver 1.0
Feb 13 15:11:05.920670 kernel: nicvf, ver 1.0
Feb 13 15:11:05.920748 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:11:05.920810 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:11:05 UTC (1739459465)
Feb 13 15:11:05.920820 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:11:05.920827 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:11:05.920834 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:11:05.920842 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:11:05.920850 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:11:05.920857 kernel: Segment Routing with IPv6
Feb 13 15:11:05.920863 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:11:05.920870 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:11:05.920877 kernel: Key type dns_resolver registered
Feb 13 15:11:05.920884 kernel: registered taskstats version 1
Feb 13 15:11:05.920891 kernel: Loading compiled-in X.509 certificates
Feb 13 15:11:05.920898 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 03c2ececc548f4ae45f50171451f5c036e2757d4'
Feb 13 15:11:05.920907 kernel: Key type .fscrypt registered
Feb 13 15:11:05.920914 kernel: Key type fscrypt-provisioning registered
Feb 13 15:11:05.920929 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:11:05.920936 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:11:05.920943 kernel: ima: No architecture policies found
Feb 13 15:11:05.920950 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:11:05.920957 kernel: clk: Disabling unused clocks
Feb 13 15:11:05.920964 kernel: Freeing unused kernel memory: 38336K
Feb 13 15:11:05.920971 kernel: Run /init as init process
Feb 13 15:11:05.920980 kernel:   with arguments:
Feb 13 15:11:05.920987 kernel:     /init
Feb 13 15:11:05.920994 kernel:   with environment:
Feb 13 15:11:05.921000 kernel:     HOME=/
Feb 13 15:11:05.921008 kernel:     TERM=linux
Feb 13 15:11:05.921014 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:11:05.921022 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:11:05.921032 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:11:05.921042 systemd[1]: Detected virtualization kvm.
Feb 13 15:11:05.921049 systemd[1]: Detected architecture arm64.
Feb 13 15:11:05.921056 systemd[1]: Running in initrd.
Feb 13 15:11:05.921064 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:11:05.921072 systemd[1]: Hostname set to <localhost>.
Feb 13 15:11:05.921079 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:11:05.921086 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:11:05.921094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:11:05.921103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:11:05.921111 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:11:05.921118 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:11:05.921126 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:11:05.921135 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:11:05.921153 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:11:05.921163 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:11:05.921171 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:11:05.921182 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:11:05.921192 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:11:05.921201 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:11:05.921208 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:11:05.921216 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:11:05.921223 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:11:05.921231 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:11:05.921240 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:11:05.921248 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:11:05.921256 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:11:05.921263 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:11:05.921271 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:11:05.921278 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:11:05.921286 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:11:05.921294 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:11:05.921302 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:11:05.921310 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:11:05.921317 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:11:05.921325 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:11:05.921332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:11:05.921340 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:11:05.921348 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:11:05.921357 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:11:05.921386 systemd-journald[240]: Collecting audit messages is disabled.
Feb 13 15:11:05.921408 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:11:05.921416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:11:05.921424 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:11:05.921432 systemd-journald[240]: Journal started
Feb 13 15:11:05.921450 systemd-journald[240]: Runtime Journal (/run/log/journal/87a57fa094da421ca1780df51b760d2a) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:11:05.911606 systemd-modules-load[241]: Inserted module 'overlay'
Feb 13 15:11:05.926169 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:11:05.926198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:11:05.929164 kernel: Bridge firewalling registered
Feb 13 15:11:05.929165 systemd-modules-load[241]: Inserted module 'br_netfilter'
Feb 13 15:11:05.929962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:11:05.941338 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:11:05.943190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:11:05.947148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:11:05.949220 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:11:05.956236 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:11:05.958572 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:11:05.960068 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:11:05.963533 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:11:05.977349 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:11:05.979782 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:11:05.989441 dracut-cmdline[282]: dracut-dracut-053
Feb 13 15:11:05.992340 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=26b1bb981574844309559baa9983d7ef1e1e8283aa92ecd6061030daf7cdbbef
Feb 13 15:11:06.017203 systemd-resolved[284]: Positive Trust Anchors:
Feb 13 15:11:06.017223 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:11:06.017253 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:11:06.021713 systemd-resolved[284]: Defaulting to hostname 'linux'.
Feb 13 15:11:06.022680 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:11:06.027547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:11:06.063169 kernel: SCSI subsystem initialized
Feb 13 15:11:06.068158 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:11:06.075156 kernel: iscsi: registered transport (tcp)
Feb 13 15:11:06.088433 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:11:06.088500 kernel: QLogic iSCSI HBA Driver
Feb 13 15:11:06.128488 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:11:06.140284 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:11:06.156372 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:11:06.156410 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:11:06.158006 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:11:06.204181 kernel: raid6: neonx8   gen() 13882 MB/s
Feb 13 15:11:06.221178 kernel: raid6: neonx4   gen() 15700 MB/s
Feb 13 15:11:06.238164 kernel: raid6: neonx2   gen() 13094 MB/s
Feb 13 15:11:06.255160 kernel: raid6: neonx1   gen() 10378 MB/s
Feb 13 15:11:06.272162 kernel: raid6: int64x8  gen()  6695 MB/s
Feb 13 15:11:06.289163 kernel: raid6: int64x4  gen()  7252 MB/s
Feb 13 15:11:06.306161 kernel: raid6: int64x2  gen()  6023 MB/s
Feb 13 15:11:06.323271 kernel: raid6: int64x1  gen()  4999 MB/s
Feb 13 15:11:06.323284 kernel: raid6: using algorithm neonx4 gen() 15700 MB/s
Feb 13 15:11:06.341258 kernel: raid6: .... xor() 12333 MB/s, rmw enabled
Feb 13 15:11:06.341272 kernel: raid6: using neon recovery algorithm
Feb 13 15:11:06.346606 kernel: xor: measuring software checksum speed
Feb 13 15:11:06.346624 kernel:    8regs           : 20665 MB/sec
Feb 13 15:11:06.347308 kernel:    32regs          : 21664 MB/sec
Feb 13 15:11:06.348613 kernel:    arm64_neon      : 27579 MB/sec
Feb 13 15:11:06.348625 kernel: xor: using function: arm64_neon (27579 MB/sec)
Feb 13 15:11:06.398163 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:11:06.408967 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:11:06.423308 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:11:06.436795 systemd-udevd[466]: Using default interface naming scheme 'v255'.
Feb 13 15:11:06.440424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:11:06.449330 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:11:06.461461 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Feb 13 15:11:06.487946 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:11:06.498344 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:11:06.536694 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:11:06.546339 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:11:06.557019 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:11:06.558756 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:11:06.560413 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:11:06.562835 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:11:06.572352 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:11:06.582030 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:11:06.588188 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:11:06.598859 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:11:06.598990 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:11:06.599008 kernel: GPT:9289727 != 19775487
Feb 13 15:11:06.599017 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:11:06.599027 kernel: GPT:9289727 != 19775487
Feb 13 15:11:06.599036 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:11:06.599045 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:11:06.600067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:11:06.600757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:11:06.603887 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:11:06.605014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:11:06.605145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:11:06.608884 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:11:06.616667 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:11:06.620144 kernel: BTRFS: device fsid b3d3c5e7-c505-4391-bb7a-de2a572c0855 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (511)
Feb 13 15:11:06.624154 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (509)
Feb 13 15:11:06.630938 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:11:06.632377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:11:06.645233 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:11:06.655903 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:11:06.657225 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:11:06.670643 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:11:06.685288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:11:06.687120 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:11:06.692094 disk-uuid[554]: Primary Header is updated.
Feb 13 15:11:06.692094 disk-uuid[554]: Secondary Entries is updated.
Feb 13 15:11:06.692094 disk-uuid[554]: Secondary Header is updated.
Feb 13 15:11:06.700178 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:11:06.703722 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:11:07.707161 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:11:07.707488 disk-uuid[555]: The operation has completed successfully.
Feb 13 15:11:07.731070 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:11:07.731190 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:11:07.778298 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:11:07.781048 sh[575]: Success
Feb 13 15:11:07.796162 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:11:07.845982 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:11:07.847936 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:11:07.849101 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:11:07.861043 kernel: BTRFS info (device dm-0): first mount of filesystem b3d3c5e7-c505-4391-bb7a-de2a572c0855
Feb 13 15:11:07.861091 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:11:07.861112 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:11:07.863548 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:11:07.863573 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:11:07.866984 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:11:07.868369 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:11:07.879288 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:11:07.880780 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:11:07.890118 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:11:07.890167 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:11:07.891159 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:11:07.894171 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:11:07.901475 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:11:07.903475 kernel: BTRFS info (device vda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:11:07.907598 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:11:07.914322 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:11:07.966357 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:11:07.978261 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:11:08.006883 systemd-networkd[765]: lo: Link UP
Feb 13 15:11:08.006896 systemd-networkd[765]: lo: Gained carrier
Feb 13 15:11:08.007731 systemd-networkd[765]: Enumeration completed
Feb 13 15:11:08.007820 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:11:08.008222 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:11:08.008226 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:11:08.008973 systemd-networkd[765]: eth0: Link UP
Feb 13 15:11:08.008976 systemd-networkd[765]: eth0: Gained carrier
Feb 13 15:11:08.008982 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:11:08.009428 systemd[1]: Reached target network.target - Network.
Feb 13 15:11:08.010124 ignition[674]: Ignition 2.20.0
Feb 13 15:11:08.010130 ignition[674]: Stage: fetch-offline
Feb 13 15:11:08.010177 ignition[674]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:11:08.010185 ignition[674]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:11:08.010325 ignition[674]: parsed url from cmdline: ""
Feb 13 15:11:08.010328 ignition[674]: no config URL provided
Feb 13 15:11:08.010332 ignition[674]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:11:08.010341 ignition[674]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:11:08.010362 ignition[674]: op(1): [started]  loading QEMU firmware config module
Feb 13 15:11:08.010366 ignition[674]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:11:08.017349 ignition[674]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:11:08.028181 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:11:08.064362 ignition[674]: parsing config with SHA512: ab622bd2fad7ec9de12f83dac5e28fded9aed21c2da4593e94455c014ce0cc926f613953b1cbab4d7890ca12333c0588fbcf191058b43b00cdcdfb4a9891bee3
Feb 13 15:11:08.070784 unknown[674]: fetched base config from "system"
Feb 13 15:11:08.070799 unknown[674]: fetched user config from "qemu"
Feb 13 15:11:08.071695 ignition[674]: fetch-offline: fetch-offline passed
Feb 13 15:11:08.071776 ignition[674]: Ignition finished successfully
Feb 13 15:11:08.073315 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:11:08.074662 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:11:08.081288 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:11:08.093720 ignition[776]: Ignition 2.20.0
Feb 13 15:11:08.093731 ignition[776]: Stage: kargs
Feb 13 15:11:08.093896 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:11:08.093906 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:11:08.094835 ignition[776]: kargs: kargs passed
Feb 13 15:11:08.094879 ignition[776]: Ignition finished successfully
Feb 13 15:11:08.098540 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:11:08.109282 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:11:08.118906 ignition[784]: Ignition 2.20.0
Feb 13 15:11:08.118923 ignition[784]: Stage: disks
Feb 13 15:11:08.119072 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:11:08.119081 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:11:08.119928 ignition[784]: disks: disks passed
Feb 13 15:11:08.119969 ignition[784]: Ignition finished successfully
Feb 13 15:11:08.121399 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:11:08.123004 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:11:08.124646 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:11:08.126636 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:11:08.128445 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:11:08.129859 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:11:08.139301 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:11:08.148632 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:11:08.152737 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:11:08.167241 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:11:08.209158 kernel: EXT4-fs (vda9): mounted filesystem f78dcc36-7881-4d16-ad8b-28e23dfbdad0 r/w with ordered data mode. Quota mode: none.
Feb 13 15:11:08.209695 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:11:08.210935 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:11:08.228219 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:11:08.229891 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:11:08.231337 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:11:08.231381 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:11:08.231405 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:11:08.235535 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:11:08.237701 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:11:08.240736 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803)
Feb 13 15:11:08.240757 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:11:08.240768 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:11:08.240777 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:11:08.246354 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:11:08.247302 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:11:08.287300 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:11:08.291378 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:11:08.295124 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:11:08.298972 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:11:08.366169 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:11:08.371227 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:11:08.372725 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:11:08.378153 kernel: BTRFS info (device vda6): last unmount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:11:08.392632 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:11:08.394476 ignition[916]: INFO     : Ignition 2.20.0
Feb 13 15:11:08.394476 ignition[916]: INFO     : Stage: mount
Feb 13 15:11:08.394476 ignition[916]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:11:08.394476 ignition[916]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:11:08.394476 ignition[916]: INFO     : mount: mount passed
Feb 13 15:11:08.394476 ignition[916]: INFO     : Ignition finished successfully
Feb 13 15:11:08.395425 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:11:08.404263 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:11:08.902570 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:11:08.911306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:11:08.917168 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930)
Feb 13 15:11:08.919791 kernel: BTRFS info (device vda6): first mount of filesystem c44a03df-bf46-42eb-b6fb-d68275519011
Feb 13 15:11:08.919807 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:11:08.919817 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:11:08.923163 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:11:08.923765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:11:08.947125 ignition[947]: INFO     : Ignition 2.20.0
Feb 13 15:11:08.947125 ignition[947]: INFO     : Stage: files
Feb 13 15:11:08.948842 ignition[947]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:11:08.948842 ignition[947]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:11:08.948842 ignition[947]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:11:08.952650 ignition[947]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:11:08.952650 ignition[947]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:11:08.953240 unknown[947]: wrote ssh authorized keys file for user: core
Feb 13 15:11:08.955531 ignition[947]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:11:08.955531 ignition[947]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:11:08.955531 ignition[947]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:11:08.955531 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 15:11:08.955531 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Feb 13 15:11:09.010033 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:11:09.211409 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Feb 13 15:11:09.211409 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:11:09.215103 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:11:09.537339 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:11:09.556060 systemd-networkd[765]: eth0: Gained IPv6LL
Feb 13 15:11:09.597883 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 15:11:09.599947 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 15:11:09.884183 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:11:10.454249 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 15:11:10.454249 ignition[947]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Feb 13 15:11:10.457983 ignition[947]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:11:10.457983 ignition[947]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:11:10.457983 ignition[947]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:11:10.457983 ignition[947]: INFO     : files: op(e): [started]  processing unit "coreos-metadata.service"
Feb 13 15:11:10.457983 ignition[947]: INFO     : files: op(e): op(f): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:11:10.457983 ignition[947]: INFO     : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:11:10.457983 ignition[947]: INFO     : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:11:10.457983 ignition[947]: INFO     : files: op(10): [started]  setting preset to disabled for "coreos-metadata.service"
Feb 13 15:11:10.473010 ignition[947]: INFO     : files: op(10): op(11): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:11:10.475860 ignition[947]: INFO     : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:11:10.478388 ignition[947]: INFO     : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:11:10.478388 ignition[947]: INFO     : files: op(12): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 15:11:10.478388 ignition[947]: INFO     : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:11:10.478388 ignition[947]: INFO     : files: createResultFile: createFiles: op(13): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:11:10.478388 ignition[947]: INFO     : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:11:10.478388 ignition[947]: INFO     : files: files passed
Feb 13 15:11:10.478388 ignition[947]: INFO     : Ignition finished successfully
Feb 13 15:11:10.478874 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:11:10.490263 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:11:10.492041 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:11:10.495372 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:11:10.496214 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:11:10.500092 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:11:10.501478 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:11:10.501478 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:11:10.504524 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:11:10.503721 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:11:10.506087 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:11:10.520368 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:11:10.540372 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:11:10.541208 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:11:10.542634 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:11:10.544479 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:11:10.546305 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:11:10.553255 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:11:10.565069 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:11:10.573294 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:11:10.580523 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:11:10.581854 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:11:10.584055 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:11:10.585918 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:11:10.586050 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:11:10.588698 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:11:10.590801 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:11:10.592442 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:11:10.594153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:11:10.596128 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:11:10.598145 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:11:10.600006 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:11:10.601991 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:11:10.603968 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:11:10.605695 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:11:10.607255 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:11:10.607377 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:11:10.609785 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:11:10.611735 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:11:10.613723 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:11:10.618970 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:11:10.620282 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:11:10.620425 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:11:10.623362 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:11:10.623479 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:11:10.625491 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:11:10.627087 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:11:10.635628 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:11:10.636970 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:11:10.639095 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:11:10.640740 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:11:10.640827 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:11:10.642386 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:11:10.642471 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:11:10.644018 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:11:10.644125 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:11:10.645922 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:11:10.646022 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:11:10.660309 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:11:10.661216 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:11:10.661350 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:11:10.664466 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:11:10.665965 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:11:10.666099 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:11:10.670235 ignition[1002]: INFO     : Ignition 2.20.0
Feb 13 15:11:10.670235 ignition[1002]: INFO     : Stage: umount
Feb 13 15:11:10.673976 ignition[1002]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:11:10.673976 ignition[1002]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:11:10.673976 ignition[1002]: INFO     : umount: umount passed
Feb 13 15:11:10.673976 ignition[1002]: INFO     : Ignition finished successfully
Feb 13 15:11:10.670820 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:11:10.671021 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:11:10.676745 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:11:10.676850 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:11:10.678977 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:11:10.679574 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:11:10.681217 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:11:10.684410 systemd[1]: Stopped target network.target - Network.
Feb 13 15:11:10.685570 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:11:10.685633 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:11:10.687361 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:11:10.687409 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:11:10.689204 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:11:10.689250 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:11:10.691199 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:11:10.691246 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:11:10.693124 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:11:10.694880 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:11:10.706002 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:11:10.706179 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:11:10.709896 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:11:10.710200 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:11:10.710280 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:11:10.713806 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:11:10.714395 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:11:10.714444 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:11:10.723238 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:11:10.724215 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:11:10.724279 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:11:10.726744 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:11:10.726795 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:11:10.729957 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:11:10.730004 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:11:10.731978 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:11:10.732021 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:11:10.735215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:11:10.737122 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:11:10.737195 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:11:10.745772 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:11:10.747980 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:11:10.752475 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:11:10.752604 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:11:10.754956 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:11:10.755039 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:11:10.756707 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:11:10.756752 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:11:10.758412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:11:10.758443 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:11:10.760133 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:11:10.760197 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:11:10.762858 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:11:10.762917 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:11:10.765854 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:11:10.765914 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:11:10.768032 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:11:10.768078 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:11:10.781361 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:11:10.782447 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:11:10.782518 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:11:10.785634 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:11:10.785686 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:11:10.789541 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:11:10.789591 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:11:10.789876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:11:10.789970 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:11:10.791558 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:11:10.794210 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:11:10.803619 systemd[1]: Switching root.
Feb 13 15:11:10.837390 systemd-journald[240]: Journal stopped
Feb 13 15:11:11.651531 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:11:11.651587 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:11:11.651599 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:11:11.651611 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:11:11.651621 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:11:11.651630 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:11:11.651639 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:11:11.651648 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:11:11.651657 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:11:11.651666 kernel: audit: type=1403 audit(1739459471.013:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:11:11.651676 systemd[1]: Successfully loaded SELinux policy in 32.036ms.
Feb 13 15:11:11.651697 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.809ms.
Feb 13 15:11:11.651710 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:11:11.651721 systemd[1]: Detected virtualization kvm.
Feb 13 15:11:11.651731 systemd[1]: Detected architecture arm64.
Feb 13 15:11:11.651741 systemd[1]: Detected first boot.
Feb 13 15:11:11.651751 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:11:11.651764 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:11:11.651774 zram_generator::config[1048]: No configuration found.
Feb 13 15:11:11.651788 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:11:11.651800 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:11:11.651812 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:11:11.651822 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:11:11.651832 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:11:11.651842 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:11:11.651852 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:11:11.651861 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:11:11.651871 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:11:11.651882 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:11:11.651893 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:11:11.651910 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:11:11.651921 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:11:11.651932 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:11:11.651942 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:11:11.651954 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:11:11.651965 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:11:11.651975 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:11:11.651985 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:11:11.651997 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:11:11.652008 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:11:11.652018 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:11:11.652028 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:11:11.652039 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:11:11.652049 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:11:11.652059 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:11:11.652071 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:11:11.652082 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:11:11.652092 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:11:11.652102 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:11:11.652112 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:11:11.652123 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:11:11.652134 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:11:11.652152 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:11:11.652163 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:11:11.652173 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:11:11.652185 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:11:11.652195 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:11:11.652205 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:11:11.652215 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:11:11.652225 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:11:11.652235 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:11:11.652246 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:11:11.652256 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:11:11.652268 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:11:11.652278 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:11:11.652289 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:11:11.652299 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:11:11.652310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:11:11.652320 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:11:11.652330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:11:11.652340 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:11:11.652350 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:11:11.652363 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:11:11.652373 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:11:11.652384 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:11:11.652393 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:11:11.652403 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:11:11.652413 kernel: fuse: init (API version 7.39)
Feb 13 15:11:11.652424 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:11:11.652440 kernel: loop: module loaded
Feb 13 15:11:11.652454 kernel: ACPI: bus type drm_connector registered
Feb 13 15:11:11.652463 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:11:11.652474 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:11:11.652484 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:11:11.652496 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:11:11.652506 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:11:11.652535 systemd-journald[1127]: Collecting audit messages is disabled.
Feb 13 15:11:11.652556 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:11:11.652569 systemd-journald[1127]: Journal started
Feb 13 15:11:11.652590 systemd-journald[1127]: Runtime Journal (/run/log/journal/87a57fa094da421ca1780df51b760d2a) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:11:11.423775 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:11:11.438173 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:11:11.438583 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:11:11.654813 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:11:11.656208 systemd[1]: Stopped verity-setup.service.
Feb 13 15:11:11.661605 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:11:11.662336 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:11:11.663527 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:11:11.664813 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:11:11.666003 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:11:11.667318 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:11:11.668594 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:11:11.671168 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:11:11.672657 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:11:11.674292 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:11:11.674472 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:11:11.675945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:11:11.676112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:11:11.678510 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:11:11.678759 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:11:11.680216 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:11:11.680478 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:11:11.682057 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:11:11.682319 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:11:11.685483 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:11:11.685656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:11:11.687432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:11:11.688935 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:11:11.690567 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:11:11.692370 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:11:11.705237 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:11:11.714250 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:11:11.716587 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:11:11.717839 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:11:11.717971 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:11:11.720369 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:11:11.722842 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:11:11.725067 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:11:11.726297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:11:11.727810 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:11:11.730180 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:11:11.731344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:11:11.732376 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:11:11.733559 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:11:11.736638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:11:11.744267 systemd-journald[1127]: Time spent on flushing to /var/log/journal/87a57fa094da421ca1780df51b760d2a is 16.505ms for 871 entries.
Feb 13 15:11:11.744267 systemd-journald[1127]: System Journal (/var/log/journal/87a57fa094da421ca1780df51b760d2a) is 8M, max 195.6M, 187.6M free.
Feb 13 15:11:11.785612 systemd-journald[1127]: Received client request to flush runtime journal.
Feb 13 15:11:11.785666 kernel: loop0: detected capacity change from 0 to 113512
Feb 13 15:11:11.785692 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:11:11.744195 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:11:11.749438 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:11:11.755184 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:11:11.757036 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:11:11.759394 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:11:11.761029 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:11:11.768528 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:11:11.773174 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:11:11.780541 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:11:11.789431 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:11:11.792861 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:11:11.794591 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:11:11.802163 kernel: loop1: detected capacity change from 0 to 123192
Feb 13 15:11:11.814042 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:11:11.814835 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:11:11.820641 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:11:11.831454 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:11:11.834255 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:11:11.843158 kernel: loop2: detected capacity change from 0 to 201592
Feb 13 15:11:11.854486 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 15:11:11.854501 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 15:11:11.860084 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:11:11.876185 kernel: loop3: detected capacity change from 0 to 113512
Feb 13 15:11:11.882605 kernel: loop4: detected capacity change from 0 to 123192
Feb 13 15:11:11.890166 kernel: loop5: detected capacity change from 0 to 201592
Feb 13 15:11:11.896312 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:11:11.896799 (sd-merge)[1192]: Merged extensions into '/usr'.
Feb 13 15:11:11.900061 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:11:11.900212 systemd[1]: Reloading...
Feb 13 15:11:11.959183 zram_generator::config[1223]: No configuration found.
Feb 13 15:11:12.045962 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:11:12.069858 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:11:12.097781 systemd[1]: Reloading finished in 197 ms.
Feb 13 15:11:12.119944 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:11:12.123172 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:11:12.137379 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:11:12.139443 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:11:12.149794 systemd[1]: Reload requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:11:12.149934 systemd[1]: Reloading...
Feb 13 15:11:12.157482 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:11:12.157694 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:11:12.158346 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:11:12.158564 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Feb 13 15:11:12.158613 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Feb 13 15:11:12.161425 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:11:12.161436 systemd-tmpfiles[1255]: Skipping /boot
Feb 13 15:11:12.170072 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:11:12.170087 systemd-tmpfiles[1255]: Skipping /boot
Feb 13 15:11:12.199178 zram_generator::config[1284]: No configuration found.
Feb 13 15:11:12.283323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:11:12.333952 systemd[1]: Reloading finished in 183 ms.
Feb 13 15:11:12.344928 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:11:12.363216 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:11:12.371080 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:11:12.373680 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:11:12.376118 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:11:12.382500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:11:12.386150 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:11:12.389983 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:11:12.394209 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:11:12.400976 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:11:12.403299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:11:12.406744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:11:12.409644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:11:12.409779 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:11:12.410700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:11:12.412601 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:11:12.414762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:11:12.415027 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:11:12.417196 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:11:12.419041 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:11:12.419273 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:11:12.428479 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:11:12.433745 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:11:12.436753 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Feb 13 15:11:12.442932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:11:12.447515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:11:12.453350 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:11:12.454711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:11:12.454848 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:11:12.458516 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:11:12.462424 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:11:12.468635 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:11:12.470755 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:11:12.473590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:11:12.473783 augenrules[1367]: No rules
Feb 13 15:11:12.473747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:11:12.476355 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:11:12.476556 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:11:12.478274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:11:12.478440 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:11:12.480207 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:11:12.480377 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:11:12.482145 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:11:12.494110 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:11:12.506317 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:11:12.507389 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:11:12.512783 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:11:12.519448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:11:12.522075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:11:12.525311 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:11:12.527434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:11:12.527484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:11:12.529861 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:11:12.534479 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:11:12.535781 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:11:12.537363 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:11:12.538824 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:11:12.539015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:11:12.540460 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:11:12.540644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:11:12.542003 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:11:12.542208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:11:12.543693 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:11:12.543866 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:11:12.546498 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:11:12.559367 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:11:12.559444 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:11:12.559599 augenrules[1389]: /sbin/augenrules: No change
Feb 13 15:11:12.569206 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1362)
Feb 13 15:11:12.583178 augenrules[1428]: No rules
Feb 13 15:11:12.584091 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:11:12.584318 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:11:12.620358 systemd-resolved[1324]: Positive Trust Anchors:
Feb 13 15:11:12.620679 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:11:12.620765 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:11:12.627794 systemd-resolved[1324]: Defaulting to hostname 'linux'.
Feb 13 15:11:12.629533 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:11:12.630869 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:11:12.632849 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:11:12.634079 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:11:12.638226 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:11:12.649806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:11:12.669168 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:11:12.670863 systemd-networkd[1405]: lo: Link UP
Feb 13 15:11:12.671185 systemd-networkd[1405]: lo: Gained carrier
Feb 13 15:11:12.672295 systemd-networkd[1405]: Enumeration completed
Feb 13 15:11:12.672818 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:11:12.672951 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:11:12.673031 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:11:12.673790 systemd-networkd[1405]: eth0: Link UP
Feb 13 15:11:12.673855 systemd-networkd[1405]: eth0: Gained carrier
Feb 13 15:11:12.673921 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:11:12.675076 systemd[1]: Reached target network.target - Network.
Feb 13 15:11:12.683331 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 15:11:12.685763 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:11:12.688043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:11:12.689543 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:11:12.691524 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:11:12.693309 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection.
Feb 13 15:11:12.694065 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:11:12.694109 systemd-timesyncd[1407]: Initial clock synchronization to Thu 2025-02-13 15:11:12.688661 UTC.
Feb 13 15:11:12.696473 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:11:12.704477 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 15:11:12.719093 lvm[1446]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:11:12.744440 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:11:12.753756 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:11:12.755293 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:11:12.758309 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:11:12.759447 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:11:12.760717 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:11:12.762155 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:11:12.763299 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:11:12.764565 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:11:12.765844 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:11:12.765883 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:11:12.766804 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:11:12.768737 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:11:12.771204 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:11:12.774376 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 15:11:12.775864 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 15:11:12.777146 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 15:11:12.780331 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:11:12.781780 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 15:11:12.784244 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:11:12.785940 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:11:12.787155 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:11:12.788101 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:11:12.789096 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:11:12.789128 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:11:12.790180 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:11:12.792154 lvm[1456]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:11:12.792266 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:11:12.795393 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:11:12.801483 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:11:12.802700 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:11:12.804217 jq[1459]: false
Feb 13 15:11:12.803888 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:11:12.806881 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:11:12.810307 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:11:12.814324 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:11:12.819383 extend-filesystems[1460]: Found loop3
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found loop4
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found loop5
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found vda
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found vda1
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found vda2
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found vda3
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found usr
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found vda4
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found vda6
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found vda7
Feb 13 15:11:12.823119 extend-filesystems[1460]: Found vda9
Feb 13 15:11:12.823119 extend-filesystems[1460]: Checking size of /dev/vda9
Feb 13 15:11:12.819499 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:11:12.830228 dbus-daemon[1458]: [system] SELinux support is enabled
Feb 13 15:11:12.823090 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:11:12.823635 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:11:12.825326 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:11:12.843825 jq[1478]: true
Feb 13 15:11:12.831363 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:11:12.833283 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:11:12.838209 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:11:12.842532 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:11:12.842707 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:11:12.842973 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:11:12.843130 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:11:12.848515 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:11:12.848717 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:11:12.866174 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1359)
Feb 13 15:11:12.867121 jq[1482]: true
Feb 13 15:11:12.867409 (ntainerd)[1484]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:11:12.872294 extend-filesystems[1460]: Resized partition /dev/vda9
Feb 13 15:11:12.874491 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:11:12.885013 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:11:12.885047 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:11:12.887307 update_engine[1472]: I20250213 15:11:12.885653  1472 main.cc:92] Flatcar Update Engine starting
Feb 13 15:11:12.888482 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:11:12.888507 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:11:12.892169 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:11:12.892360 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:11:12.897874 update_engine[1472]: I20250213 15:11:12.897814  1472 update_check_scheduler.cc:74] Next update check in 3m34s
Feb 13 15:11:12.907325 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:11:12.909063 tar[1481]: linux-arm64/LICENSE
Feb 13 15:11:12.909063 tar[1481]: linux-arm64/helm
Feb 13 15:11:12.943756 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:11:12.946482 systemd-logind[1467]: New seat seat0.
Feb 13 15:11:12.953465 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:11:12.953882 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:11:12.970145 extend-filesystems[1494]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:11:12.970145 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:11:12.970145 extend-filesystems[1494]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:11:12.979985 extend-filesystems[1460]: Resized filesystem in /dev/vda9
Feb 13 15:11:12.971928 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:11:12.972124 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:11:12.985269 bash[1512]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:11:12.986363 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:11:12.990426 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:11:12.996418 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:11:13.108764 containerd[1484]: time="2025-02-13T15:11:13.108371379Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:11:13.136091 containerd[1484]: time="2025-02-13T15:11:13.135981741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:11:13.137923 containerd[1484]: time="2025-02-13T15:11:13.137869187Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:11:13.137923 containerd[1484]: time="2025-02-13T15:11:13.137914220Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:11:13.138010 containerd[1484]: time="2025-02-13T15:11:13.137934416Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:11:13.138119 containerd[1484]: time="2025-02-13T15:11:13.138098989Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:11:13.138159 containerd[1484]: time="2025-02-13T15:11:13.138122865Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138220 containerd[1484]: time="2025-02-13T15:11:13.138203492Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138250 containerd[1484]: time="2025-02-13T15:11:13.138221688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138430 containerd[1484]: time="2025-02-13T15:11:13.138412857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138451 containerd[1484]: time="2025-02-13T15:11:13.138435013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138469 containerd[1484]: time="2025-02-13T15:11:13.138448451Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138469 containerd[1484]: time="2025-02-13T15:11:13.138457889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138542 containerd[1484]: time="2025-02-13T15:11:13.138528357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138741 containerd[1484]: time="2025-02-13T15:11:13.138726204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138858 containerd[1484]: time="2025-02-13T15:11:13.138843665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:11:13.138885 containerd[1484]: time="2025-02-13T15:11:13.138859822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:11:13.138962 containerd[1484]: time="2025-02-13T15:11:13.138947408Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:11:13.139016 containerd[1484]: time="2025-02-13T15:11:13.139004398Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:11:13.149159 containerd[1484]: time="2025-02-13T15:11:13.149097478Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149374751Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149399747Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149416944Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149442900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149603553Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149828236Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149942737Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149963573Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149978331Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.149992129Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.150004007Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.150016125Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.150030042Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:11:13.150177 containerd[1484]: time="2025-02-13T15:11:13.150050119Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:11:13.150450 containerd[1484]: time="2025-02-13T15:11:13.150064157Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:11:13.150450 containerd[1484]: time="2025-02-13T15:11:13.150078634Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:11:13.150450 containerd[1484]: time="2025-02-13T15:11:13.150090992Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:11:13.150450 containerd[1484]: time="2025-02-13T15:11:13.150112109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.150450 containerd[1484]: time="2025-02-13T15:11:13.150126426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.150726 containerd[1484]: time="2025-02-13T15:11:13.150706370Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.150865 containerd[1484]: time="2025-02-13T15:11:13.150849946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.150998 containerd[1484]: time="2025-02-13T15:11:13.150938491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151064 containerd[1484]: time="2025-02-13T15:11:13.151051432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151117 containerd[1484]: time="2025-02-13T15:11:13.151105703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151259 containerd[1484]: time="2025-02-13T15:11:13.151189369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151327 containerd[1484]: time="2025-02-13T15:11:13.151312549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151433 containerd[1484]: time="2025-02-13T15:11:13.151418451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151509 containerd[1484]: time="2025-02-13T15:11:13.151495438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151828 containerd[1484]: time="2025-02-13T15:11:13.151593982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151828 containerd[1484]: time="2025-02-13T15:11:13.151612339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151828 containerd[1484]: time="2025-02-13T15:11:13.151629016Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:11:13.151828 containerd[1484]: time="2025-02-13T15:11:13.151653092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151828 containerd[1484]: time="2025-02-13T15:11:13.151665650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.151828 containerd[1484]: time="2025-02-13T15:11:13.151675768Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:11:13.154101 containerd[1484]: time="2025-02-13T15:11:13.152191362Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:11:13.154101 containerd[1484]: time="2025-02-13T15:11:13.152221238Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:11:13.154342 containerd[1484]: time="2025-02-13T15:11:13.154319448Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:11:13.154414 containerd[1484]: time="2025-02-13T15:11:13.154395555Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:11:13.154532 containerd[1484]: time="2025-02-13T15:11:13.154516975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.154594 containerd[1484]: time="2025-02-13T15:11:13.154582244Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:11:13.154712 containerd[1484]: time="2025-02-13T15:11:13.154696745Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:11:13.154836 containerd[1484]: time="2025-02-13T15:11:13.154820085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:11:13.157072 containerd[1484]: time="2025-02-13T15:11:13.156039722Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:11:13.157289 containerd[1484]: time="2025-02-13T15:11:13.157082868Z" level=info msg="Connect containerd service"
Feb 13 15:11:13.157289 containerd[1484]: time="2025-02-13T15:11:13.157174733Z" level=info msg="using legacy CRI server"
Feb 13 15:11:13.157289 containerd[1484]: time="2025-02-13T15:11:13.157190290Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:11:13.157613 containerd[1484]: time="2025-02-13T15:11:13.157591543Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:11:13.158319 containerd[1484]: time="2025-02-13T15:11:13.158292027Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:11:13.158981 containerd[1484]: time="2025-02-13T15:11:13.158712957Z" level=info msg="Start subscribing containerd event"
Feb 13 15:11:13.158981 containerd[1484]: time="2025-02-13T15:11:13.158784345Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:11:13.158981 containerd[1484]: time="2025-02-13T15:11:13.158912963Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:11:13.158981 containerd[1484]: time="2025-02-13T15:11:13.158848174Z" level=info msg="Start recovering state"
Feb 13 15:11:13.159345 containerd[1484]: time="2025-02-13T15:11:13.158995470Z" level=info msg="Start event monitor"
Feb 13 15:11:13.159345 containerd[1484]: time="2025-02-13T15:11:13.159008907Z" level=info msg="Start snapshots syncer"
Feb 13 15:11:13.159345 containerd[1484]: time="2025-02-13T15:11:13.159017306Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:11:13.159345 containerd[1484]: time="2025-02-13T15:11:13.159023825Z" level=info msg="Start streaming server"
Feb 13 15:11:13.159345 containerd[1484]: time="2025-02-13T15:11:13.159168841Z" level=info msg="containerd successfully booted in 0.051789s"
Feb 13 15:11:13.159253 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:11:13.316323 tar[1481]: linux-arm64/README.md
Feb 13 15:11:13.333797 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:11:13.458937 sshd_keygen[1477]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:11:13.476331 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:11:13.492402 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:11:13.497072 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:11:13.497276 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:11:13.500408 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:11:13.510733 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:11:13.513429 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:11:13.515489 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 15:11:13.516754 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:11:13.779256 systemd-networkd[1405]: eth0: Gained IPv6LL
Feb 13 15:11:13.781886 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:11:13.783636 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:11:13.798442 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:11:13.800970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:11:13.803097 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:11:13.817277 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:11:13.817509 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:11:13.819475 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:11:13.822280 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:11:14.386206 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:14.387772 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:11:14.391351 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:11:14.394525 systemd[1]: Startup finished in 576ms (kernel) + 5.296s (initrd) + 3.413s (userspace) = 9.286s.
Feb 13 15:11:14.860809 kubelet[1571]: E0213 15:11:14.860693    1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:11:14.863418 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:11:14.863569 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:11:14.863889 systemd[1]: kubelet.service: Consumed 794ms CPU time, 251.2M memory peak.
Feb 13 15:11:18.219024 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:11:18.220315 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:34330.service - OpenSSH per-connection server daemon (10.0.0.1:34330).
Feb 13 15:11:18.280847 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 34330 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:11:18.283018 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:18.294519 systemd-logind[1467]: New session 1 of user core.
Feb 13 15:11:18.295484 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:11:18.304408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:11:18.314196 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:11:18.318439 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:11:18.324651 (systemd)[1589]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:11:18.326742 systemd-logind[1467]: New session c1 of user core.
Feb 13 15:11:18.449382 systemd[1589]: Queued start job for default target default.target.
Feb 13 15:11:18.458071 systemd[1589]: Created slice app.slice - User Application Slice.
Feb 13 15:11:18.458102 systemd[1589]: Reached target paths.target - Paths.
Feb 13 15:11:18.458162 systemd[1589]: Reached target timers.target - Timers.
Feb 13 15:11:18.459370 systemd[1589]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:11:18.468736 systemd[1589]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:11:18.468807 systemd[1589]: Reached target sockets.target - Sockets.
Feb 13 15:11:18.468850 systemd[1589]: Reached target basic.target - Basic System.
Feb 13 15:11:18.468879 systemd[1589]: Reached target default.target - Main User Target.
Feb 13 15:11:18.468905 systemd[1589]: Startup finished in 132ms.
Feb 13 15:11:18.469055 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:11:18.470569 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:11:18.526182 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:34334.service - OpenSSH per-connection server daemon (10.0.0.1:34334).
Feb 13 15:11:18.564857 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 34334 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:11:18.566124 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:18.569916 systemd-logind[1467]: New session 2 of user core.
Feb 13 15:11:18.579310 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:11:18.630568 sshd[1602]: Connection closed by 10.0.0.1 port 34334
Feb 13 15:11:18.630987 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:18.639975 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:34334.service: Deactivated successfully.
Feb 13 15:11:18.641333 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:11:18.642456 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:11:18.644391 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:34348.service - OpenSSH per-connection server daemon (10.0.0.1:34348).
Feb 13 15:11:18.645154 systemd-logind[1467]: Removed session 2.
Feb 13 15:11:18.682384 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 34348 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:11:18.683566 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:18.687674 systemd-logind[1467]: New session 3 of user core.
Feb 13 15:11:18.699302 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:11:18.746462 sshd[1610]: Connection closed by 10.0.0.1 port 34348
Feb 13 15:11:18.746782 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:18.768660 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:34348.service: Deactivated successfully.
Feb 13 15:11:18.771676 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:11:18.772483 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:11:18.785476 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:34352.service - OpenSSH per-connection server daemon (10.0.0.1:34352).
Feb 13 15:11:18.786298 systemd-logind[1467]: Removed session 3.
Feb 13 15:11:18.821776 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 34352 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:11:18.822976 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:18.827190 systemd-logind[1467]: New session 4 of user core.
Feb 13 15:11:18.839271 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:11:18.889574 sshd[1618]: Connection closed by 10.0.0.1 port 34352
Feb 13 15:11:18.890016 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:18.902532 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:34352.service: Deactivated successfully.
Feb 13 15:11:18.905473 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:11:18.906171 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:11:18.907718 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:34360.service - OpenSSH per-connection server daemon (10.0.0.1:34360).
Feb 13 15:11:18.913494 systemd-logind[1467]: Removed session 4.
Feb 13 15:11:18.947731 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 34360 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:11:18.948987 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:18.953043 systemd-logind[1467]: New session 5 of user core.
Feb 13 15:11:18.966301 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:11:19.027197 sudo[1627]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:11:19.027460 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:11:19.052122 sudo[1627]: pam_unix(sudo:session): session closed for user root
Feb 13 15:11:19.053729 sshd[1626]: Connection closed by 10.0.0.1 port 34360
Feb 13 15:11:19.054149 sshd-session[1623]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:19.064231 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:34360.service: Deactivated successfully.
Feb 13 15:11:19.065815 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:11:19.066508 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:11:19.076503 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:34372.service - OpenSSH per-connection server daemon (10.0.0.1:34372).
Feb 13 15:11:19.077458 systemd-logind[1467]: Removed session 5.
Feb 13 15:11:19.114716 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 34372 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:11:19.116025 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:19.119845 systemd-logind[1467]: New session 6 of user core.
Feb 13 15:11:19.130323 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:11:19.181035 sudo[1637]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:11:19.181320 sudo[1637]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:11:19.184326 sudo[1637]: pam_unix(sudo:session): session closed for user root
Feb 13 15:11:19.188714 sudo[1636]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:11:19.188989 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:11:19.204559 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:11:19.226988 augenrules[1659]: No rules
Feb 13 15:11:19.228151 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:11:19.229252 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:11:19.230174 sudo[1636]: pam_unix(sudo:session): session closed for user root
Feb 13 15:11:19.231349 sshd[1635]: Connection closed by 10.0.0.1 port 34372
Feb 13 15:11:19.232292 sshd-session[1632]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:19.249358 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:34372.service: Deactivated successfully.
Feb 13 15:11:19.250734 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:11:19.251401 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:11:19.258398 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:34384.service - OpenSSH per-connection server daemon (10.0.0.1:34384).
Feb 13 15:11:19.259380 systemd-logind[1467]: Removed session 6.
Feb 13 15:11:19.292500 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 34384 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:11:19.293702 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:11:19.297751 systemd-logind[1467]: New session 7 of user core.
Feb 13 15:11:19.306281 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:11:19.356156 sudo[1671]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:11:19.356692 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:11:19.708366 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:11:19.708458 (dockerd)[1690]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:11:19.978280 dockerd[1690]: time="2025-02-13T15:11:19.978125090Z" level=info msg="Starting up"
Feb 13 15:11:20.173875 dockerd[1690]: time="2025-02-13T15:11:20.173814659Z" level=info msg="Loading containers: start."
Feb 13 15:11:20.327163 kernel: Initializing XFRM netlink socket
Feb 13 15:11:20.400493 systemd-networkd[1405]: docker0: Link UP
Feb 13 15:11:20.430425 dockerd[1690]: time="2025-02-13T15:11:20.430327141Z" level=info msg="Loading containers: done."
Feb 13 15:11:20.445334 dockerd[1690]: time="2025-02-13T15:11:20.445281467Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:11:20.445482 dockerd[1690]: time="2025-02-13T15:11:20.445383373Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 15:11:20.445585 dockerd[1690]: time="2025-02-13T15:11:20.445562989Z" level=info msg="Daemon has completed initialization"
Feb 13 15:11:20.474515 dockerd[1690]: time="2025-02-13T15:11:20.474454098Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:11:20.474671 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:11:20.980788 containerd[1484]: time="2025-02-13T15:11:20.980730042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\""
Feb 13 15:11:21.738234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2773696032.mount: Deactivated successfully.
Feb 13 15:11:22.874612 containerd[1484]: time="2025-02-13T15:11:22.874561591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:22.875571 containerd[1484]: time="2025-02-13T15:11:22.875290300Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=26218238"
Feb 13 15:11:22.876289 containerd[1484]: time="2025-02-13T15:11:22.876230662Z" level=info msg="ImageCreate event name:\"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:22.880216 containerd[1484]: time="2025-02-13T15:11:22.880164450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:22.884118 containerd[1484]: time="2025-02-13T15:11:22.884057403Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"26215036\" in 1.903283247s"
Feb 13 15:11:22.884118 containerd[1484]: time="2025-02-13T15:11:22.884104357Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:6417e1437b6d9a789e1ca789695a574e1df00a632bdbfbcae9695c9a7d500e32\""
Feb 13 15:11:22.884924 containerd[1484]: time="2025-02-13T15:11:22.884886659Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\""
Feb 13 15:11:24.169727 containerd[1484]: time="2025-02-13T15:11:24.169680609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:24.171288 containerd[1484]: time="2025-02-13T15:11:24.171236706Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=22528147"
Feb 13 15:11:24.173835 containerd[1484]: time="2025-02-13T15:11:24.172376133Z" level=info msg="ImageCreate event name:\"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:24.175913 containerd[1484]: time="2025-02-13T15:11:24.175862123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:24.176630 containerd[1484]: time="2025-02-13T15:11:24.176596077Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"23968941\" in 1.291674502s"
Feb 13 15:11:24.176630 containerd[1484]: time="2025-02-13T15:11:24.176627673Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:3c9285acfd2ff7915bb451cc40ac060366ac519f3fef00c455f5aca0e0346c4d\""
Feb 13 15:11:24.177558 containerd[1484]: time="2025-02-13T15:11:24.177187528Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\""
Feb 13 15:11:25.113984 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:11:25.128433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:11:25.230030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:25.234117 (kubelet)[1953]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:11:25.326598 kubelet[1953]: E0213 15:11:25.326546    1953 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:11:25.329394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:11:25.329529 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:11:25.329803 systemd[1]: kubelet.service: Consumed 142ms CPU time, 104.7M memory peak.
Feb 13 15:11:25.344897 containerd[1484]: time="2025-02-13T15:11:25.344845246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:25.345399 containerd[1484]: time="2025-02-13T15:11:25.345352428Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=17480802"
Feb 13 15:11:25.346341 containerd[1484]: time="2025-02-13T15:11:25.346310119Z" level=info msg="ImageCreate event name:\"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:25.350001 containerd[1484]: time="2025-02-13T15:11:25.349955905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:25.351029 containerd[1484]: time="2025-02-13T15:11:25.350766732Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"18921614\" in 1.173544329s"
Feb 13 15:11:25.351029 containerd[1484]: time="2025-02-13T15:11:25.350811447Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:82dfa03f692fb5d84f66c17d6ee9126b081182152b25d28ea456d89b7d5d8911\""
Feb 13 15:11:25.351339 containerd[1484]: time="2025-02-13T15:11:25.351308911Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 15:11:26.397361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113371442.mount: Deactivated successfully.
Feb 13 15:11:26.761647 containerd[1484]: time="2025-02-13T15:11:26.761397379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:26.763170 containerd[1484]: time="2025-02-13T15:11:26.763092193Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384"
Feb 13 15:11:26.764195 containerd[1484]: time="2025-02-13T15:11:26.764134398Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:26.770485 containerd[1484]: time="2025-02-13T15:11:26.770425305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:26.771506 containerd[1484]: time="2025-02-13T15:11:26.771460190Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.420111404s"
Feb 13 15:11:26.771506 containerd[1484]: time="2025-02-13T15:11:26.771495387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\""
Feb 13 15:11:26.772396 containerd[1484]: time="2025-02-13T15:11:26.772349492Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Feb 13 15:11:27.492117 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1143882018.mount: Deactivated successfully.
Feb 13 15:11:28.248411 containerd[1484]: time="2025-02-13T15:11:28.248359221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:28.248992 containerd[1484]: time="2025-02-13T15:11:28.248947280Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Feb 13 15:11:28.250399 containerd[1484]: time="2025-02-13T15:11:28.250364333Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:28.253428 containerd[1484]: time="2025-02-13T15:11:28.253396060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:28.254801 containerd[1484]: time="2025-02-13T15:11:28.254758639Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.482352473s"
Feb 13 15:11:28.254801 containerd[1484]: time="2025-02-13T15:11:28.254798835Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Feb 13 15:11:28.255416 containerd[1484]: time="2025-02-13T15:11:28.255382614Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 15:11:28.779112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount348458460.mount: Deactivated successfully.
Feb 13 15:11:28.783368 containerd[1484]: time="2025-02-13T15:11:28.783316099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:28.784062 containerd[1484]: time="2025-02-13T15:11:28.784016707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Feb 13 15:11:28.784796 containerd[1484]: time="2025-02-13T15:11:28.784758150Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:28.787370 containerd[1484]: time="2025-02-13T15:11:28.787332364Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:28.788320 containerd[1484]: time="2025-02-13T15:11:28.788232111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 532.81466ms"
Feb 13 15:11:28.788320 containerd[1484]: time="2025-02-13T15:11:28.788266388Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Feb 13 15:11:28.788857 containerd[1484]: time="2025-02-13T15:11:28.788812611Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Feb 13 15:11:29.511249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2160153118.mount: Deactivated successfully.
Feb 13 15:11:31.023738 containerd[1484]: time="2025-02-13T15:11:31.023677487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:31.025469 containerd[1484]: time="2025-02-13T15:11:31.025405804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812431"
Feb 13 15:11:31.026722 containerd[1484]: time="2025-02-13T15:11:31.026668846Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:31.029926 containerd[1484]: time="2025-02-13T15:11:31.029870785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:31.031466 containerd[1484]: time="2025-02-13T15:11:31.031329047Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.24247752s"
Feb 13 15:11:31.031466 containerd[1484]: time="2025-02-13T15:11:31.031372323Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Feb 13 15:11:35.579994 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:11:35.588347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:11:35.680918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:35.684483 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:11:35.716289 kubelet[2111]: E0213 15:11:35.716197    2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:11:35.718710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:11:35.718853 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:11:35.719150 systemd[1]: kubelet.service: Consumed 123ms CPU time, 102M memory peak.
Feb 13 15:11:35.967420 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:35.967562 systemd[1]: kubelet.service: Consumed 123ms CPU time, 102M memory peak.
Feb 13 15:11:35.977395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:11:36.003688 systemd[1]: Reload requested from client PID 2127 ('systemctl') (unit session-7.scope)...
Feb 13 15:11:36.003705 systemd[1]: Reloading...
Feb 13 15:11:36.073183 zram_generator::config[2171]: No configuration found.
Feb 13 15:11:36.348542 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:11:36.420494 systemd[1]: Reloading finished in 416 ms.
Feb 13 15:11:36.456398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:36.458902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:11:36.460431 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:11:36.460635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:36.460679 systemd[1]: kubelet.service: Consumed 82ms CPU time, 90.1M memory peak.
Feb 13 15:11:36.462133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:11:36.566221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:36.570650 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:11:36.609941 kubelet[2218]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:11:36.609941 kubelet[2218]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:11:36.609941 kubelet[2218]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:11:36.610338 kubelet[2218]: I0213 15:11:36.609901    2218 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:11:37.080201 kubelet[2218]: I0213 15:11:37.080078    2218 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 15:11:37.080201 kubelet[2218]: I0213 15:11:37.080113    2218 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:11:37.080466 kubelet[2218]: I0213 15:11:37.080429    2218 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 15:11:37.129945 kubelet[2218]: E0213 15:11:37.129771    2218 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:37.131815 kubelet[2218]: I0213 15:11:37.131792    2218 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:11:37.137999 kubelet[2218]: E0213 15:11:37.137955    2218 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:11:37.137999 kubelet[2218]: I0213 15:11:37.137986    2218 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:11:37.140992 kubelet[2218]: I0213 15:11:37.140963    2218 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:11:37.141655 kubelet[2218]: I0213 15:11:37.141608    2218 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:11:37.141840 kubelet[2218]: I0213 15:11:37.141653    2218 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:11:37.141982 kubelet[2218]: I0213 15:11:37.141963    2218 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:11:37.141982 kubelet[2218]: I0213 15:11:37.141977    2218 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 15:11:37.142278 kubelet[2218]: I0213 15:11:37.142257    2218 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:11:37.146603 kubelet[2218]: I0213 15:11:37.146574    2218 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 15:11:37.146643 kubelet[2218]: I0213 15:11:37.146610    2218 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:11:37.146643 kubelet[2218]: I0213 15:11:37.146633    2218 kubelet.go:352] "Adding apiserver pod source"
Feb 13 15:11:37.147521 kubelet[2218]: I0213 15:11:37.146644    2218 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:11:37.147521 kubelet[2218]: W0213 15:11:37.147324    2218 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Feb 13 15:11:37.147521 kubelet[2218]: E0213 15:11:37.147379    2218 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:37.147592 kubelet[2218]: W0213 15:11:37.147516    2218 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Feb 13 15:11:37.147592 kubelet[2218]: E0213 15:11:37.147559    2218 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:37.149263 kubelet[2218]: I0213 15:11:37.149197    2218 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:11:37.149840 kubelet[2218]: I0213 15:11:37.149817    2218 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:11:37.150030 kubelet[2218]: W0213 15:11:37.150016    2218 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:11:37.151015 kubelet[2218]: I0213 15:11:37.150990    2218 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 15:11:37.151068 kubelet[2218]: I0213 15:11:37.151027    2218 server.go:1287] "Started kubelet"
Feb 13 15:11:37.152044 kubelet[2218]: I0213 15:11:37.151134    2218 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:11:37.153336 kubelet[2218]: I0213 15:11:37.153293    2218 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 15:11:37.158953 kubelet[2218]: I0213 15:11:37.158602    2218 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:11:37.159240 kubelet[2218]: I0213 15:11:37.159216    2218 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:11:37.160075 kubelet[2218]: E0213 15:11:37.157255    2218 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd3254dd806f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:11:37.151004783 +0000 UTC m=+0.577314268,LastTimestamp:2025-02-13 15:11:37.151004783 +0000 UTC m=+0.577314268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:11:37.160720 kubelet[2218]: I0213 15:11:37.160661    2218 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:11:37.161033 kubelet[2218]: I0213 15:11:37.161010    2218 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:11:37.161924 kubelet[2218]: W0213 15:11:37.161785    2218 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Feb 13 15:11:37.161924 kubelet[2218]: E0213 15:11:37.161889    2218 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:37.161924 kubelet[2218]: I0213 15:11:37.161047    2218 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:11:37.162234 kubelet[2218]: I0213 15:11:37.162074    2218 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:11:37.162234 kubelet[2218]: I0213 15:11:37.162185    2218 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:11:37.162234 kubelet[2218]: E0213 15:11:37.161009    2218 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:11:37.162448 kubelet[2218]: I0213 15:11:37.161038    2218 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 15:11:37.162652 kubelet[2218]: I0213 15:11:37.162635    2218 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:11:37.163053 kubelet[2218]: E0213 15:11:37.163032    2218 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:11:37.163372 kubelet[2218]: E0213 15:11:37.163340    2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms"
Feb 13 15:11:37.163716 kubelet[2218]: I0213 15:11:37.163699    2218 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:11:37.173274 kubelet[2218]: I0213 15:11:37.173250    2218 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 15:11:37.173274 kubelet[2218]: I0213 15:11:37.173267    2218 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 15:11:37.173371 kubelet[2218]: I0213 15:11:37.173285    2218 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:11:37.176668 kubelet[2218]: I0213 15:11:37.176604    2218 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:11:37.177890 kubelet[2218]: I0213 15:11:37.177731    2218 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:11:37.177890 kubelet[2218]: I0213 15:11:37.177756    2218 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 15:11:37.177890 kubelet[2218]: I0213 15:11:37.177776    2218 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 15:11:37.177890 kubelet[2218]: I0213 15:11:37.177782    2218 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 15:11:37.177890 kubelet[2218]: E0213 15:11:37.177833    2218 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:11:37.178415 kubelet[2218]: W0213 15:11:37.178327    2218 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Feb 13 15:11:37.178484 kubelet[2218]: E0213 15:11:37.178423    2218 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:37.256997 kubelet[2218]: I0213 15:11:37.256932    2218 policy_none.go:49] "None policy: Start"
Feb 13 15:11:37.256997 kubelet[2218]: I0213 15:11:37.256966    2218 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 15:11:37.256997 kubelet[2218]: I0213 15:11:37.256980    2218 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:11:37.261773 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:11:37.262374 kubelet[2218]: E0213 15:11:37.262334    2218 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:11:37.273826 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:11:37.276783 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:11:37.278631 kubelet[2218]: E0213 15:11:37.278585    2218 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:11:37.284077 kubelet[2218]: I0213 15:11:37.283868    2218 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:11:37.284077 kubelet[2218]: I0213 15:11:37.284071    2218 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:11:37.284184 kubelet[2218]: I0213 15:11:37.284083    2218 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:11:37.284336 kubelet[2218]: I0213 15:11:37.284318    2218 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:11:37.285204 kubelet[2218]: E0213 15:11:37.285122    2218 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 15:11:37.285372 kubelet[2218]: E0213 15:11:37.285222    2218 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 15:11:37.363989 kubelet[2218]: E0213 15:11:37.363834    2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms"
Feb 13 15:11:37.385887 kubelet[2218]: I0213 15:11:37.385831    2218 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 15:11:37.386339 kubelet[2218]: E0213 15:11:37.386307    2218 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Feb 13 15:11:37.486252 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice.
Feb 13 15:11:37.499047 kubelet[2218]: E0213 15:11:37.499005    2218 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:11:37.501064 systemd[1]: Created slice kubepods-burstable-pod19c5a427adfbe39ebb4300658e3fbc01.slice - libcontainer container kubepods-burstable-pod19c5a427adfbe39ebb4300658e3fbc01.slice.
Feb 13 15:11:37.502360 kubelet[2218]: E0213 15:11:37.502341    2218 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:11:37.513524 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice.
Feb 13 15:11:37.514975 kubelet[2218]: E0213 15:11:37.514956    2218 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:11:37.564549 kubelet[2218]: I0213 15:11:37.564513    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19c5a427adfbe39ebb4300658e3fbc01-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19c5a427adfbe39ebb4300658e3fbc01\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:37.564549 kubelet[2218]: I0213 15:11:37.564550    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:37.564549 kubelet[2218]: I0213 15:11:37.564568    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:37.564722 kubelet[2218]: I0213 15:11:37.564584    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:37.564722 kubelet[2218]: I0213 15:11:37.564601    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19c5a427adfbe39ebb4300658e3fbc01-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19c5a427adfbe39ebb4300658e3fbc01\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:37.564722 kubelet[2218]: I0213 15:11:37.564615    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:37.564722 kubelet[2218]: I0213 15:11:37.564629    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:37.564722 kubelet[2218]: I0213 15:11:37.564643    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:11:37.564819 kubelet[2218]: I0213 15:11:37.564661    2218 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19c5a427adfbe39ebb4300658e3fbc01-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19c5a427adfbe39ebb4300658e3fbc01\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:37.587709 kubelet[2218]: I0213 15:11:37.587645    2218 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 15:11:37.588025 kubelet[2218]: E0213 15:11:37.587989    2218 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Feb 13 15:11:37.656770 kubelet[2218]: E0213 15:11:37.656567    2218 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cd3254dd806f  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:11:37.151004783 +0000 UTC m=+0.577314268,LastTimestamp:2025-02-13 15:11:37.151004783 +0000 UTC m=+0.577314268,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:11:37.764805 kubelet[2218]: E0213 15:11:37.764758    2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms"
Feb 13 15:11:37.800849 containerd[1484]: time="2025-02-13T15:11:37.800802288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}"
Feb 13 15:11:37.804978 containerd[1484]: time="2025-02-13T15:11:37.804780339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19c5a427adfbe39ebb4300658e3fbc01,Namespace:kube-system,Attempt:0,}"
Feb 13 15:11:37.816347 containerd[1484]: time="2025-02-13T15:11:37.816046024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}"
Feb 13 15:11:37.989515 kubelet[2218]: I0213 15:11:37.989362    2218 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 15:11:37.989938 kubelet[2218]: E0213 15:11:37.989657    2218 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Feb 13 15:11:38.117921 kubelet[2218]: W0213 15:11:38.117780    2218 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Feb 13 15:11:38.117921 kubelet[2218]: E0213 15:11:38.117859    2218 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:38.293235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1645902153.mount: Deactivated successfully.
Feb 13 15:11:38.302276 containerd[1484]: time="2025-02-13T15:11:38.302212818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:11:38.304971 containerd[1484]: time="2025-02-13T15:11:38.304918255Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 15:11:38.308105 containerd[1484]: time="2025-02-13T15:11:38.308047899Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:11:38.309829 containerd[1484]: time="2025-02-13T15:11:38.309775329Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:11:38.310852 containerd[1484]: time="2025-02-13T15:11:38.310821410Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:11:38.313252 containerd[1484]: time="2025-02-13T15:11:38.313209310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:11:38.313867 containerd[1484]: time="2025-02-13T15:11:38.313826504Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:11:38.315683 containerd[1484]: time="2025-02-13T15:11:38.315634328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:11:38.316552 containerd[1484]: time="2025-02-13T15:11:38.316515302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.654728ms"
Feb 13 15:11:38.318261 containerd[1484]: time="2025-02-13T15:11:38.318221493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.104755ms"
Feb 13 15:11:38.321475 containerd[1484]: time="2025-02-13T15:11:38.321428492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.53697ms"
Feb 13 15:11:38.459912 containerd[1484]: time="2025-02-13T15:11:38.459798635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:11:38.459912 containerd[1484]: time="2025-02-13T15:11:38.459871110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:11:38.459912 containerd[1484]: time="2025-02-13T15:11:38.459882509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:38.460166 containerd[1484]: time="2025-02-13T15:11:38.459957463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:38.461043 containerd[1484]: time="2025-02-13T15:11:38.460674529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:11:38.461043 containerd[1484]: time="2025-02-13T15:11:38.460741244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:11:38.461043 containerd[1484]: time="2025-02-13T15:11:38.460756963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:38.461043 containerd[1484]: time="2025-02-13T15:11:38.460912031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:38.466411 containerd[1484]: time="2025-02-13T15:11:38.466226911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:11:38.466411 containerd[1484]: time="2025-02-13T15:11:38.466283867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:11:38.466411 containerd[1484]: time="2025-02-13T15:11:38.466295106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:38.466411 containerd[1484]: time="2025-02-13T15:11:38.466371740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:38.485370 systemd[1]: Started cri-containerd-79587d49c800aa48dc1578201c2484555b67d38867f8df9674ceb46c6f4adeaf.scope - libcontainer container 79587d49c800aa48dc1578201c2484555b67d38867f8df9674ceb46c6f4adeaf.
Feb 13 15:11:38.486632 systemd[1]: Started cri-containerd-e3f95cc33beb8930c80365962957d1dfdd25f202d16c54811aca3c457827c92a.scope - libcontainer container e3f95cc33beb8930c80365962957d1dfdd25f202d16c54811aca3c457827c92a.
Feb 13 15:11:38.491850 systemd[1]: Started cri-containerd-4ed4ba273f142bc8e0b1e95158e4b5f392c6bdc94344891433cac89d1da4def2.scope - libcontainer container 4ed4ba273f142bc8e0b1e95158e4b5f392c6bdc94344891433cac89d1da4def2.
Feb 13 15:11:38.502325 kubelet[2218]: W0213 15:11:38.502225    2218 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Feb 13 15:11:38.502325 kubelet[2218]: E0213 15:11:38.502296    2218 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:38.522234 containerd[1484]: time="2025-02-13T15:11:38.521585944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3f95cc33beb8930c80365962957d1dfdd25f202d16c54811aca3c457827c92a\""
Feb 13 15:11:38.528751 containerd[1484]: time="2025-02-13T15:11:38.528705768Z" level=info msg="CreateContainer within sandbox \"e3f95cc33beb8930c80365962957d1dfdd25f202d16c54811aca3c457827c92a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:11:38.533377 containerd[1484]: time="2025-02-13T15:11:38.533343419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"79587d49c800aa48dc1578201c2484555b67d38867f8df9674ceb46c6f4adeaf\""
Feb 13 15:11:38.535215 containerd[1484]: time="2025-02-13T15:11:38.535189440Z" level=info msg="CreateContainer within sandbox \"79587d49c800aa48dc1578201c2484555b67d38867f8df9674ceb46c6f4adeaf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:11:38.539045 containerd[1484]: time="2025-02-13T15:11:38.539001473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:19c5a427adfbe39ebb4300658e3fbc01,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ed4ba273f142bc8e0b1e95158e4b5f392c6bdc94344891433cac89d1da4def2\""
Feb 13 15:11:38.541569 containerd[1484]: time="2025-02-13T15:11:38.541540882Z" level=info msg="CreateContainer within sandbox \"4ed4ba273f142bc8e0b1e95158e4b5f392c6bdc94344891433cac89d1da4def2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:11:38.565377 kubelet[2218]: E0213 15:11:38.565255    2218 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s"
Feb 13 15:11:38.588322 containerd[1484]: time="2025-02-13T15:11:38.588252085Z" level=info msg="CreateContainer within sandbox \"e3f95cc33beb8930c80365962957d1dfdd25f202d16c54811aca3c457827c92a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"69820dfb853d6cfa76b7944e148050a2f9c98f7736b738792917917aeb82c740\""
Feb 13 15:11:38.588922 containerd[1484]: time="2025-02-13T15:11:38.588892197Z" level=info msg="StartContainer for \"69820dfb853d6cfa76b7944e148050a2f9c98f7736b738792917917aeb82c740\""
Feb 13 15:11:38.589807 containerd[1484]: time="2025-02-13T15:11:38.589637781Z" level=info msg="CreateContainer within sandbox \"79587d49c800aa48dc1578201c2484555b67d38867f8df9674ceb46c6f4adeaf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cdbd9600cdc74759c459502168ddadbaa5763ef5075661d18501b6a959547a00\""
Feb 13 15:11:38.591242 containerd[1484]: time="2025-02-13T15:11:38.590180340Z" level=info msg="StartContainer for \"cdbd9600cdc74759c459502168ddadbaa5763ef5075661d18501b6a959547a00\""
Feb 13 15:11:38.592097 containerd[1484]: time="2025-02-13T15:11:38.592055399Z" level=info msg="CreateContainer within sandbox \"4ed4ba273f142bc8e0b1e95158e4b5f392c6bdc94344891433cac89d1da4def2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"071a6d010b09e67bfadddf1b531ddce4566ffe83fc64262dddbbf6b6c3ac69df\""
Feb 13 15:11:38.592518 containerd[1484]: time="2025-02-13T15:11:38.592489326Z" level=info msg="StartContainer for \"071a6d010b09e67bfadddf1b531ddce4566ffe83fc64262dddbbf6b6c3ac69df\""
Feb 13 15:11:38.600120 kubelet[2218]: W0213 15:11:38.600009    2218 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Feb 13 15:11:38.600427 kubelet[2218]: E0213 15:11:38.600302    2218 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:38.609021 kubelet[2218]: W0213 15:11:38.608953    2218 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused
Feb 13 15:11:38.609319 kubelet[2218]: E0213 15:11:38.609296    2218 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.48:6443: connect: connection refused" logger="UnhandledError"
Feb 13 15:11:38.624365 systemd[1]: Started cri-containerd-071a6d010b09e67bfadddf1b531ddce4566ffe83fc64262dddbbf6b6c3ac69df.scope - libcontainer container 071a6d010b09e67bfadddf1b531ddce4566ffe83fc64262dddbbf6b6c3ac69df.
Feb 13 15:11:38.626480 systemd[1]: Started cri-containerd-69820dfb853d6cfa76b7944e148050a2f9c98f7736b738792917917aeb82c740.scope - libcontainer container 69820dfb853d6cfa76b7944e148050a2f9c98f7736b738792917917aeb82c740.
Feb 13 15:11:38.627451 systemd[1]: Started cri-containerd-cdbd9600cdc74759c459502168ddadbaa5763ef5075661d18501b6a959547a00.scope - libcontainer container cdbd9600cdc74759c459502168ddadbaa5763ef5075661d18501b6a959547a00.
Feb 13 15:11:38.675478 containerd[1484]: time="2025-02-13T15:11:38.675425403Z" level=info msg="StartContainer for \"cdbd9600cdc74759c459502168ddadbaa5763ef5075661d18501b6a959547a00\" returns successfully"
Feb 13 15:11:38.675764 containerd[1484]: time="2025-02-13T15:11:38.675723221Z" level=info msg="StartContainer for \"071a6d010b09e67bfadddf1b531ddce4566ffe83fc64262dddbbf6b6c3ac69df\" returns successfully"
Feb 13 15:11:38.675959 containerd[1484]: time="2025-02-13T15:11:38.675860130Z" level=info msg="StartContainer for \"69820dfb853d6cfa76b7944e148050a2f9c98f7736b738792917917aeb82c740\" returns successfully"
Feb 13 15:11:38.791376 kubelet[2218]: I0213 15:11:38.791271    2218 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 15:11:38.792040 kubelet[2218]: E0213 15:11:38.792012    2218 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost"
Feb 13 15:11:39.185288 kubelet[2218]: E0213 15:11:39.184990    2218 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:11:39.186755 kubelet[2218]: E0213 15:11:39.186497    2218 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:11:39.190074 kubelet[2218]: E0213 15:11:39.190053    2218 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:11:40.193176 kubelet[2218]: E0213 15:11:40.192284    2218 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:11:40.193176 kubelet[2218]: E0213 15:11:40.192446    2218 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Feb 13 15:11:40.397159 kubelet[2218]: I0213 15:11:40.393888    2218 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 15:11:40.474938 kubelet[2218]: E0213 15:11:40.474829    2218 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 15:11:40.535450 kubelet[2218]: I0213 15:11:40.535400    2218 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Feb 13 15:11:40.535450 kubelet[2218]: E0213 15:11:40.535439    2218 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Feb 13 15:11:40.538886 kubelet[2218]: E0213 15:11:40.538861    2218 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:11:40.640657 kubelet[2218]: E0213 15:11:40.639596    2218 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:11:40.663636 kubelet[2218]: I0213 15:11:40.663577    2218 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:40.675158 kubelet[2218]: E0213 15:11:40.674162    2218 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:40.675158 kubelet[2218]: I0213 15:11:40.674196    2218 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:40.676268 kubelet[2218]: E0213 15:11:40.676235    2218 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:40.676268 kubelet[2218]: I0213 15:11:40.676264    2218 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:11:40.678201 kubelet[2218]: E0213 15:11:40.678166    2218 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:11:41.148856 kubelet[2218]: I0213 15:11:41.148791    2218 apiserver.go:52] "Watching apiserver"
Feb 13 15:11:41.162551 kubelet[2218]: I0213 15:11:41.162517    2218 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:11:41.192490 kubelet[2218]: I0213 15:11:41.192449    2218 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:41.194783 kubelet[2218]: E0213 15:11:41.194598    2218 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:42.527324 systemd[1]: Reload requested from client PID 2499 ('systemctl') (unit session-7.scope)...
Feb 13 15:11:42.527341 systemd[1]: Reloading...
Feb 13 15:11:42.602219 zram_generator::config[2543]: No configuration found.
Feb 13 15:11:42.694889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:11:42.778158 systemd[1]: Reloading finished in 250 ms.
Feb 13 15:11:42.799247 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:11:42.813056 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:11:42.813327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:42.813387 systemd[1]: kubelet.service: Consumed 951ms CPU time, 123.5M memory peak.
Feb 13 15:11:42.824853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:11:42.926964 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:11:42.931023 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:11:42.978863 kubelet[2585]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:11:42.978863 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:11:42.978863 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:11:42.979229 kubelet[2585]: I0213 15:11:42.978905    2585 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:11:42.988725 kubelet[2585]: I0213 15:11:42.988682    2585 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 15:11:42.988725 kubelet[2585]: I0213 15:11:42.988714    2585 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:11:42.989008 kubelet[2585]: I0213 15:11:42.988981    2585 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 15:11:42.991301 kubelet[2585]: I0213 15:11:42.991272    2585 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:11:42.994396 kubelet[2585]: I0213 15:11:42.994364    2585 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:11:42.999442 kubelet[2585]: E0213 15:11:42.999411    2585 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:11:42.999442 kubelet[2585]: I0213 15:11:42.999442    2585 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:11:43.002078 kubelet[2585]: I0213 15:11:43.002034    2585 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:11:43.002899 kubelet[2585]: I0213 15:11:43.002832    2585 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:11:43.003932 kubelet[2585]: I0213 15:11:43.002867    2585 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:11:43.003932 kubelet[2585]: I0213 15:11:43.003936    2585 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:11:43.004069 kubelet[2585]: I0213 15:11:43.003946    2585 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 15:11:43.004069 kubelet[2585]: I0213 15:11:43.003997    2585 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:11:43.004320 kubelet[2585]: I0213 15:11:43.004304    2585 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 15:11:43.004355 kubelet[2585]: I0213 15:11:43.004325    2585 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:11:43.004355 kubelet[2585]: I0213 15:11:43.004349    2585 kubelet.go:352] "Adding apiserver pod source"
Feb 13 15:11:43.004404 kubelet[2585]: I0213 15:11:43.004359    2585 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:11:43.005641 kubelet[2585]: I0213 15:11:43.005621    2585 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:11:43.008337 kubelet[2585]: I0213 15:11:43.008264    2585 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:11:43.008788 kubelet[2585]: I0213 15:11:43.008773    2585 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 15:11:43.008828 kubelet[2585]: I0213 15:11:43.008805    2585 server.go:1287] "Started kubelet"
Feb 13 15:11:43.010421 kubelet[2585]: I0213 15:11:43.010219    2585 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:11:43.010508 kubelet[2585]: I0213 15:11:43.010483    2585 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:11:43.010585 kubelet[2585]: I0213 15:11:43.010539    2585 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:11:43.013447 kubelet[2585]: I0213 15:11:43.013341    2585 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 15:11:43.017028 kubelet[2585]: I0213 15:11:43.016999    2585 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:11:43.020171 kubelet[2585]: I0213 15:11:43.017257    2585 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:11:43.020171 kubelet[2585]: I0213 15:11:43.018353    2585 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 15:11:43.020171 kubelet[2585]: E0213 15:11:43.018476    2585 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:11:43.020171 kubelet[2585]: I0213 15:11:43.018668    2585 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:11:43.020171 kubelet[2585]: I0213 15:11:43.019295    2585 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:11:43.020171 kubelet[2585]: I0213 15:11:43.019742    2585 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:11:43.020171 kubelet[2585]: I0213 15:11:43.020155    2585 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:11:43.030171 kubelet[2585]: E0213 15:11:43.028522    2585 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:11:43.030171 kubelet[2585]: I0213 15:11:43.029839    2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:11:43.030869 kubelet[2585]: I0213 15:11:43.030841    2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:11:43.030869 kubelet[2585]: I0213 15:11:43.030867    2585 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 15:11:43.030960 kubelet[2585]: I0213 15:11:43.030886    2585 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 15:11:43.030960 kubelet[2585]: I0213 15:11:43.030894    2585 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 15:11:43.030960 kubelet[2585]: E0213 15:11:43.030934    2585 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:11:43.031276 kubelet[2585]: I0213 15:11:43.031253    2585 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:11:43.079009 kubelet[2585]: I0213 15:11:43.078832    2585 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 15:11:43.079009 kubelet[2585]: I0213 15:11:43.078848    2585 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 15:11:43.079009 kubelet[2585]: I0213 15:11:43.078868    2585 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:11:43.079251 kubelet[2585]: I0213 15:11:43.079025    2585 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:11:43.079251 kubelet[2585]: I0213 15:11:43.079037    2585 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:11:43.079251 kubelet[2585]: I0213 15:11:43.079055    2585 policy_none.go:49] "None policy: Start"
Feb 13 15:11:43.079251 kubelet[2585]: I0213 15:11:43.079063    2585 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 15:11:43.079251 kubelet[2585]: I0213 15:11:43.079071    2585 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:11:43.079251 kubelet[2585]: I0213 15:11:43.079171    2585 state_mem.go:75] "Updated machine memory state"
Feb 13 15:11:43.087002 kubelet[2585]: I0213 15:11:43.086896    2585 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:11:43.087730 kubelet[2585]: I0213 15:11:43.087068    2585 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:11:43.087730 kubelet[2585]: I0213 15:11:43.087088    2585 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:11:43.087730 kubelet[2585]: I0213 15:11:43.087335    2585 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:11:43.091607 kubelet[2585]: E0213 15:11:43.091546    2585 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 15:11:43.131797 kubelet[2585]: I0213 15:11:43.131753    2585 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:43.131913 kubelet[2585]: I0213 15:11:43.131811    2585 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:43.131913 kubelet[2585]: I0213 15:11:43.131831    2585 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:11:43.193399 kubelet[2585]: I0213 15:11:43.193353    2585 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Feb 13 15:11:43.220427 kubelet[2585]: I0213 15:11:43.220385    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/19c5a427adfbe39ebb4300658e3fbc01-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"19c5a427adfbe39ebb4300658e3fbc01\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:43.220427 kubelet[2585]: I0213 15:11:43.220425    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/19c5a427adfbe39ebb4300658e3fbc01-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"19c5a427adfbe39ebb4300658e3fbc01\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:43.220588 kubelet[2585]: I0213 15:11:43.220447    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:43.220588 kubelet[2585]: I0213 15:11:43.220468    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:43.220588 kubelet[2585]: I0213 15:11:43.220483    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:11:43.220588 kubelet[2585]: I0213 15:11:43.220499    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/19c5a427adfbe39ebb4300658e3fbc01-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"19c5a427adfbe39ebb4300658e3fbc01\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:43.220588 kubelet[2585]: I0213 15:11:43.220513    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:43.220690 kubelet[2585]: I0213 15:11:43.220528    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:43.220690 kubelet[2585]: I0213 15:11:43.220544    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:11:43.397481 kubelet[2585]: I0213 15:11:43.397285    2585 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Feb 13 15:11:43.398701 kubelet[2585]: I0213 15:11:43.397752    2585 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Feb 13 15:11:43.560365 sudo[2621]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 15:11:43.560661 sudo[2621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 15:11:43.994183 sudo[2621]: pam_unix(sudo:session): session closed for user root
Feb 13 15:11:44.006206 kubelet[2585]: I0213 15:11:44.005901    2585 apiserver.go:52] "Watching apiserver"
Feb 13 15:11:44.019220 kubelet[2585]: I0213 15:11:44.019175    2585 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:11:44.053967 kubelet[2585]: I0213 15:11:44.053896    2585 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:44.054223 kubelet[2585]: I0213 15:11:44.054063    2585 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:11:44.062154 kubelet[2585]: E0213 15:11:44.062106    2585 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Feb 13 15:11:44.064149 kubelet[2585]: E0213 15:11:44.063971    2585 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:11:44.086797 kubelet[2585]: I0213 15:11:44.086099    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.086079336 podStartE2EDuration="1.086079336s" podCreationTimestamp="2025-02-13 15:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:44.076495133 +0000 UTC m=+1.142475914" watchObservedRunningTime="2025-02-13 15:11:44.086079336 +0000 UTC m=+1.152060077"
Feb 13 15:11:44.096113 kubelet[2585]: I0213 15:11:44.095406    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.095387157 podStartE2EDuration="1.095387157s" podCreationTimestamp="2025-02-13 15:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:44.087555764 +0000 UTC m=+1.153536545" watchObservedRunningTime="2025-02-13 15:11:44.095387157 +0000 UTC m=+1.161367938"
Feb 13 15:11:46.346332 sudo[1671]: pam_unix(sudo:session): session closed for user root
Feb 13 15:11:46.347401 sshd[1670]: Connection closed by 10.0.0.1 port 34384
Feb 13 15:11:46.347912 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Feb 13 15:11:46.351970 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:11:46.352784 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:34384.service: Deactivated successfully.
Feb 13 15:11:46.355619 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:11:46.355982 systemd[1]: session-7.scope: Consumed 7.982s CPU time, 262.3M memory peak.
Feb 13 15:11:46.357537 systemd-logind[1467]: Removed session 7.
Feb 13 15:11:47.629089 kubelet[2585]: I0213 15:11:47.629009    2585 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:11:47.630727 containerd[1484]: time="2025-02-13T15:11:47.629761928Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:11:47.631000 kubelet[2585]: I0213 15:11:47.629983    2585 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:11:48.471519 kubelet[2585]: I0213 15:11:48.471460    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.471424108 podStartE2EDuration="5.471424108s" podCreationTimestamp="2025-02-13 15:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:44.095935923 +0000 UTC m=+1.161916704" watchObservedRunningTime="2025-02-13 15:11:48.471424108 +0000 UTC m=+5.537404889"
Feb 13 15:11:48.584499 systemd[1]: Created slice kubepods-besteffort-pod780c634c_3067_43ba_89d4_2b8d49ecaa8d.slice - libcontainer container kubepods-besteffort-pod780c634c_3067_43ba_89d4_2b8d49ecaa8d.slice.
Feb 13 15:11:48.598015 systemd[1]: Created slice kubepods-burstable-podc6055014_13e4_4f57_89fb_1b3635052f3f.slice - libcontainer container kubepods-burstable-podc6055014_13e4_4f57_89fb_1b3635052f3f.slice.
Feb 13 15:11:48.655670 kubelet[2585]: I0213 15:11:48.655405    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/780c634c-3067-43ba-89d4-2b8d49ecaa8d-kube-proxy\") pod \"kube-proxy-8vx9c\" (UID: \"780c634c-3067-43ba-89d4-2b8d49ecaa8d\") " pod="kube-system/kube-proxy-8vx9c"
Feb 13 15:11:48.655670 kubelet[2585]: I0213 15:11:48.655451    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/780c634c-3067-43ba-89d4-2b8d49ecaa8d-xtables-lock\") pod \"kube-proxy-8vx9c\" (UID: \"780c634c-3067-43ba-89d4-2b8d49ecaa8d\") " pod="kube-system/kube-proxy-8vx9c"
Feb 13 15:11:48.655670 kubelet[2585]: I0213 15:11:48.655469    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/780c634c-3067-43ba-89d4-2b8d49ecaa8d-lib-modules\") pod \"kube-proxy-8vx9c\" (UID: \"780c634c-3067-43ba-89d4-2b8d49ecaa8d\") " pod="kube-system/kube-proxy-8vx9c"
Feb 13 15:11:48.655670 kubelet[2585]: I0213 15:11:48.655520    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6055014-13e4-4f57-89fb-1b3635052f3f-clustermesh-secrets\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.655670 kubelet[2585]: I0213 15:11:48.655552    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6055014-13e4-4f57-89fb-1b3635052f3f-hubble-tls\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.656933 kubelet[2585]: I0213 15:11:48.655570    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxdjs\" (UniqueName: \"kubernetes.io/projected/c6055014-13e4-4f57-89fb-1b3635052f3f-kube-api-access-hxdjs\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.656933 kubelet[2585]: I0213 15:11:48.655617    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-bpf-maps\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.656933 kubelet[2585]: I0213 15:11:48.655655    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-run\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.656933 kubelet[2585]: I0213 15:11:48.655671    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-hostproc\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.656933 kubelet[2585]: I0213 15:11:48.655694    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cni-path\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.656933 kubelet[2585]: I0213 15:11:48.655709    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-etc-cni-netd\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.657098 kubelet[2585]: I0213 15:11:48.655723    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-xtables-lock\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.657098 kubelet[2585]: I0213 15:11:48.655765    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8s5h\" (UniqueName: \"kubernetes.io/projected/780c634c-3067-43ba-89d4-2b8d49ecaa8d-kube-api-access-x8s5h\") pod \"kube-proxy-8vx9c\" (UID: \"780c634c-3067-43ba-89d4-2b8d49ecaa8d\") " pod="kube-system/kube-proxy-8vx9c"
Feb 13 15:11:48.657098 kubelet[2585]: I0213 15:11:48.655784    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-lib-modules\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.657098 kubelet[2585]: I0213 15:11:48.655799    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-cgroup\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.657098 kubelet[2585]: I0213 15:11:48.655821    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-host-proc-sys-net\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.657258 kubelet[2585]: I0213 15:11:48.655836    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-host-proc-sys-kernel\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.657258 kubelet[2585]: I0213 15:11:48.655852    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-config-path\") pod \"cilium-wwkhg\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") " pod="kube-system/cilium-wwkhg"
Feb 13 15:11:48.786017 systemd[1]: Created slice kubepods-besteffort-pod7f71a30d_7bdc_40c6_9f8a_a2a2d51fa348.slice - libcontainer container kubepods-besteffort-pod7f71a30d_7bdc_40c6_9f8a_a2a2d51fa348.slice.
Feb 13 15:11:48.858265 kubelet[2585]: I0213 15:11:48.858217    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7fxs\" (UniqueName: \"kubernetes.io/projected/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348-kube-api-access-b7fxs\") pod \"cilium-operator-6c4d7847fc-p4vz9\" (UID: \"7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348\") " pod="kube-system/cilium-operator-6c4d7847fc-p4vz9"
Feb 13 15:11:48.858265 kubelet[2585]: I0213 15:11:48.858262    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-p4vz9\" (UID: \"7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348\") " pod="kube-system/cilium-operator-6c4d7847fc-p4vz9"
Feb 13 15:11:48.894862 containerd[1484]: time="2025-02-13T15:11:48.894811226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8vx9c,Uid:780c634c-3067-43ba-89d4-2b8d49ecaa8d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:11:48.905921 containerd[1484]: time="2025-02-13T15:11:48.904975429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwkhg,Uid:c6055014-13e4-4f57-89fb-1b3635052f3f,Namespace:kube-system,Attempt:0,}"
Feb 13 15:11:48.919230 containerd[1484]: time="2025-02-13T15:11:48.916826940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:11:48.919230 containerd[1484]: time="2025-02-13T15:11:48.917367110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:11:48.919230 containerd[1484]: time="2025-02-13T15:11:48.917379549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:48.919230 containerd[1484]: time="2025-02-13T15:11:48.917659214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:48.936303 systemd[1]: Started cri-containerd-e3ea18b3dacfbc64b06b6a1b1f4204bd5fdbe0fddf9d204466ceb26bb8a92d4e.scope - libcontainer container e3ea18b3dacfbc64b06b6a1b1f4204bd5fdbe0fddf9d204466ceb26bb8a92d4e.
Feb 13 15:11:48.940518 containerd[1484]: time="2025-02-13T15:11:48.940316972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:11:48.940518 containerd[1484]: time="2025-02-13T15:11:48.940377809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:11:48.940518 containerd[1484]: time="2025-02-13T15:11:48.940389928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:48.941097 containerd[1484]: time="2025-02-13T15:11:48.941049772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:48.962206 containerd[1484]: time="2025-02-13T15:11:48.959872781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8vx9c,Uid:780c634c-3067-43ba-89d4-2b8d49ecaa8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3ea18b3dacfbc64b06b6a1b1f4204bd5fdbe0fddf9d204466ceb26bb8a92d4e\""
Feb 13 15:11:48.964632 systemd[1]: Started cri-containerd-f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246.scope - libcontainer container f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246.
Feb 13 15:11:48.965293 containerd[1484]: time="2025-02-13T15:11:48.965079935Z" level=info msg="CreateContainer within sandbox \"e3ea18b3dacfbc64b06b6a1b1f4204bd5fdbe0fddf9d204466ceb26bb8a92d4e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:11:48.982133 containerd[1484]: time="2025-02-13T15:11:48.982089323Z" level=info msg="CreateContainer within sandbox \"e3ea18b3dacfbc64b06b6a1b1f4204bd5fdbe0fddf9d204466ceb26bb8a92d4e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8d097d54546eb8b7ebf98afc5bf852d494a32ba7663a9e2dd4e5d5d442a8d2e\""
Feb 13 15:11:48.982808 containerd[1484]: time="2025-02-13T15:11:48.982773006Z" level=info msg="StartContainer for \"e8d097d54546eb8b7ebf98afc5bf852d494a32ba7663a9e2dd4e5d5d442a8d2e\""
Feb 13 15:11:48.988814 containerd[1484]: time="2025-02-13T15:11:48.988785996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wwkhg,Uid:c6055014-13e4-4f57-89fb-1b3635052f3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\""
Feb 13 15:11:48.991505 containerd[1484]: time="2025-02-13T15:11:48.991306218Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:11:49.012968 systemd[1]: Started cri-containerd-e8d097d54546eb8b7ebf98afc5bf852d494a32ba7663a9e2dd4e5d5d442a8d2e.scope - libcontainer container e8d097d54546eb8b7ebf98afc5bf852d494a32ba7663a9e2dd4e5d5d442a8d2e.
Feb 13 15:11:49.043922 containerd[1484]: time="2025-02-13T15:11:49.043654142Z" level=info msg="StartContainer for \"e8d097d54546eb8b7ebf98afc5bf852d494a32ba7663a9e2dd4e5d5d442a8d2e\" returns successfully"
Feb 13 15:11:49.090078 containerd[1484]: time="2025-02-13T15:11:49.089743616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p4vz9,Uid:7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348,Namespace:kube-system,Attempt:0,}"
Feb 13 15:11:49.127723 containerd[1484]: time="2025-02-13T15:11:49.127635724Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:11:49.127723 containerd[1484]: time="2025-02-13T15:11:49.127690881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:11:49.127888 containerd[1484]: time="2025-02-13T15:11:49.127702200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:49.127888 containerd[1484]: time="2025-02-13T15:11:49.127781956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:11:49.149316 systemd[1]: Started cri-containerd-1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7.scope - libcontainer container 1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7.
Feb 13 15:11:49.192220 containerd[1484]: time="2025-02-13T15:11:49.192148779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-p4vz9,Uid:7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7\""
Feb 13 15:11:50.893859 kubelet[2585]: I0213 15:11:50.893778    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8vx9c" podStartSLOduration=2.8937610449999998 podStartE2EDuration="2.893761045s" podCreationTimestamp="2025-02-13 15:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:11:49.073392684 +0000 UTC m=+6.139373465" watchObservedRunningTime="2025-02-13 15:11:50.893761045 +0000 UTC m=+7.959741786"
Feb 13 15:11:56.486974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3857012825.mount: Deactivated successfully.
Feb 13 15:11:57.834453 containerd[1484]: time="2025-02-13T15:11:57.834388890Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:57.835557 containerd[1484]: time="2025-02-13T15:11:57.835522524Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 15:11:57.836676 containerd[1484]: time="2025-02-13T15:11:57.836625038Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:11:57.838312 containerd[1484]: time="2025-02-13T15:11:57.838279130Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.846935274s"
Feb 13 15:11:57.838399 containerd[1484]: time="2025-02-13T15:11:57.838315249Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 15:11:57.841373 containerd[1484]: time="2025-02-13T15:11:57.841333644Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:11:57.842997 containerd[1484]: time="2025-02-13T15:11:57.842845302Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:11:57.891042 containerd[1484]: time="2025-02-13T15:11:57.890996439Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\""
Feb 13 15:11:57.896985 containerd[1484]: time="2025-02-13T15:11:57.896912796Z" level=info msg="StartContainer for \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\""
Feb 13 15:11:57.922364 systemd[1]: Started cri-containerd-ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef.scope - libcontainer container ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef.
Feb 13 15:11:57.952068 containerd[1484]: time="2025-02-13T15:11:57.952021286Z" level=info msg="StartContainer for \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\" returns successfully"
Feb 13 15:11:58.000662 systemd[1]: cri-containerd-ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef.scope: Deactivated successfully.
Feb 13 15:11:58.109284 update_engine[1472]: I20250213 15:11:58.108634  1472 update_attempter.cc:509] Updating boot flags...
Feb 13 15:11:58.155388 containerd[1484]: time="2025-02-13T15:11:58.147713536Z" level=info msg="shim disconnected" id=ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef namespace=k8s.io
Feb 13 15:11:58.155388 containerd[1484]: time="2025-02-13T15:11:58.155383070Z" level=warning msg="cleaning up after shim disconnected" id=ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef namespace=k8s.io
Feb 13 15:11:58.155591 containerd[1484]: time="2025-02-13T15:11:58.155403670Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:11:58.177174 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3044)
Feb 13 15:11:58.216266 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3038)
Feb 13 15:11:58.884586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef-rootfs.mount: Deactivated successfully.
Feb 13 15:11:59.108492 containerd[1484]: time="2025-02-13T15:11:59.108450943Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:11:59.124287 containerd[1484]: time="2025-02-13T15:11:59.124228133Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\""
Feb 13 15:11:59.126614 containerd[1484]: time="2025-02-13T15:11:59.124725394Z" level=info msg="StartContainer for \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\""
Feb 13 15:11:59.154372 systemd[1]: Started cri-containerd-ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b.scope - libcontainer container ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b.
Feb 13 15:11:59.179412 containerd[1484]: time="2025-02-13T15:11:59.179358123Z" level=info msg="StartContainer for \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\" returns successfully"
Feb 13 15:11:59.200103 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:11:59.200385 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:11:59.200840 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:11:59.206587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:11:59.206770 systemd[1]: cri-containerd-ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b.scope: Deactivated successfully.
Feb 13 15:11:59.226315 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:11:59.231680 containerd[1484]: time="2025-02-13T15:11:59.231613223Z" level=info msg="shim disconnected" id=ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b namespace=k8s.io
Feb 13 15:11:59.231680 containerd[1484]: time="2025-02-13T15:11:59.231675861Z" level=warning msg="cleaning up after shim disconnected" id=ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b namespace=k8s.io
Feb 13 15:11:59.231680 containerd[1484]: time="2025-02-13T15:11:59.231684580Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:11:59.897106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b-rootfs.mount: Deactivated successfully.
Feb 13 15:12:00.112168 containerd[1484]: time="2025-02-13T15:12:00.112015093Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:12:00.132968 containerd[1484]: time="2025-02-13T15:12:00.132902791Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\""
Feb 13 15:12:00.135030 containerd[1484]: time="2025-02-13T15:12:00.134774161Z" level=info msg="StartContainer for \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\""
Feb 13 15:12:00.166348 systemd[1]: Started cri-containerd-20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92.scope - libcontainer container 20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92.
Feb 13 15:12:00.193768 containerd[1484]: time="2025-02-13T15:12:00.193106177Z" level=info msg="StartContainer for \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\" returns successfully"
Feb 13 15:12:00.213891 systemd[1]: cri-containerd-20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92.scope: Deactivated successfully.
Feb 13 15:12:00.239362 containerd[1484]: time="2025-02-13T15:12:00.239220491Z" level=info msg="shim disconnected" id=20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92 namespace=k8s.io
Feb 13 15:12:00.239362 containerd[1484]: time="2025-02-13T15:12:00.239283528Z" level=warning msg="cleaning up after shim disconnected" id=20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92 namespace=k8s.io
Feb 13 15:12:00.239362 containerd[1484]: time="2025-02-13T15:12:00.239293368Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:12:00.888348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92-rootfs.mount: Deactivated successfully.
Feb 13 15:12:00.985412 containerd[1484]: time="2025-02-13T15:12:00.985359477Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:12:00.986826 containerd[1484]: time="2025-02-13T15:12:00.986781183Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 15:12:00.987772 containerd[1484]: time="2025-02-13T15:12:00.987737908Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:12:00.989532 containerd[1484]: time="2025-02-13T15:12:00.989497082Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.148119519s"
Feb 13 15:12:00.989581 containerd[1484]: time="2025-02-13T15:12:00.989543360Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 15:12:00.991707 containerd[1484]: time="2025-02-13T15:12:00.991674400Z" level=info msg="CreateContainer within sandbox \"1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:12:01.023979 containerd[1484]: time="2025-02-13T15:12:01.023921980Z" level=info msg="CreateContainer within sandbox \"1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\""
Feb 13 15:12:01.027432 containerd[1484]: time="2025-02-13T15:12:01.027321176Z" level=info msg="StartContainer for \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\""
Feb 13 15:12:01.058367 systemd[1]: Started cri-containerd-1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633.scope - libcontainer container 1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633.
Feb 13 15:12:01.084669 containerd[1484]: time="2025-02-13T15:12:01.084425025Z" level=info msg="StartContainer for \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\" returns successfully"
Feb 13 15:12:01.119176 containerd[1484]: time="2025-02-13T15:12:01.119116767Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:12:01.147288 kubelet[2585]: I0213 15:12:01.146315    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-p4vz9" podStartSLOduration=1.349339725 podStartE2EDuration="13.146298381s" podCreationTimestamp="2025-02-13 15:11:48 +0000 UTC" firstStartedPulling="2025-02-13 15:11:49.193347595 +0000 UTC m=+6.259328376" lastFinishedPulling="2025-02-13 15:12:00.990306251 +0000 UTC m=+18.056287032" observedRunningTime="2025-02-13 15:12:01.146290141 +0000 UTC m=+18.212270922" watchObservedRunningTime="2025-02-13 15:12:01.146298381 +0000 UTC m=+18.212279162"
Feb 13 15:12:01.166927 containerd[1484]: time="2025-02-13T15:12:01.166864795Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\""
Feb 13 15:12:01.167470 containerd[1484]: time="2025-02-13T15:12:01.167445654Z" level=info msg="StartContainer for \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\""
Feb 13 15:12:01.204459 systemd[1]: Started cri-containerd-630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9.scope - libcontainer container 630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9.
Feb 13 15:12:01.286579 systemd[1]: cri-containerd-630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9.scope: Deactivated successfully.
Feb 13 15:12:01.289129 containerd[1484]: time="2025-02-13T15:12:01.287707292Z" level=info msg="StartContainer for \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\" returns successfully"
Feb 13 15:12:01.318565 containerd[1484]: time="2025-02-13T15:12:01.318362621Z" level=info msg="shim disconnected" id=630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9 namespace=k8s.io
Feb 13 15:12:01.318565 containerd[1484]: time="2025-02-13T15:12:01.318426818Z" level=warning msg="cleaning up after shim disconnected" id=630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9 namespace=k8s.io
Feb 13 15:12:01.318565 containerd[1484]: time="2025-02-13T15:12:01.318435458Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:12:02.129941 containerd[1484]: time="2025-02-13T15:12:02.129900493Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:12:02.171995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558174632.mount: Deactivated successfully.
Feb 13 15:12:02.175444 containerd[1484]: time="2025-02-13T15:12:02.175312777Z" level=info msg="CreateContainer within sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\""
Feb 13 15:12:02.176235 containerd[1484]: time="2025-02-13T15:12:02.175949115Z" level=info msg="StartContainer for \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\""
Feb 13 15:12:02.200346 systemd[1]: Started cri-containerd-795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab.scope - libcontainer container 795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab.
Feb 13 15:12:02.226270 containerd[1484]: time="2025-02-13T15:12:02.226224549Z" level=info msg="StartContainer for \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\" returns successfully"
Feb 13 15:12:02.354524 kubelet[2585]: I0213 15:12:02.354496    2585 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 15:12:02.399475 systemd[1]: Created slice kubepods-burstable-podd2f1c6d6_062e_4a49_bb08_cb19ddf5eefa.slice - libcontainer container kubepods-burstable-podd2f1c6d6_062e_4a49_bb08_cb19ddf5eefa.slice.
Feb 13 15:12:02.410075 systemd[1]: Created slice kubepods-burstable-pod077d0559_3545_485e_b544_138a2369a529.slice - libcontainer container kubepods-burstable-pod077d0559_3545_485e_b544_138a2369a529.slice.
Feb 13 15:12:02.561057 kubelet[2585]: I0213 15:12:02.561019    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/077d0559-3545-485e-b544-138a2369a529-config-volume\") pod \"coredns-668d6bf9bc-rfzxp\" (UID: \"077d0559-3545-485e-b544-138a2369a529\") " pod="kube-system/coredns-668d6bf9bc-rfzxp"
Feb 13 15:12:02.561057 kubelet[2585]: I0213 15:12:02.561062    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t65z8\" (UniqueName: \"kubernetes.io/projected/077d0559-3545-485e-b544-138a2369a529-kube-api-access-t65z8\") pod \"coredns-668d6bf9bc-rfzxp\" (UID: \"077d0559-3545-485e-b544-138a2369a529\") " pod="kube-system/coredns-668d6bf9bc-rfzxp"
Feb 13 15:12:02.561276 kubelet[2585]: I0213 15:12:02.561087    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2f1c6d6-062e-4a49-bb08-cb19ddf5eefa-config-volume\") pod \"coredns-668d6bf9bc-vrx44\" (UID: \"d2f1c6d6-062e-4a49-bb08-cb19ddf5eefa\") " pod="kube-system/coredns-668d6bf9bc-vrx44"
Feb 13 15:12:02.561276 kubelet[2585]: I0213 15:12:02.561103    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhvlq\" (UniqueName: \"kubernetes.io/projected/d2f1c6d6-062e-4a49-bb08-cb19ddf5eefa-kube-api-access-dhvlq\") pod \"coredns-668d6bf9bc-vrx44\" (UID: \"d2f1c6d6-062e-4a49-bb08-cb19ddf5eefa\") " pod="kube-system/coredns-668d6bf9bc-vrx44"
Feb 13 15:12:02.711895 containerd[1484]: time="2025-02-13T15:12:02.711765649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrx44,Uid:d2f1c6d6-062e-4a49-bb08-cb19ddf5eefa,Namespace:kube-system,Attempt:0,}"
Feb 13 15:12:02.716077 containerd[1484]: time="2025-02-13T15:12:02.716038579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfzxp,Uid:077d0559-3545-485e-b544-138a2369a529,Namespace:kube-system,Attempt:0,}"
Feb 13 15:12:03.152336 kubelet[2585]: I0213 15:12:03.151453    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wwkhg" podStartSLOduration=6.301035443 podStartE2EDuration="15.151436966s" podCreationTimestamp="2025-02-13 15:11:48 +0000 UTC" firstStartedPulling="2025-02-13 15:11:48.990685972 +0000 UTC m=+6.056666753" lastFinishedPulling="2025-02-13 15:11:57.841087495 +0000 UTC m=+14.907068276" observedRunningTime="2025-02-13 15:12:03.151225054 +0000 UTC m=+20.217205875" watchObservedRunningTime="2025-02-13 15:12:03.151436966 +0000 UTC m=+20.217417747"
Feb 13 15:12:05.241689 systemd-networkd[1405]: cilium_host: Link UP
Feb 13 15:12:05.241878 systemd-networkd[1405]: cilium_net: Link UP
Feb 13 15:12:05.241881 systemd-networkd[1405]: cilium_net: Gained carrier
Feb 13 15:12:05.242086 systemd-networkd[1405]: cilium_host: Gained carrier
Feb 13 15:12:05.242975 systemd-networkd[1405]: cilium_net: Gained IPv6LL
Feb 13 15:12:05.243891 systemd-networkd[1405]: cilium_host: Gained IPv6LL
Feb 13 15:12:05.335164 systemd-networkd[1405]: cilium_vxlan: Link UP
Feb 13 15:12:05.335171 systemd-networkd[1405]: cilium_vxlan: Gained carrier
Feb 13 15:12:05.625178 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:12:06.204563 systemd-networkd[1405]: lxc_health: Link UP
Feb 13 15:12:06.213255 systemd-networkd[1405]: lxc_health: Gained carrier
Feb 13 15:12:06.355427 systemd-networkd[1405]: lxcf9794995481b: Link UP
Feb 13 15:12:06.364228 kernel: eth0: renamed from tmpf7adc
Feb 13 15:12:06.383240 kernel: eth0: renamed from tmp55082
Feb 13 15:12:06.382867 systemd-networkd[1405]: lxc8ed30fe8c11b: Link UP
Feb 13 15:12:06.383159 systemd-networkd[1405]: lxcf9794995481b: Gained carrier
Feb 13 15:12:06.389415 systemd-networkd[1405]: lxc8ed30fe8c11b: Gained carrier
Feb 13 15:12:06.515582 systemd-networkd[1405]: cilium_vxlan: Gained IPv6LL
Feb 13 15:12:07.604644 systemd-networkd[1405]: lxc_health: Gained IPv6LL
Feb 13 15:12:07.859530 systemd-networkd[1405]: lxcf9794995481b: Gained IPv6LL
Feb 13 15:12:07.987511 systemd-networkd[1405]: lxc8ed30fe8c11b: Gained IPv6LL
Feb 13 15:12:10.168981 containerd[1484]: time="2025-02-13T15:12:10.168895301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:12:10.169806 containerd[1484]: time="2025-02-13T15:12:10.168999138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:12:10.169806 containerd[1484]: time="2025-02-13T15:12:10.169026057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:12:10.169806 containerd[1484]: time="2025-02-13T15:12:10.169301369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:12:10.186065 systemd[1]: run-containerd-runc-k8s.io-f7adcb90e21999eb39a8ce620e4aab14bc43c389eca4ddd08951d8e73c9dc20e-runc.asM0iN.mount: Deactivated successfully.
Feb 13 15:12:10.191380 containerd[1484]: time="2025-02-13T15:12:10.188770319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:12:10.191380 containerd[1484]: time="2025-02-13T15:12:10.191305810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:12:10.191380 containerd[1484]: time="2025-02-13T15:12:10.191329969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:12:10.191711 containerd[1484]: time="2025-02-13T15:12:10.191446326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:12:10.197342 systemd[1]: Started cri-containerd-f7adcb90e21999eb39a8ce620e4aab14bc43c389eca4ddd08951d8e73c9dc20e.scope - libcontainer container f7adcb90e21999eb39a8ce620e4aab14bc43c389eca4ddd08951d8e73c9dc20e.
Feb 13 15:12:10.218351 systemd[1]: Started cri-containerd-55082d5316ca819c4bf5ccab389d7f1dc2ef43604a05e1663678c4e5a3632e7f.scope - libcontainer container 55082d5316ca819c4bf5ccab389d7f1dc2ef43604a05e1663678c4e5a3632e7f.
Feb 13 15:12:10.224734 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:12:10.231071 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:12:10.243217 containerd[1484]: time="2025-02-13T15:12:10.243177356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vrx44,Uid:d2f1c6d6-062e-4a49-bb08-cb19ddf5eefa,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7adcb90e21999eb39a8ce620e4aab14bc43c389eca4ddd08951d8e73c9dc20e\""
Feb 13 15:12:10.247624 containerd[1484]: time="2025-02-13T15:12:10.247563596Z" level=info msg="CreateContainer within sandbox \"f7adcb90e21999eb39a8ce620e4aab14bc43c389eca4ddd08951d8e73c9dc20e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:12:10.250604 containerd[1484]: time="2025-02-13T15:12:10.250571834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rfzxp,Uid:077d0559-3545-485e-b544-138a2369a529,Namespace:kube-system,Attempt:0,} returns sandbox id \"55082d5316ca819c4bf5ccab389d7f1dc2ef43604a05e1663678c4e5a3632e7f\""
Feb 13 15:12:10.253821 containerd[1484]: time="2025-02-13T15:12:10.253768427Z" level=info msg="CreateContainer within sandbox \"55082d5316ca819c4bf5ccab389d7f1dc2ef43604a05e1663678c4e5a3632e7f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:12:10.300669 containerd[1484]: time="2025-02-13T15:12:10.300611551Z" level=info msg="CreateContainer within sandbox \"f7adcb90e21999eb39a8ce620e4aab14bc43c389eca4ddd08951d8e73c9dc20e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b951598987676727dc0d3f8443ae870ea3203e4f32350c497485e1b706070f41\""
Feb 13 15:12:10.301716 containerd[1484]: time="2025-02-13T15:12:10.301623683Z" level=info msg="StartContainer for \"b951598987676727dc0d3f8443ae870ea3203e4f32350c497485e1b706070f41\""
Feb 13 15:12:10.306176 containerd[1484]: time="2025-02-13T15:12:10.306120921Z" level=info msg="CreateContainer within sandbox \"55082d5316ca819c4bf5ccab389d7f1dc2ef43604a05e1663678c4e5a3632e7f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3ed2d162a9340116eb2189071d22056bfc355fd434662903bdfdbdf0b45eaf3c\""
Feb 13 15:12:10.307221 containerd[1484]: time="2025-02-13T15:12:10.307189331Z" level=info msg="StartContainer for \"3ed2d162a9340116eb2189071d22056bfc355fd434662903bdfdbdf0b45eaf3c\""
Feb 13 15:12:10.329322 systemd[1]: Started cri-containerd-b951598987676727dc0d3f8443ae870ea3203e4f32350c497485e1b706070f41.scope - libcontainer container b951598987676727dc0d3f8443ae870ea3203e4f32350c497485e1b706070f41.
Feb 13 15:12:10.332195 systemd[1]: Started cri-containerd-3ed2d162a9340116eb2189071d22056bfc355fd434662903bdfdbdf0b45eaf3c.scope - libcontainer container 3ed2d162a9340116eb2189071d22056bfc355fd434662903bdfdbdf0b45eaf3c.
Feb 13 15:12:10.366867 containerd[1484]: time="2025-02-13T15:12:10.366822466Z" level=info msg="StartContainer for \"b951598987676727dc0d3f8443ae870ea3203e4f32350c497485e1b706070f41\" returns successfully"
Feb 13 15:12:10.368188 containerd[1484]: time="2025-02-13T15:12:10.367018981Z" level=info msg="StartContainer for \"3ed2d162a9340116eb2189071d22056bfc355fd434662903bdfdbdf0b45eaf3c\" returns successfully"
Feb 13 15:12:10.567485 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:58902.service - OpenSSH per-connection server daemon (10.0.0.1:58902).
Feb 13 15:12:10.618460 sshd[3997]: Accepted publickey for core from 10.0.0.1 port 58902 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:10.619976 sshd-session[3997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:10.624409 systemd-logind[1467]: New session 8 of user core.
Feb 13 15:12:10.643353 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:12:10.773291 sshd[3999]: Connection closed by 10.0.0.1 port 58902
Feb 13 15:12:10.773691 sshd-session[3997]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:10.776965 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:58902.service: Deactivated successfully.
Feb 13 15:12:10.778672 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:12:10.779408 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:12:10.780334 systemd-logind[1467]: Removed session 8.
Feb 13 15:12:11.211564 kubelet[2585]: I0213 15:12:11.211490    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rfzxp" podStartSLOduration=23.211474105 podStartE2EDuration="23.211474105s" podCreationTimestamp="2025-02-13 15:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:12:11.211013197 +0000 UTC m=+28.276993978" watchObservedRunningTime="2025-02-13 15:12:11.211474105 +0000 UTC m=+28.277454886"
Feb 13 15:12:11.259286 kubelet[2585]: I0213 15:12:11.259217    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vrx44" podStartSLOduration=23.259196445 podStartE2EDuration="23.259196445s" podCreationTimestamp="2025-02-13 15:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:12:11.258893733 +0000 UTC m=+28.324874514" watchObservedRunningTime="2025-02-13 15:12:11.259196445 +0000 UTC m=+28.325177266"
Feb 13 15:12:15.795876 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:46700.service - OpenSSH per-connection server daemon (10.0.0.1:46700).
Feb 13 15:12:15.884682 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 46700 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:15.890040 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:15.897524 systemd-logind[1467]: New session 9 of user core.
Feb 13 15:12:15.905558 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:12:16.043194 sshd[4029]: Connection closed by 10.0.0.1 port 46700
Feb 13 15:12:16.044345 sshd-session[4027]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:16.047531 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:46700.service: Deactivated successfully.
Feb 13 15:12:16.049424 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:12:16.050117 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:12:16.053033 systemd-logind[1467]: Removed session 9.
Feb 13 15:12:21.057227 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:46712.service - OpenSSH per-connection server daemon (10.0.0.1:46712).
Feb 13 15:12:21.114778 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 46712 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:21.116182 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:21.120675 systemd-logind[1467]: New session 10 of user core.
Feb 13 15:12:21.131310 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:12:21.264917 sshd[4048]: Connection closed by 10.0.0.1 port 46712
Feb 13 15:12:21.264828 sshd-session[4046]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:21.269070 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:46712.service: Deactivated successfully.
Feb 13 15:12:21.271832 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:12:21.273069 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:12:21.274504 systemd-logind[1467]: Removed session 10.
Feb 13 15:12:26.279642 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:54824.service - OpenSSH per-connection server daemon (10.0.0.1:54824).
Feb 13 15:12:26.319445 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 54824 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:26.320638 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:26.325130 systemd-logind[1467]: New session 11 of user core.
Feb 13 15:12:26.339366 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:12:26.464111 sshd[4064]: Connection closed by 10.0.0.1 port 54824
Feb 13 15:12:26.464591 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:26.476549 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:54824.service: Deactivated successfully.
Feb 13 15:12:26.478362 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:12:26.480396 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:12:26.487471 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:54832.service - OpenSSH per-connection server daemon (10.0.0.1:54832).
Feb 13 15:12:26.489504 systemd-logind[1467]: Removed session 11.
Feb 13 15:12:26.527998 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 54832 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:26.529432 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:26.534807 systemd-logind[1467]: New session 12 of user core.
Feb 13 15:12:26.544411 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:12:26.707705 sshd[4080]: Connection closed by 10.0.0.1 port 54832
Feb 13 15:12:26.708091 sshd-session[4077]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:26.718950 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:54832.service: Deactivated successfully.
Feb 13 15:12:26.727972 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:12:26.731591 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:12:26.739600 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:54842.service - OpenSSH per-connection server daemon (10.0.0.1:54842).
Feb 13 15:12:26.744392 systemd-logind[1467]: Removed session 12.
Feb 13 15:12:26.780445 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 54842 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:26.781803 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:26.786186 systemd-logind[1467]: New session 13 of user core.
Feb 13 15:12:26.792326 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:12:26.922352 sshd[4093]: Connection closed by 10.0.0.1 port 54842
Feb 13 15:12:26.922840 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:26.926178 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:54842.service: Deactivated successfully.
Feb 13 15:12:26.928271 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:12:26.933483 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:12:26.934626 systemd-logind[1467]: Removed session 13.
Feb 13 15:12:31.953046 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:54854.service - OpenSSH per-connection server daemon (10.0.0.1:54854).
Feb 13 15:12:31.998704 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 54854 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:31.999954 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:32.004602 systemd-logind[1467]: New session 14 of user core.
Feb 13 15:12:32.011325 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:12:32.151637 sshd[4109]: Connection closed by 10.0.0.1 port 54854
Feb 13 15:12:32.152122 sshd-session[4107]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:32.157838 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:54854.service: Deactivated successfully.
Feb 13 15:12:32.162299 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:12:32.165323 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:12:32.166193 systemd-logind[1467]: Removed session 14.
Feb 13 15:12:37.167276 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:45938.service - OpenSSH per-connection server daemon (10.0.0.1:45938).
Feb 13 15:12:37.229042 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 45938 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:37.230378 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:37.235034 systemd-logind[1467]: New session 15 of user core.
Feb 13 15:12:37.244393 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:12:37.360887 sshd[4125]: Connection closed by 10.0.0.1 port 45938
Feb 13 15:12:37.361804 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:37.374993 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:45938.service: Deactivated successfully.
Feb 13 15:12:37.377344 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:12:37.378113 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:12:37.384559 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:45950.service - OpenSSH per-connection server daemon (10.0.0.1:45950).
Feb 13 15:12:37.387228 systemd-logind[1467]: Removed session 15.
Feb 13 15:12:37.422287 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 45950 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:37.423654 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:37.429642 systemd-logind[1467]: New session 16 of user core.
Feb 13 15:12:37.436334 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:12:37.684421 sshd[4141]: Connection closed by 10.0.0.1 port 45950
Feb 13 15:12:37.685268 sshd-session[4138]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:37.708438 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:45952.service - OpenSSH per-connection server daemon (10.0.0.1:45952).
Feb 13 15:12:37.709664 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:45950.service: Deactivated successfully.
Feb 13 15:12:37.712698 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:12:37.714446 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:12:37.715713 systemd-logind[1467]: Removed session 16.
Feb 13 15:12:37.760491 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 45952 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:37.762011 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:37.766830 systemd-logind[1467]: New session 17 of user core.
Feb 13 15:12:37.775331 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:12:38.524067 sshd[4154]: Connection closed by 10.0.0.1 port 45952
Feb 13 15:12:38.523936 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:38.534835 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:45952.service: Deactivated successfully.
Feb 13 15:12:38.539150 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:12:38.543474 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:12:38.554014 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:45964.service - OpenSSH per-connection server daemon (10.0.0.1:45964).
Feb 13 15:12:38.556856 systemd-logind[1467]: Removed session 17.
Feb 13 15:12:38.598636 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 45964 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:38.599982 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:38.605873 systemd-logind[1467]: New session 18 of user core.
Feb 13 15:12:38.615389 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:12:38.919449 sshd[4181]: Connection closed by 10.0.0.1 port 45964
Feb 13 15:12:38.920509 sshd-session[4178]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:38.926856 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:45964.service: Deactivated successfully.
Feb 13 15:12:38.930022 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:12:38.930821 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:12:38.952538 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:45968.service - OpenSSH per-connection server daemon (10.0.0.1:45968).
Feb 13 15:12:38.953361 systemd-logind[1467]: Removed session 18.
Feb 13 15:12:38.994258 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 45968 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:38.995856 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:39.001168 systemd-logind[1467]: New session 19 of user core.
Feb 13 15:12:39.010340 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:12:39.129117 sshd[4195]: Connection closed by 10.0.0.1 port 45968
Feb 13 15:12:39.129504 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:39.132896 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:45968.service: Deactivated successfully.
Feb 13 15:12:39.136969 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:12:39.137849 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:12:39.138664 systemd-logind[1467]: Removed session 19.
Feb 13 15:12:44.144581 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:41386.service - OpenSSH per-connection server daemon (10.0.0.1:41386).
Feb 13 15:12:44.184268 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 41386 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:44.185724 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:44.189795 systemd-logind[1467]: New session 20 of user core.
Feb 13 15:12:44.205359 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:12:44.313956 sshd[4215]: Connection closed by 10.0.0.1 port 41386
Feb 13 15:12:44.314321 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:44.316898 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:41386.service: Deactivated successfully.
Feb 13 15:12:44.318571 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:12:44.319957 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:12:44.320900 systemd-logind[1467]: Removed session 20.
Feb 13 15:12:49.330448 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:41400.service - OpenSSH per-connection server daemon (10.0.0.1:41400).
Feb 13 15:12:49.371586 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 41400 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:49.372875 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:49.377221 systemd-logind[1467]: New session 21 of user core.
Feb 13 15:12:49.387327 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:12:49.505903 sshd[4232]: Connection closed by 10.0.0.1 port 41400
Feb 13 15:12:49.506464 sshd-session[4230]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:49.510019 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:41400.service: Deactivated successfully.
Feb 13 15:12:49.511745 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:12:49.512409 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:12:49.513301 systemd-logind[1467]: Removed session 21.
Feb 13 15:12:54.521603 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:55534.service - OpenSSH per-connection server daemon (10.0.0.1:55534).
Feb 13 15:12:54.561186 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 55534 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:54.562420 sshd-session[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:54.566297 systemd-logind[1467]: New session 22 of user core.
Feb 13 15:12:54.580307 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:12:54.698679 sshd[4248]: Connection closed by 10.0.0.1 port 55534
Feb 13 15:12:54.699034 sshd-session[4246]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:54.702457 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:55534.service: Deactivated successfully.
Feb 13 15:12:54.705958 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:12:54.706965 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:12:54.707982 systemd-logind[1467]: Removed session 22.
Feb 13 15:12:59.713040 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:55540.service - OpenSSH per-connection server daemon (10.0.0.1:55540).
Feb 13 15:12:59.752038 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 55540 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:59.753570 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:59.757669 systemd-logind[1467]: New session 23 of user core.
Feb 13 15:12:59.769340 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:12:59.903700 sshd[4263]: Connection closed by 10.0.0.1 port 55540
Feb 13 15:12:59.904092 sshd-session[4261]: pam_unix(sshd:session): session closed for user core
Feb 13 15:12:59.917205 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:55540.service: Deactivated successfully.
Feb 13 15:12:59.919228 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:12:59.920074 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:12:59.922306 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:55550.service - OpenSSH per-connection server daemon (10.0.0.1:55550).
Feb 13 15:12:59.925331 systemd-logind[1467]: Removed session 23.
Feb 13 15:12:59.964746 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 55550 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:12:59.966028 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:12:59.970848 systemd-logind[1467]: New session 24 of user core.
Feb 13 15:12:59.983360 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:13:01.922761 containerd[1484]: time="2025-02-13T15:13:01.922567112Z" level=info msg="StopContainer for \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\" with timeout 30 (s)"
Feb 13 15:13:01.924399 containerd[1484]: time="2025-02-13T15:13:01.923152269Z" level=info msg="Stop container \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\" with signal terminated"
Feb 13 15:13:01.945329 systemd[1]: cri-containerd-1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633.scope: Deactivated successfully.
Feb 13 15:13:01.966775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633-rootfs.mount: Deactivated successfully.
Feb 13 15:13:01.972471 containerd[1484]: time="2025-02-13T15:13:01.972405803Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:13:01.977729 containerd[1484]: time="2025-02-13T15:13:01.977662175Z" level=info msg="shim disconnected" id=1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633 namespace=k8s.io
Feb 13 15:13:01.977729 containerd[1484]: time="2025-02-13T15:13:01.977712014Z" level=warning msg="cleaning up after shim disconnected" id=1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633 namespace=k8s.io
Feb 13 15:13:01.977729 containerd[1484]: time="2025-02-13T15:13:01.977722294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:13:01.980262 containerd[1484]: time="2025-02-13T15:13:01.980124641Z" level=info msg="StopContainer for \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\" with timeout 2 (s)"
Feb 13 15:13:01.980721 containerd[1484]: time="2025-02-13T15:13:01.980696478Z" level=info msg="Stop container \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\" with signal terminated"
Feb 13 15:13:01.988316 systemd-networkd[1405]: lxc_health: Link DOWN
Feb 13 15:13:01.988325 systemd-networkd[1405]: lxc_health: Lost carrier
Feb 13 15:13:02.011535 systemd[1]: cri-containerd-795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab.scope: Deactivated successfully.
Feb 13 15:13:02.012092 systemd[1]: cri-containerd-795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab.scope: Consumed 6.734s CPU time, 128.3M memory peak, 148K read from disk, 12.9M written to disk.
Feb 13 15:13:02.037544 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab-rootfs.mount: Deactivated successfully.
Feb 13 15:13:02.041702 containerd[1484]: time="2025-02-13T15:13:02.041625916Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:13:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:13:02.049478 containerd[1484]: time="2025-02-13T15:13:02.049266996Z" level=info msg="StopContainer for \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\" returns successfully"
Feb 13 15:13:02.051599 containerd[1484]: time="2025-02-13T15:13:02.051382025Z" level=info msg="shim disconnected" id=795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab namespace=k8s.io
Feb 13 15:13:02.051599 containerd[1484]: time="2025-02-13T15:13:02.051439985Z" level=warning msg="cleaning up after shim disconnected" id=795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab namespace=k8s.io
Feb 13 15:13:02.051599 containerd[1484]: time="2025-02-13T15:13:02.051449625Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:13:02.055584 containerd[1484]: time="2025-02-13T15:13:02.055548323Z" level=info msg="StopPodSandbox for \"1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7\""
Feb 13 15:13:02.055692 containerd[1484]: time="2025-02-13T15:13:02.055611883Z" level=info msg="Container to stop \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:13:02.057491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7-shm.mount: Deactivated successfully.
Feb 13 15:13:02.062711 systemd[1]: cri-containerd-1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7.scope: Deactivated successfully.
Feb 13 15:13:02.069924 containerd[1484]: time="2025-02-13T15:13:02.069874489Z" level=info msg="StopContainer for \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\" returns successfully"
Feb 13 15:13:02.074002 containerd[1484]: time="2025-02-13T15:13:02.073960187Z" level=info msg="StopPodSandbox for \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\""
Feb 13 15:13:02.074162 containerd[1484]: time="2025-02-13T15:13:02.074036627Z" level=info msg="Container to stop \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:13:02.074162 containerd[1484]: time="2025-02-13T15:13:02.074055067Z" level=info msg="Container to stop \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:13:02.074162 containerd[1484]: time="2025-02-13T15:13:02.074064347Z" level=info msg="Container to stop \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:13:02.074162 containerd[1484]: time="2025-02-13T15:13:02.074080147Z" level=info msg="Container to stop \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:13:02.074162 containerd[1484]: time="2025-02-13T15:13:02.074088347Z" level=info msg="Container to stop \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:13:02.082260 systemd[1]: cri-containerd-f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246.scope: Deactivated successfully.
Feb 13 15:13:02.107763 containerd[1484]: time="2025-02-13T15:13:02.107688851Z" level=info msg="shim disconnected" id=1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7 namespace=k8s.io
Feb 13 15:13:02.108242 containerd[1484]: time="2025-02-13T15:13:02.107824970Z" level=warning msg="cleaning up after shim disconnected" id=1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7 namespace=k8s.io
Feb 13 15:13:02.108242 containerd[1484]: time="2025-02-13T15:13:02.107841410Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:13:02.109115 containerd[1484]: time="2025-02-13T15:13:02.109057044Z" level=info msg="shim disconnected" id=f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246 namespace=k8s.io
Feb 13 15:13:02.109115 containerd[1484]: time="2025-02-13T15:13:02.109109443Z" level=warning msg="cleaning up after shim disconnected" id=f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246 namespace=k8s.io
Feb 13 15:13:02.109240 containerd[1484]: time="2025-02-13T15:13:02.109119203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:13:02.120965 containerd[1484]: time="2025-02-13T15:13:02.120810342Z" level=info msg="TearDown network for sandbox \"1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7\" successfully"
Feb 13 15:13:02.120965 containerd[1484]: time="2025-02-13T15:13:02.120850062Z" level=info msg="StopPodSandbox for \"1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7\" returns successfully"
Feb 13 15:13:02.132843 containerd[1484]: time="2025-02-13T15:13:02.132764200Z" level=info msg="TearDown network for sandbox \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" successfully"
Feb 13 15:13:02.132843 containerd[1484]: time="2025-02-13T15:13:02.132827679Z" level=info msg="StopPodSandbox for \"f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246\" returns successfully"
Feb 13 15:13:02.258489 kubelet[2585]: I0213 15:13:02.258323    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-bpf-maps\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.258489 kubelet[2585]: I0213 15:13:02.258381    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-run\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.258489 kubelet[2585]: I0213 15:13:02.258412    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-lib-modules\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.258489 kubelet[2585]: I0213 15:13:02.258453    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7fxs\" (UniqueName: \"kubernetes.io/projected/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348-kube-api-access-b7fxs\") pod \"7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348\" (UID: \"7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348\") "
Feb 13 15:13:02.258489 kubelet[2585]: I0213 15:13:02.258474    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-xtables-lock\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.258489 kubelet[2585]: I0213 15:13:02.258491    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6055014-13e4-4f57-89fb-1b3635052f3f-hubble-tls\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259044 kubelet[2585]: I0213 15:13:02.258513    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6055014-13e4-4f57-89fb-1b3635052f3f-clustermesh-secrets\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259044 kubelet[2585]: I0213 15:13:02.258530    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxdjs\" (UniqueName: \"kubernetes.io/projected/c6055014-13e4-4f57-89fb-1b3635052f3f-kube-api-access-hxdjs\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259044 kubelet[2585]: I0213 15:13:02.258548    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-etc-cni-netd\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259044 kubelet[2585]: I0213 15:13:02.258563    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-cgroup\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259044 kubelet[2585]: I0213 15:13:02.258580    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-config-path\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259044 kubelet[2585]: I0213 15:13:02.258597    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-hostproc\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259233 kubelet[2585]: I0213 15:13:02.258616    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-host-proc-sys-net\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259233 kubelet[2585]: I0213 15:13:02.258630    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cni-path\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.259233 kubelet[2585]: I0213 15:13:02.258648    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348-cilium-config-path\") pod \"7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348\" (UID: \"7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348\") "
Feb 13 15:13:02.259233 kubelet[2585]: I0213 15:13:02.258665    2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-host-proc-sys-kernel\") pod \"c6055014-13e4-4f57-89fb-1b3635052f3f\" (UID: \"c6055014-13e4-4f57-89fb-1b3635052f3f\") "
Feb 13 15:13:02.265801 kubelet[2585]: I0213 15:13:02.265244    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.265801 kubelet[2585]: I0213 15:13:02.265332    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.265801 kubelet[2585]: I0213 15:13:02.265512    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.265801 kubelet[2585]: I0213 15:13:02.265788    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.266044 kubelet[2585]: I0213 15:13:02.265840    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.266150 kubelet[2585]: I0213 15:13:02.266110    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.266329 kubelet[2585]: I0213 15:13:02.266313    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.266381 kubelet[2585]: I0213 15:13:02.266340    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.267749 kubelet[2585]: I0213 15:13:02.267709    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 13 15:13:02.267807 kubelet[2585]: I0213 15:13:02.267774    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.269659 kubelet[2585]: I0213 15:13:02.269615    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348" (UID: "7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 13 15:13:02.269659 kubelet[2585]: I0213 15:13:02.269823    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6055014-13e4-4f57-89fb-1b3635052f3f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 15:13:02.269659 kubelet[2585]: I0213 15:13:02.269889    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 15:13:02.271344 kubelet[2585]: I0213 15:13:02.271305    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6055014-13e4-4f57-89fb-1b3635052f3f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 13 15:13:02.271344 kubelet[2585]: I0213 15:13:02.271307    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348-kube-api-access-b7fxs" (OuterVolumeSpecName: "kube-api-access-b7fxs") pod "7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348" (UID: "7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348"). InnerVolumeSpecName "kube-api-access-b7fxs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 15:13:02.271971 kubelet[2585]: I0213 15:13:02.271945    2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6055014-13e4-4f57-89fb-1b3635052f3f-kube-api-access-hxdjs" (OuterVolumeSpecName: "kube-api-access-hxdjs") pod "c6055014-13e4-4f57-89fb-1b3635052f3f" (UID: "c6055014-13e4-4f57-89fb-1b3635052f3f"). InnerVolumeSpecName "kube-api-access-hxdjs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 15:13:02.274802 kubelet[2585]: I0213 15:13:02.274758    2585 scope.go:117] "RemoveContainer" containerID="795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab"
Feb 13 15:13:02.277439 containerd[1484]: time="2025-02-13T15:13:02.277396403Z" level=info msg="RemoveContainer for \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\""
Feb 13 15:13:02.283457 systemd[1]: Removed slice kubepods-burstable-podc6055014_13e4_4f57_89fb_1b3635052f3f.slice - libcontainer container kubepods-burstable-podc6055014_13e4_4f57_89fb_1b3635052f3f.slice.
Feb 13 15:13:02.283575 systemd[1]: kubepods-burstable-podc6055014_13e4_4f57_89fb_1b3635052f3f.slice: Consumed 6.890s CPU time, 128.6M memory peak, 168K read from disk, 12.9M written to disk.
Feb 13 15:13:02.285017 systemd[1]: Removed slice kubepods-besteffort-pod7f71a30d_7bdc_40c6_9f8a_a2a2d51fa348.slice - libcontainer container kubepods-besteffort-pod7f71a30d_7bdc_40c6_9f8a_a2a2d51fa348.slice.
Feb 13 15:13:02.294026 containerd[1484]: time="2025-02-13T15:13:02.293987757Z" level=info msg="RemoveContainer for \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\" returns successfully"
Feb 13 15:13:02.294543 kubelet[2585]: I0213 15:13:02.294390    2585 scope.go:117] "RemoveContainer" containerID="630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9"
Feb 13 15:13:02.296439 containerd[1484]: time="2025-02-13T15:13:02.296405264Z" level=info msg="RemoveContainer for \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\""
Feb 13 15:13:02.302646 containerd[1484]: time="2025-02-13T15:13:02.301959115Z" level=info msg="RemoveContainer for \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\" returns successfully"
Feb 13 15:13:02.302795 kubelet[2585]: I0213 15:13:02.302564    2585 scope.go:117] "RemoveContainer" containerID="20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92"
Feb 13 15:13:02.305347 containerd[1484]: time="2025-02-13T15:13:02.305318657Z" level=info msg="RemoveContainer for \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\""
Feb 13 15:13:02.308233 containerd[1484]: time="2025-02-13T15:13:02.308192562Z" level=info msg="RemoveContainer for \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\" returns successfully"
Feb 13 15:13:02.308698 kubelet[2585]: I0213 15:13:02.308534    2585 scope.go:117] "RemoveContainer" containerID="ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b"
Feb 13 15:13:02.310109 containerd[1484]: time="2025-02-13T15:13:02.310077873Z" level=info msg="RemoveContainer for \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\""
Feb 13 15:13:02.313476 containerd[1484]: time="2025-02-13T15:13:02.313435655Z" level=info msg="RemoveContainer for \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\" returns successfully"
Feb 13 15:13:02.313708 kubelet[2585]: I0213 15:13:02.313631    2585 scope.go:117] "RemoveContainer" containerID="ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef"
Feb 13 15:13:02.314838 containerd[1484]: time="2025-02-13T15:13:02.314810568Z" level=info msg="RemoveContainer for \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\""
Feb 13 15:13:02.317245 containerd[1484]: time="2025-02-13T15:13:02.317205195Z" level=info msg="RemoveContainer for \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\" returns successfully"
Feb 13 15:13:02.317448 kubelet[2585]: I0213 15:13:02.317420    2585 scope.go:117] "RemoveContainer" containerID="795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab"
Feb 13 15:13:02.317719 containerd[1484]: time="2025-02-13T15:13:02.317681633Z" level=error msg="ContainerStatus for \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\": not found"
Feb 13 15:13:02.325548 kubelet[2585]: E0213 15:13:02.325310    2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\": not found" containerID="795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab"
Feb 13 15:13:02.325548 kubelet[2585]: I0213 15:13:02.325356    2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab"} err="failed to get container status \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"795936695331df235576f751961797aa2a1b5a3d02c54c178e2158223f9386ab\": not found"
Feb 13 15:13:02.325548 kubelet[2585]: I0213 15:13:02.325448    2585 scope.go:117] "RemoveContainer" containerID="630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9"
Feb 13 15:13:02.325765 containerd[1484]: time="2025-02-13T15:13:02.325714311Z" level=error msg="ContainerStatus for \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\": not found"
Feb 13 15:13:02.325994 kubelet[2585]: E0213 15:13:02.325936    2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\": not found" containerID="630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9"
Feb 13 15:13:02.325994 kubelet[2585]: I0213 15:13:02.325971    2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9"} err="failed to get container status \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"630e04c83b077e8fe82e40cceb016dc6f8801eafa948d8ddc23336d5f4be28a9\": not found"
Feb 13 15:13:02.325994 kubelet[2585]: I0213 15:13:02.325990    2585 scope.go:117] "RemoveContainer" containerID="20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92"
Feb 13 15:13:02.326275 containerd[1484]: time="2025-02-13T15:13:02.326223988Z" level=error msg="ContainerStatus for \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\": not found"
Feb 13 15:13:02.326522 kubelet[2585]: E0213 15:13:02.326389    2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\": not found" containerID="20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92"
Feb 13 15:13:02.326522 kubelet[2585]: I0213 15:13:02.326413    2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92"} err="failed to get container status \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\": rpc error: code = NotFound desc = an error occurred when try to find container \"20fec1dcc7808ce48d357161fb3d44f0eba092edb262b335dc74dc40c0a89d92\": not found"
Feb 13 15:13:02.326522 kubelet[2585]: I0213 15:13:02.326427    2585 scope.go:117] "RemoveContainer" containerID="ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b"
Feb 13 15:13:02.326651 containerd[1484]: time="2025-02-13T15:13:02.326608706Z" level=error msg="ContainerStatus for \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\": not found"
Feb 13 15:13:02.326753 kubelet[2585]: E0213 15:13:02.326724    2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\": not found" containerID="ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b"
Feb 13 15:13:02.327032 kubelet[2585]: I0213 15:13:02.326746    2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b"} err="failed to get container status \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad4e203dc44ec36e466b574d13035ce39ba9427877b424b62dce5be999a01b1b\": not found"
Feb 13 15:13:02.327032 kubelet[2585]: I0213 15:13:02.326762    2585 scope.go:117] "RemoveContainer" containerID="ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef"
Feb 13 15:13:02.327118 containerd[1484]: time="2025-02-13T15:13:02.326949024Z" level=error msg="ContainerStatus for \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\": not found"
Feb 13 15:13:02.327412 kubelet[2585]: E0213 15:13:02.327290    2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\": not found" containerID="ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef"
Feb 13 15:13:02.327412 kubelet[2585]: I0213 15:13:02.327311    2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef"} err="failed to get container status \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\": rpc error: code = NotFound desc = an error occurred when try to find container \"ca092586becd32185230b8b891c27048618eb93b93a6f0ec0223ef025d90edef\": not found"
Feb 13 15:13:02.327412 kubelet[2585]: I0213 15:13:02.327333    2585 scope.go:117] "RemoveContainer" containerID="1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633"
Feb 13 15:13:02.328594 containerd[1484]: time="2025-02-13T15:13:02.328560976Z" level=info msg="RemoveContainer for \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\""
Feb 13 15:13:02.331343 containerd[1484]: time="2025-02-13T15:13:02.331235642Z" level=info msg="RemoveContainer for \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\" returns successfully"
Feb 13 15:13:02.331561 kubelet[2585]: I0213 15:13:02.331442    2585 scope.go:117] "RemoveContainer" containerID="1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633"
Feb 13 15:13:02.331739 containerd[1484]: time="2025-02-13T15:13:02.331654560Z" level=error msg="ContainerStatus for \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\": not found"
Feb 13 15:13:02.331904 kubelet[2585]: E0213 15:13:02.331876    2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\": not found" containerID="1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633"
Feb 13 15:13:02.332006 kubelet[2585]: I0213 15:13:02.331979    2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633"} err="failed to get container status \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bd8521efe71c3d913ff237901071dcc5fd486b7786df268c54d7e90154eb633\": not found"
Feb 13 15:13:02.359101 kubelet[2585]: I0213 15:13:02.359066    2585 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359390 kubelet[2585]: I0213 15:13:02.359243    2585 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359390 kubelet[2585]: I0213 15:13:02.359259    2585 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359390 kubelet[2585]: I0213 15:13:02.359267    2585 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359390 kubelet[2585]: I0213 15:13:02.359275    2585 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b7fxs\" (UniqueName: \"kubernetes.io/projected/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348-kube-api-access-b7fxs\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359390 kubelet[2585]: I0213 15:13:02.359284    2585 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359390 kubelet[2585]: I0213 15:13:02.359291    2585 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6055014-13e4-4f57-89fb-1b3635052f3f-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359390 kubelet[2585]: I0213 15:13:02.359337    2585 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6055014-13e4-4f57-89fb-1b3635052f3f-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359390 kubelet[2585]: I0213 15:13:02.359348    2585 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxdjs\" (UniqueName: \"kubernetes.io/projected/c6055014-13e4-4f57-89fb-1b3635052f3f-kube-api-access-hxdjs\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359582 kubelet[2585]: I0213 15:13:02.359357    2585 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359582 kubelet[2585]: I0213 15:13:02.359372    2585 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359787 kubelet[2585]: I0213 15:13:02.359681    2585 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6055014-13e4-4f57-89fb-1b3635052f3f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359787 kubelet[2585]: I0213 15:13:02.359702    2585 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359787 kubelet[2585]: I0213 15:13:02.359711    2585 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359787 kubelet[2585]: I0213 15:13:02.359718    2585 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6055014-13e4-4f57-89fb-1b3635052f3f-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.359787 kubelet[2585]: I0213 15:13:02.359769    2585 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:13:02.938362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d18530128616f917980870250e49006d389122633079fda68e7bb52cd456dc7-rootfs.mount: Deactivated successfully.
Feb 13 15:13:02.938465 systemd[1]: var-lib-kubelet-pods-7f71a30d\x2d7bdc\x2d40c6\x2d9f8a\x2da2a2d51fa348-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7fxs.mount: Deactivated successfully.
Feb 13 15:13:02.938537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246-rootfs.mount: Deactivated successfully.
Feb 13 15:13:02.938592 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f250896b71366e31605aae78ccc165431c72195605b832ea804f55464ae58246-shm.mount: Deactivated successfully.
Feb 13 15:13:02.938644 systemd[1]: var-lib-kubelet-pods-c6055014\x2d13e4\x2d4f57\x2d89fb\x2d1b3635052f3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxdjs.mount: Deactivated successfully.
Feb 13 15:13:02.938693 systemd[1]: var-lib-kubelet-pods-c6055014\x2d13e4\x2d4f57\x2d89fb\x2d1b3635052f3f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:13:02.938744 systemd[1]: var-lib-kubelet-pods-c6055014\x2d13e4\x2d4f57\x2d89fb\x2d1b3635052f3f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:13:03.034197 kubelet[2585]: I0213 15:13:03.034159    2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348" path="/var/lib/kubelet/pods/7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348/volumes"
Feb 13 15:13:03.034634 kubelet[2585]: I0213 15:13:03.034601    2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6055014-13e4-4f57-89fb-1b3635052f3f" path="/var/lib/kubelet/pods/c6055014-13e4-4f57-89fb-1b3635052f3f/volumes"
Feb 13 15:13:03.115783 kubelet[2585]: E0213 15:13:03.115740    2585 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:13:03.849738 sshd[4278]: Connection closed by 10.0.0.1 port 55550
Feb 13 15:13:03.850438 sshd-session[4275]: pam_unix(sshd:session): session closed for user core
Feb 13 15:13:03.865784 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:55550.service: Deactivated successfully.
Feb 13 15:13:03.868199 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:13:03.868607 systemd[1]: session-24.scope: Consumed 1.233s CPU time, 27.6M memory peak.
Feb 13 15:13:03.869806 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:13:03.880506 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:46666.service - OpenSSH per-connection server daemon (10.0.0.1:46666).
Feb 13 15:13:03.882165 systemd-logind[1467]: Removed session 24.
Feb 13 15:13:03.921433 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 46666 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:13:03.923350 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:13:03.930411 systemd-logind[1467]: New session 25 of user core.
Feb 13 15:13:03.945363 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:13:04.681041 sshd[4440]: Connection closed by 10.0.0.1 port 46666
Feb 13 15:13:04.681418 sshd-session[4437]: pam_unix(sshd:session): session closed for user core
Feb 13 15:13:04.696189 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:46666.service: Deactivated successfully.
Feb 13 15:13:04.700978 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:13:04.704109 kubelet[2585]: I0213 15:13:04.704068    2585 memory_manager.go:355] "RemoveStaleState removing state" podUID="c6055014-13e4-4f57-89fb-1b3635052f3f" containerName="cilium-agent"
Feb 13 15:13:04.704109 kubelet[2585]: I0213 15:13:04.704097    2585 memory_manager.go:355] "RemoveStaleState removing state" podUID="7f71a30d-7bdc-40c6-9f8a-a2a2d51fa348" containerName="cilium-operator"
Feb 13 15:13:04.706466 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:13:04.723508 systemd[1]: Started sshd@25-10.0.0.48:22-10.0.0.1:46678.service - OpenSSH per-connection server daemon (10.0.0.1:46678).
Feb 13 15:13:04.727576 systemd-logind[1467]: Removed session 25.
Feb 13 15:13:04.733725 systemd[1]: Created slice kubepods-burstable-pod3d7595a6_c47c_406e_ac95_7b06efe7a665.slice - libcontainer container kubepods-burstable-pod3d7595a6_c47c_406e_ac95_7b06efe7a665.slice.
Feb 13 15:13:04.769060 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 46678 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:13:04.770451 sshd-session[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:13:04.773116 kubelet[2585]: I0213 15:13:04.772870    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-bpf-maps\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773116 kubelet[2585]: I0213 15:13:04.772910    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-cni-path\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773116 kubelet[2585]: I0213 15:13:04.772934    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d7595a6-c47c-406e-ac95-7b06efe7a665-cilium-config-path\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773116 kubelet[2585]: I0213 15:13:04.772952    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bwfk\" (UniqueName: \"kubernetes.io/projected/3d7595a6-c47c-406e-ac95-7b06efe7a665-kube-api-access-8bwfk\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773116 kubelet[2585]: I0213 15:13:04.772973    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-etc-cni-netd\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773116 kubelet[2585]: I0213 15:13:04.772989    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d7595a6-c47c-406e-ac95-7b06efe7a665-hubble-tls\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773438 kubelet[2585]: I0213 15:13:04.773004    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d7595a6-c47c-406e-ac95-7b06efe7a665-clustermesh-secrets\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773438 kubelet[2585]: I0213 15:13:04.773020    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-hostproc\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773438 kubelet[2585]: I0213 15:13:04.773038    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-lib-modules\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773438 kubelet[2585]: I0213 15:13:04.773057    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-cilium-run\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773438 kubelet[2585]: I0213 15:13:04.773072    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-xtables-lock\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773438 kubelet[2585]: I0213 15:13:04.773219    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-cilium-cgroup\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773697 kubelet[2585]: I0213 15:13:04.773254    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d7595a6-c47c-406e-ac95-7b06efe7a665-cilium-ipsec-secrets\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773697 kubelet[2585]: I0213 15:13:04.773271    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-host-proc-sys-net\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.773697 kubelet[2585]: I0213 15:13:04.773288    2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d7595a6-c47c-406e-ac95-7b06efe7a665-host-proc-sys-kernel\") pod \"cilium-wnv9n\" (UID: \"3d7595a6-c47c-406e-ac95-7b06efe7a665\") " pod="kube-system/cilium-wnv9n"
Feb 13 15:13:04.775520 systemd-logind[1467]: New session 26 of user core.
Feb 13 15:13:04.787340 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:13:04.836846 sshd[4454]: Connection closed by 10.0.0.1 port 46678
Feb 13 15:13:04.837365 sshd-session[4451]: pam_unix(sshd:session): session closed for user core
Feb 13 15:13:04.851645 systemd[1]: sshd@25-10.0.0.48:22-10.0.0.1:46678.service: Deactivated successfully.
Feb 13 15:13:04.854787 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:13:04.857029 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:13:04.869531 systemd[1]: Started sshd@26-10.0.0.48:22-10.0.0.1:46682.service - OpenSSH per-connection server daemon (10.0.0.1:46682).
Feb 13 15:13:04.870764 systemd-logind[1467]: Removed session 26.
Feb 13 15:13:04.912923 sshd[4460]: Accepted publickey for core from 10.0.0.1 port 46682 ssh2: RSA SHA256:AdB3d4d03bamdKIusHnh7PKrUCuT6JPbjLYOHUhTeOE
Feb 13 15:13:04.914853 sshd-session[4460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:13:04.919512 systemd-logind[1467]: New session 27 of user core.
Feb 13 15:13:04.929442 kubelet[2585]: I0213 15:13:04.929383    2585 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:13:04Z","lastTransitionTime":"2025-02-13T15:13:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:13:04.930347 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 15:13:05.042176 containerd[1484]: time="2025-02-13T15:13:05.042047331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wnv9n,Uid:3d7595a6-c47c-406e-ac95-7b06efe7a665,Namespace:kube-system,Attempt:0,}"
Feb 13 15:13:05.063211 containerd[1484]: time="2025-02-13T15:13:05.062710113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:13:05.063211 containerd[1484]: time="2025-02-13T15:13:05.063173591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:13:05.063496 containerd[1484]: time="2025-02-13T15:13:05.063193231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:13:05.063627 containerd[1484]: time="2025-02-13T15:13:05.063582469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:13:05.088364 systemd[1]: Started cri-containerd-32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5.scope - libcontainer container 32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5.
Feb 13 15:13:05.114185 containerd[1484]: time="2025-02-13T15:13:05.114145188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wnv9n,Uid:3d7595a6-c47c-406e-ac95-7b06efe7a665,Namespace:kube-system,Attempt:0,} returns sandbox id \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\""
Feb 13 15:13:05.116957 containerd[1484]: time="2025-02-13T15:13:05.116848056Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:13:05.143227 containerd[1484]: time="2025-02-13T15:13:05.143083931Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d4ad2b9ba668d01e787751d81f4638054a6af99a48f9ffc2d44f78af1855fa0\""
Feb 13 15:13:05.143657 containerd[1484]: time="2025-02-13T15:13:05.143599488Z" level=info msg="StartContainer for \"3d4ad2b9ba668d01e787751d81f4638054a6af99a48f9ffc2d44f78af1855fa0\""
Feb 13 15:13:05.170385 systemd[1]: Started cri-containerd-3d4ad2b9ba668d01e787751d81f4638054a6af99a48f9ffc2d44f78af1855fa0.scope - libcontainer container 3d4ad2b9ba668d01e787751d81f4638054a6af99a48f9ffc2d44f78af1855fa0.
Feb 13 15:13:05.198288 containerd[1484]: time="2025-02-13T15:13:05.198033310Z" level=info msg="StartContainer for \"3d4ad2b9ba668d01e787751d81f4638054a6af99a48f9ffc2d44f78af1855fa0\" returns successfully"
Feb 13 15:13:05.235257 systemd[1]: cri-containerd-3d4ad2b9ba668d01e787751d81f4638054a6af99a48f9ffc2d44f78af1855fa0.scope: Deactivated successfully.
Feb 13 15:13:05.270559 containerd[1484]: time="2025-02-13T15:13:05.270486925Z" level=info msg="shim disconnected" id=3d4ad2b9ba668d01e787751d81f4638054a6af99a48f9ffc2d44f78af1855fa0 namespace=k8s.io
Feb 13 15:13:05.270559 containerd[1484]: time="2025-02-13T15:13:05.270538085Z" level=warning msg="cleaning up after shim disconnected" id=3d4ad2b9ba668d01e787751d81f4638054a6af99a48f9ffc2d44f78af1855fa0 namespace=k8s.io
Feb 13 15:13:05.270559 containerd[1484]: time="2025-02-13T15:13:05.270547245Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:13:05.294424 containerd[1484]: time="2025-02-13T15:13:05.294321612Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:13:05.309481 containerd[1484]: time="2025-02-13T15:13:05.309368940Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03c1ff4756af228af39600944d2d4ed550f5923e84d6ebc3f53073b73f08879c\""
Feb 13 15:13:05.310436 containerd[1484]: time="2025-02-13T15:13:05.310258456Z" level=info msg="StartContainer for \"03c1ff4756af228af39600944d2d4ed550f5923e84d6ebc3f53073b73f08879c\""
Feb 13 15:13:05.340341 systemd[1]: Started cri-containerd-03c1ff4756af228af39600944d2d4ed550f5923e84d6ebc3f53073b73f08879c.scope - libcontainer container 03c1ff4756af228af39600944d2d4ed550f5923e84d6ebc3f53073b73f08879c.
Feb 13 15:13:05.366830 containerd[1484]: time="2025-02-13T15:13:05.366776547Z" level=info msg="StartContainer for \"03c1ff4756af228af39600944d2d4ed550f5923e84d6ebc3f53073b73f08879c\" returns successfully"
Feb 13 15:13:05.413521 systemd[1]: cri-containerd-03c1ff4756af228af39600944d2d4ed550f5923e84d6ebc3f53073b73f08879c.scope: Deactivated successfully.
Feb 13 15:13:05.443956 containerd[1484]: time="2025-02-13T15:13:05.443640262Z" level=info msg="shim disconnected" id=03c1ff4756af228af39600944d2d4ed550f5923e84d6ebc3f53073b73f08879c namespace=k8s.io
Feb 13 15:13:05.443956 containerd[1484]: time="2025-02-13T15:13:05.443705262Z" level=warning msg="cleaning up after shim disconnected" id=03c1ff4756af228af39600944d2d4ed550f5923e84d6ebc3f53073b73f08879c namespace=k8s.io
Feb 13 15:13:05.443956 containerd[1484]: time="2025-02-13T15:13:05.443714182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:13:06.298307 containerd[1484]: time="2025-02-13T15:13:06.298250763Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:13:06.312005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1042073814.mount: Deactivated successfully.
Feb 13 15:13:06.313585 containerd[1484]: time="2025-02-13T15:13:06.313534173Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f\""
Feb 13 15:13:06.315281 containerd[1484]: time="2025-02-13T15:13:06.315253365Z" level=info msg="StartContainer for \"f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f\""
Feb 13 15:13:06.343371 systemd[1]: Started cri-containerd-f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f.scope - libcontainer container f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f.
Feb 13 15:13:06.368995 systemd[1]: cri-containerd-f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f.scope: Deactivated successfully.
Feb 13 15:13:06.369806 containerd[1484]: time="2025-02-13T15:13:06.369420075Z" level=info msg="StartContainer for \"f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f\" returns successfully"
Feb 13 15:13:06.391215 containerd[1484]: time="2025-02-13T15:13:06.391153615Z" level=info msg="shim disconnected" id=f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f namespace=k8s.io
Feb 13 15:13:06.391578 containerd[1484]: time="2025-02-13T15:13:06.391413054Z" level=warning msg="cleaning up after shim disconnected" id=f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f namespace=k8s.io
Feb 13 15:13:06.391578 containerd[1484]: time="2025-02-13T15:13:06.391428934Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:13:06.879655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f76083cf5eca496bdfbd2dc81ce77735d678c1f64668d14461e39a50f120113f-rootfs.mount: Deactivated successfully.
Feb 13 15:13:07.303171 containerd[1484]: time="2025-02-13T15:13:07.302665380Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:13:07.318738 containerd[1484]: time="2025-02-13T15:13:07.318680149Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e\""
Feb 13 15:13:07.320089 containerd[1484]: time="2025-02-13T15:13:07.319297266Z" level=info msg="StartContainer for \"9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e\""
Feb 13 15:13:07.354898 systemd[1]: Started cri-containerd-9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e.scope - libcontainer container 9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e.
Feb 13 15:13:07.381675 systemd[1]: cri-containerd-9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e.scope: Deactivated successfully.
Feb 13 15:13:07.384231 containerd[1484]: time="2025-02-13T15:13:07.384194976Z" level=info msg="StartContainer for \"9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e\" returns successfully"
Feb 13 15:13:07.417779 containerd[1484]: time="2025-02-13T15:13:07.417716827Z" level=info msg="shim disconnected" id=9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e namespace=k8s.io
Feb 13 15:13:07.417779 containerd[1484]: time="2025-02-13T15:13:07.417770147Z" level=warning msg="cleaning up after shim disconnected" id=9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e namespace=k8s.io
Feb 13 15:13:07.417779 containerd[1484]: time="2025-02-13T15:13:07.417778947Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:13:07.879683 systemd[1]: run-containerd-runc-k8s.io-9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e-runc.yTKKzh.mount: Deactivated successfully.
Feb 13 15:13:07.879778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e9171016e3536d4ab470667ee7215b3ce8a70d4d136e76e8ba7959cb95f2e6e-rootfs.mount: Deactivated successfully.
Feb 13 15:13:08.117325 kubelet[2585]: E0213 15:13:08.117252    2585 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:13:08.307779 containerd[1484]: time="2025-02-13T15:13:08.307279100Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:13:08.335965 containerd[1484]: time="2025-02-13T15:13:08.335917257Z" level=info msg="CreateContainer within sandbox \"32be3b21a430020def695c45b74686dfd4160d9e6bedd535ff8da24241c70ac5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bc7ad0e5be9578b35709f6fdb725ffd252e0131dfce9d7b13cfdba9294af4dff\""
Feb 13 15:13:08.336631 containerd[1484]: time="2025-02-13T15:13:08.336534334Z" level=info msg="StartContainer for \"bc7ad0e5be9578b35709f6fdb725ffd252e0131dfce9d7b13cfdba9294af4dff\""
Feb 13 15:13:08.364351 systemd[1]: Started cri-containerd-bc7ad0e5be9578b35709f6fdb725ffd252e0131dfce9d7b13cfdba9294af4dff.scope - libcontainer container bc7ad0e5be9578b35709f6fdb725ffd252e0131dfce9d7b13cfdba9294af4dff.
Feb 13 15:13:08.407193 containerd[1484]: time="2025-02-13T15:13:08.406487192Z" level=info msg="StartContainer for \"bc7ad0e5be9578b35709f6fdb725ffd252e0131dfce9d7b13cfdba9294af4dff\" returns successfully"
Feb 13 15:13:08.678184 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:13:09.324230 kubelet[2585]: I0213 15:13:09.324167    2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wnv9n" podStartSLOduration=5.324150229 podStartE2EDuration="5.324150229s" podCreationTimestamp="2025-02-13 15:13:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:13:09.323665031 +0000 UTC m=+86.389645812" watchObservedRunningTime="2025-02-13 15:13:09.324150229 +0000 UTC m=+86.390131010"
Feb 13 15:13:11.593132 systemd-networkd[1405]: lxc_health: Link UP
Feb 13 15:13:11.596505 systemd-networkd[1405]: lxc_health: Gained carrier
Feb 13 15:13:13.075364 systemd-networkd[1405]: lxc_health: Gained IPv6LL
Feb 13 15:13:17.709642 sshd[4467]: Connection closed by 10.0.0.1 port 46682
Feb 13 15:13:17.710243 sshd-session[4460]: pam_unix(sshd:session): session closed for user core
Feb 13 15:13:17.713882 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 15:13:17.715243 systemd[1]: sshd@26-10.0.0.48:22-10.0.0.1:46682.service: Deactivated successfully.
Feb 13 15:13:17.717152 systemd-logind[1467]: Session 27 logged out. Waiting for processes to exit.
Feb 13 15:13:17.718272 systemd-logind[1467]: Removed session 27.