Feb 13 15:25:29.903874 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:25:29.903896 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:25:29.903905 kernel: KASLR enabled
Feb 13 15:25:29.903911 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:25:29.903917 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 
Feb 13 15:25:29.903922 kernel: random: crng init done
Feb 13 15:25:29.903929 kernel: secureboot: Secure boot disabled
Feb 13 15:25:29.903935 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:25:29.903941 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:25:29.903948 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS  BXPC     00000001      01000013)
Feb 13 15:25:29.903954 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.903959 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.903965 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.903971 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.903978 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.903986 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.903992 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.903999 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.904005 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:25:29.904011 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:25:29.904017 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:25:29.904035 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:25:29.904041 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 15:25:29.904047 kernel: Zone ranges:
Feb 13 15:25:29.904053 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:25:29.904061 kernel:   DMA32    empty
Feb 13 15:25:29.904067 kernel:   Normal   empty
Feb 13 15:25:29.904074 kernel: Movable zone start for each node
Feb 13 15:25:29.904080 kernel: Early memory node ranges
Feb 13 15:25:29.904086 kernel:   node   0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 15:25:29.904093 kernel:   node   0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 15:25:29.904099 kernel:   node   0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 15:25:29.904105 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:25:29.904111 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:25:29.904117 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:25:29.904123 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:25:29.904129 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:25:29.904137 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:25:29.904143 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:25:29.904149 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:25:29.904157 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:25:29.904164 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:25:29.904170 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:25:29.904178 kernel: psci: Trusted OS migration not required
Feb 13 15:25:29.904185 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:25:29.904191 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:25:29.904198 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:25:29.904204 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:25:29.904211 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Feb 13 15:25:29.904218 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:25:29.904224 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:25:29.904230 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:25:29.904237 kernel: CPU features: detected: Spectre-v4
Feb 13 15:25:29.904245 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:25:29.904251 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:25:29.904258 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:25:29.904264 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:25:29.904271 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:25:29.904277 kernel: alternatives: applying boot alternatives
Feb 13 15:25:29.904300 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:25:29.904308 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:25:29.904314 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:25:29.904321 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:25:29.904327 kernel: Fallback order for Node 0: 0 
Feb 13 15:25:29.904336 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Feb 13 15:25:29.904342 kernel: Policy zone: DMA
Feb 13 15:25:29.904348 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:25:29.904355 kernel: software IO TLB: area num 4.
Feb 13 15:25:29.904361 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:25:29.904368 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Feb 13 15:25:29.904375 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:25:29.904381 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:25:29.904388 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:25:29.904395 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:25:29.904402 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:25:29.904408 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:25:29.904417 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:25:29.904423 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:25:29.904430 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:25:29.904437 kernel: GICv3: 256 SPIs implemented
Feb 13 15:25:29.904443 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:25:29.904449 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:25:29.904456 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:25:29.904462 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:25:29.904469 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:25:29.904475 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:25:29.904482 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:25:29.904490 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:25:29.904496 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:25:29.904503 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:25:29.904509 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:25:29.904516 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:25:29.904523 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:25:29.904529 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:25:29.904536 kernel: arm-pv: using stolen time PV
Feb 13 15:25:29.904543 kernel: Console: colour dummy device 80x25
Feb 13 15:25:29.904549 kernel: ACPI: Core revision 20230628
Feb 13 15:25:29.904556 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:25:29.904564 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:25:29.904571 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:25:29.904578 kernel: landlock: Up and running.
Feb 13 15:25:29.904584 kernel: SELinux:  Initializing.
Feb 13 15:25:29.904591 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:25:29.904598 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:25:29.904604 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:25:29.904611 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:25:29.904618 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:25:29.904626 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:25:29.904633 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:25:29.904640 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:25:29.904647 kernel: Remapping and enabling EFI services.
Feb 13 15:25:29.904653 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:25:29.904660 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:25:29.904667 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:25:29.904674 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:25:29.904681 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:25:29.904689 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:25:29.904696 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:25:29.904708 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:25:29.904716 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:25:29.904724 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:25:29.904731 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:25:29.904738 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:25:29.904744 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:25:29.904752 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:25:29.904761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:25:29.904768 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:25:29.904776 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:25:29.904783 kernel: SMP: Total of 4 processors activated.
Feb 13 15:25:29.904790 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:25:29.904798 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:25:29.904805 kernel: CPU features: detected: Common not Private translations
Feb 13 15:25:29.904812 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:25:29.904821 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:25:29.904828 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:25:29.904836 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:25:29.904842 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:25:29.904854 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:25:29.904862 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:25:29.904869 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:25:29.904876 kernel: alternatives: applying system-wide alternatives
Feb 13 15:25:29.904883 kernel: devtmpfs: initialized
Feb 13 15:25:29.904890 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:25:29.904899 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:25:29.904906 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:25:29.904913 kernel: SMBIOS 3.0.0 present.
Feb 13 15:25:29.904920 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:25:29.904927 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:25:29.904934 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:25:29.904942 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:25:29.904949 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:25:29.904956 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:25:29.904965 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Feb 13 15:25:29.904972 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:25:29.904979 kernel: cpuidle: using governor menu
Feb 13 15:25:29.904986 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:25:29.904993 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:25:29.905000 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:25:29.905007 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:25:29.905014 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:25:29.905021 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:25:29.905029 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:25:29.905036 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:25:29.905043 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:25:29.905050 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:25:29.905057 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:25:29.905064 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:25:29.905071 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:25:29.905078 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:25:29.905085 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:25:29.905093 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:25:29.905100 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:25:29.905107 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:25:29.905114 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:25:29.905121 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:25:29.905128 kernel: ACPI: Interpreter enabled
Feb 13 15:25:29.905135 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:25:29.905142 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:25:29.905149 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:25:29.905157 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:25:29.905165 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:25:29.905369 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:25:29.905452 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:25:29.905518 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:25:29.905595 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:25:29.905659 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:25:29.905672 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Feb 13 15:25:29.905680 kernel: PCI host bridge to bus 0000:00
Feb 13 15:25:29.905751 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:25:29.905809 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 13 15:25:29.905877 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:25:29.905935 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:25:29.906012 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:25:29.906092 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:25:29.906159 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Feb 13 15:25:29.906229 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:25:29.906304 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:25:29.906373 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:25:29.906438 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:25:29.906504 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Feb 13 15:25:29.906566 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:25:29.906630 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 13 15:25:29.906687 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:25:29.906696 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:25:29.906703 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:25:29.906710 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:25:29.906717 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:25:29.906724 kernel: iommu: Default domain type: Translated
Feb 13 15:25:29.906734 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:25:29.906741 kernel: efivars: Registered efivars operations
Feb 13 15:25:29.906748 kernel: vgaarb: loaded
Feb 13 15:25:29.906755 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:25:29.906762 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:25:29.906769 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:25:29.906776 kernel: pnp: PnP ACPI init
Feb 13 15:25:29.906858 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:25:29.906872 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:25:29.906880 kernel: NET: Registered PF_INET protocol family
Feb 13 15:25:29.906887 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:25:29.906894 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:25:29.906902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:25:29.906909 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:25:29.906917 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:25:29.906924 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:25:29.906936 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:25:29.906945 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:25:29.906952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:25:29.906959 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:25:29.906966 kernel: kvm [1]: HYP mode not available
Feb 13 15:25:29.906973 kernel: Initialise system trusted keyrings
Feb 13 15:25:29.906980 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:25:29.906988 kernel: Key type asymmetric registered
Feb 13 15:25:29.906994 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:25:29.907002 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:25:29.907010 kernel: io scheduler mq-deadline registered
Feb 13 15:25:29.907018 kernel: io scheduler kyber registered
Feb 13 15:25:29.907025 kernel: io scheduler bfq registered
Feb 13 15:25:29.907032 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:25:29.907039 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:25:29.907047 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:25:29.907114 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:25:29.907124 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:25:29.907131 kernel: thunder_xcv, ver 1.0
Feb 13 15:25:29.907140 kernel: thunder_bgx, ver 1.0
Feb 13 15:25:29.907147 kernel: nicpf, ver 1.0
Feb 13 15:25:29.907153 kernel: nicvf, ver 1.0
Feb 13 15:25:29.907232 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:25:29.907306 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:25:29 UTC (1739460329)
Feb 13 15:25:29.907316 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:25:29.907324 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:25:29.907331 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:25:29.907341 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:25:29.907349 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:25:29.907356 kernel: Segment Routing with IPv6
Feb 13 15:25:29.907363 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:25:29.907371 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:25:29.907378 kernel: Key type dns_resolver registered
Feb 13 15:25:29.907385 kernel: registered taskstats version 1
Feb 13 15:25:29.907392 kernel: Loading compiled-in X.509 certificates
Feb 13 15:25:29.907400 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e'
Feb 13 15:25:29.907409 kernel: Key type .fscrypt registered
Feb 13 15:25:29.907416 kernel: Key type fscrypt-provisioning registered
Feb 13 15:25:29.907423 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:25:29.907430 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:25:29.907437 kernel: ima: No architecture policies found
Feb 13 15:25:29.907444 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:25:29.907451 kernel: clk: Disabling unused clocks
Feb 13 15:25:29.907458 kernel: Freeing unused kernel memory: 39936K
Feb 13 15:25:29.907465 kernel: Run /init as init process
Feb 13 15:25:29.907474 kernel:   with arguments:
Feb 13 15:25:29.907480 kernel:     /init
Feb 13 15:25:29.907487 kernel:   with environment:
Feb 13 15:25:29.907494 kernel:     HOME=/
Feb 13 15:25:29.907501 kernel:     TERM=linux
Feb 13 15:25:29.907508 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:25:29.907517 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:25:29.907526 systemd[1]: Detected virtualization kvm.
Feb 13 15:25:29.907535 systemd[1]: Detected architecture arm64.
Feb 13 15:25:29.907546 systemd[1]: Running in initrd.
Feb 13 15:25:29.907554 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:25:29.907561 systemd[1]: Hostname set to <localhost>.
Feb 13 15:25:29.907571 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:25:29.907584 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:25:29.907594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:25:29.907602 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:25:29.907612 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:25:29.907620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:25:29.907628 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:25:29.907636 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:25:29.907647 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:25:29.907655 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:25:29.907665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:25:29.907673 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:25:29.907681 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:25:29.907692 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:25:29.907703 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:25:29.907715 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:25:29.907723 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:25:29.907731 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:25:29.907739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:25:29.907748 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:25:29.907756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:25:29.907764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:25:29.907772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:25:29.907780 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:25:29.907787 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:25:29.907795 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:25:29.907803 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:25:29.907810 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:25:29.907820 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:25:29.907827 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:25:29.907835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:29.907847 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:25:29.907856 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:25:29.907863 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:25:29.907873 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:25:29.907899 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 15:25:29.907920 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:29.907928 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:25:29.907935 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:25:29.907944 systemd-journald[238]: Journal started
Feb 13 15:25:29.907967 systemd-journald[238]: Runtime Journal (/run/log/journal/083b74480e6542f3a5621a086ea96320) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:25:29.894309 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 15:25:29.911093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:29.912470 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 15:25:29.913301 kernel: Bridge firewalling registered
Feb 13 15:25:29.915335 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:25:29.916893 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:25:29.917251 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:25:29.921403 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:25:29.922890 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:25:29.925311 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:25:29.933023 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:29.935467 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:25:29.936482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:25:29.938096 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:25:29.941709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:25:29.949523 dracut-cmdline[272]: dracut-dracut-053
Feb 13 15:25:29.951960 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:25:29.965790 systemd-resolved[276]: Positive Trust Anchors:
Feb 13 15:25:29.965809 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:25:29.965839 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:25:29.970464 systemd-resolved[276]: Defaulting to hostname 'linux'.
Feb 13 15:25:29.971382 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:25:29.974097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:25:30.023334 kernel: SCSI subsystem initialized
Feb 13 15:25:30.026314 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:25:30.034310 kernel: iscsi: registered transport (tcp)
Feb 13 15:25:30.048439 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:25:30.048462 kernel: QLogic iSCSI HBA Driver
Feb 13 15:25:30.091345 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:25:30.102447 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:25:30.122042 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:25:30.122110 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:25:30.122122 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:25:30.170324 kernel: raid6: neonx8   gen() 15751 MB/s
Feb 13 15:25:30.187309 kernel: raid6: neonx4   gen() 15754 MB/s
Feb 13 15:25:30.204313 kernel: raid6: neonx2   gen() 13202 MB/s
Feb 13 15:25:30.221303 kernel: raid6: neonx1   gen() 10431 MB/s
Feb 13 15:25:30.238307 kernel: raid6: int64x8  gen()  6767 MB/s
Feb 13 15:25:30.255303 kernel: raid6: int64x4  gen()  7319 MB/s
Feb 13 15:25:30.272303 kernel: raid6: int64x2  gen()  6095 MB/s
Feb 13 15:25:30.289307 kernel: raid6: int64x1  gen()  5044 MB/s
Feb 13 15:25:30.289330 kernel: raid6: using algorithm neonx4 gen() 15754 MB/s
Feb 13 15:25:30.306334 kernel: raid6: .... xor() 12361 MB/s, rmw enabled
Feb 13 15:25:30.306382 kernel: raid6: using neon recovery algorithm
Feb 13 15:25:30.311302 kernel: xor: measuring software checksum speed
Feb 13 15:25:30.311321 kernel:    8regs           : 21607 MB/sec
Feb 13 15:25:30.311330 kernel:    32regs          : 20222 MB/sec
Feb 13 15:25:30.312684 kernel:    arm64_neon      : 27946 MB/sec
Feb 13 15:25:30.312697 kernel: xor: using function: arm64_neon (27946 MB/sec)
Feb 13 15:25:30.363324 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:25:30.373891 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:25:30.388462 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:25:30.399662 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Feb 13 15:25:30.403122 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:25:30.405810 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:25:30.420193 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Feb 13 15:25:30.445476 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:25:30.456438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:25:30.496311 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:25:30.507439 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:25:30.520325 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:25:30.522022 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:25:30.523450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:25:30.525264 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:25:30.538326 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:25:30.556941 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:25:30.557040 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:25:30.557059 kernel: GPT:9289727 != 19775487
Feb 13 15:25:30.557068 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:25:30.557077 kernel: GPT:9289727 != 19775487
Feb 13 15:25:30.557085 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:25:30.557094 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:25:30.538442 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:25:30.547620 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:25:30.554023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:25:30.554128 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:30.556659 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:30.557522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:25:30.557641 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:30.559713 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:30.567509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:30.576257 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (504)
Feb 13 15:25:30.576304 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (517)
Feb 13 15:25:30.583092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:30.589758 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:25:30.594299 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:25:30.598495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:25:30.602194 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:25:30.603320 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:25:30.617452 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:25:30.619471 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:30.623653 disk-uuid[549]: Primary Header is updated.
Feb 13 15:25:30.623653 disk-uuid[549]: Secondary Entries is updated.
Feb 13 15:25:30.623653 disk-uuid[549]: Secondary Header is updated.
Feb 13 15:25:30.626301 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:25:30.638314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:31.637148 disk-uuid[550]: The operation has completed successfully.
Feb 13 15:25:31.638391 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:25:31.663655 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:25:31.663751 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:25:31.684483 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:25:31.687905 sh[572]: Success
Feb 13 15:25:31.706320 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:25:31.742669 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:25:31.744226 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:25:31.745048 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:25:31.756824 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f
Feb 13 15:25:31.756873 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:25:31.756892 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:25:31.757696 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:25:31.758723 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:25:31.762045 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:25:31.763244 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:25:31.774462 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:25:31.775886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:25:31.785472 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:25:31.785518 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:25:31.786307 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:25:31.789301 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:25:31.797056 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:25:31.798579 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:25:31.805074 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:25:31.811426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:25:31.878621 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:25:31.893443 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:25:31.911659 ignition[667]: Ignition 2.20.0
Feb 13 15:25:31.911669 ignition[667]: Stage: fetch-offline
Feb 13 15:25:31.911708 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:25:31.911717 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:25:31.911931 ignition[667]: parsed url from cmdline: ""
Feb 13 15:25:31.911935 ignition[667]: no config URL provided
Feb 13 15:25:31.911939 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:25:31.911947 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:25:31.911972 ignition[667]: op(1): [started]  loading QEMU firmware config module
Feb 13 15:25:31.911977 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:25:31.918452 systemd-networkd[764]: lo: Link UP
Feb 13 15:25:31.918463 systemd-networkd[764]: lo: Gained carrier
Feb 13 15:25:31.919241 systemd-networkd[764]: Enumeration completed
Feb 13 15:25:31.919376 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:25:31.919638 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:25:31.919641 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:25:31.920386 systemd-networkd[764]: eth0: Link UP
Feb 13 15:25:31.924956 ignition[667]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:25:31.920389 systemd-networkd[764]: eth0: Gained carrier
Feb 13 15:25:31.920395 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:25:31.921620 systemd[1]: Reached target network.target - Network.
Feb 13 15:25:31.940335 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:25:31.968419 ignition[667]: parsing config with SHA512: 0bdcfd6734104c2593eec4c0eab6999c98b15b8a3547d1b53cccd93840ec0e2a2c16f2913541a4ab4769cc12d237e37c2bd64c314fb741f22b122d89a0f829fb
Feb 13 15:25:31.974201 unknown[667]: fetched base config from "system"
Feb 13 15:25:31.974214 unknown[667]: fetched user config from "qemu"
Feb 13 15:25:31.974727 ignition[667]: fetch-offline: fetch-offline passed
Feb 13 15:25:31.974811 ignition[667]: Ignition finished successfully
Feb 13 15:25:31.976797 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:25:31.978431 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:25:31.987457 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:25:31.997757 ignition[771]: Ignition 2.20.0
Feb 13 15:25:31.997767 ignition[771]: Stage: kargs
Feb 13 15:25:31.997940 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:25:31.997950 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:25:31.998868 ignition[771]: kargs: kargs passed
Feb 13 15:25:31.998914 ignition[771]: Ignition finished successfully
Feb 13 15:25:32.000964 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:25:32.002724 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:25:32.015839 ignition[780]: Ignition 2.20.0
Feb 13 15:25:32.015850 ignition[780]: Stage: disks
Feb 13 15:25:32.016011 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:25:32.016021 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:25:32.016948 ignition[780]: disks: disks passed
Feb 13 15:25:32.018530 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:25:32.016994 ignition[780]: Ignition finished successfully
Feb 13 15:25:32.019669 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:25:32.020818 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:25:32.022361 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:25:32.023619 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:25:32.025039 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:25:32.039467 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:25:32.057002 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:25:32.061492 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:25:32.074442 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:25:32.117318 kernel: EXT4-fs (vda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none.
Feb 13 15:25:32.117339 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:25:32.118616 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:25:32.134397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:25:32.136145 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:25:32.137315 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:25:32.137376 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:25:32.137449 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:25:32.144577 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800)
Feb 13 15:25:32.143917 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:25:32.148693 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:25:32.148711 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:25:32.148721 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:25:32.148970 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:25:32.151350 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:25:32.152891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:25:32.193630 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:25:32.198005 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:25:32.201589 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:25:32.205517 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:25:32.297902 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:25:32.310444 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:25:32.311966 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:25:32.317309 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:25:32.336325 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:25:32.338834 ignition[913]: INFO     : Ignition 2.20.0
Feb 13 15:25:32.338834 ignition[913]: INFO     : Stage: mount
Feb 13 15:25:32.340281 ignition[913]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:25:32.340281 ignition[913]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:25:32.340281 ignition[913]: INFO     : mount: mount passed
Feb 13 15:25:32.340281 ignition[913]: INFO     : Ignition finished successfully
Feb 13 15:25:32.341609 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:25:32.352415 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:25:32.756066 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:25:32.771485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:25:32.777298 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927)
Feb 13 15:25:32.779020 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:25:32.779037 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:25:32.779047 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:25:32.783309 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:25:32.784148 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:25:32.803818 ignition[944]: INFO     : Ignition 2.20.0
Feb 13 15:25:32.803818 ignition[944]: INFO     : Stage: files
Feb 13 15:25:32.805138 ignition[944]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:25:32.805138 ignition[944]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:25:32.805138 ignition[944]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:25:32.807856 ignition[944]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:25:32.807856 ignition[944]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:25:32.810217 ignition[944]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:25:32.810217 ignition[944]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:25:32.810217 ignition[944]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:25:32.809417 unknown[944]: wrote ssh authorized keys file for user: core
Feb 13 15:25:32.814072 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:25:32.814072 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 13 15:25:32.867101 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:25:33.355951 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 13 15:25:33.357438 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:25:33.357438 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Feb 13 15:25:33.663731 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:25:33.737166 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:25:33.737166 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:25:33.740995 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Feb 13 15:25:33.767155 systemd-networkd[764]: eth0: Gained IPv6LL
Feb 13 15:25:33.986001 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 15:25:34.171532 ignition[944]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Feb 13 15:25:34.171532 ignition[944]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Feb 13 15:25:34.174512 ignition[944]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:25:34.174512 ignition[944]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:25:34.174512 ignition[944]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 15:25:34.174512 ignition[944]: INFO     : files: op(e): [started]  processing unit "coreos-metadata.service"
Feb 13 15:25:34.174512 ignition[944]: INFO     : files: op(e): op(f): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:25:34.174512 ignition[944]: INFO     : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:25:34.174512 ignition[944]: INFO     : files: op(e): [finished] processing unit "coreos-metadata.service"
Feb 13 15:25:34.174512 ignition[944]: INFO     : files: op(10): [started]  setting preset to disabled for "coreos-metadata.service"
Feb 13 15:25:34.194323 ignition[944]: INFO     : files: op(10): op(11): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:25:34.197790 ignition[944]: INFO     : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:25:34.199975 ignition[944]: INFO     : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:25:34.199975 ignition[944]: INFO     : files: op(12): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 15:25:34.199975 ignition[944]: INFO     : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:25:34.199975 ignition[944]: INFO     : files: createResultFile: createFiles: op(13): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:25:34.199975 ignition[944]: INFO     : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:25:34.199975 ignition[944]: INFO     : files: files passed
Feb 13 15:25:34.199975 ignition[944]: INFO     : Ignition finished successfully
Feb 13 15:25:34.200395 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:25:34.210486 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:25:34.213465 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:25:34.214887 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:25:34.216321 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:25:34.220669 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:25:34.223870 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:25:34.223870 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:25:34.226406 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:25:34.226547 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:25:34.228956 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:25:34.241461 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:25:34.261989 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:25:34.262124 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:25:34.263932 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:25:34.265145 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:25:34.266608 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:25:34.267384 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:25:34.283805 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:25:34.292465 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:25:34.301614 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:25:34.302903 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:25:34.304634 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:25:34.306177 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:25:34.306311 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:25:34.308573 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:25:34.310280 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:25:34.311776 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:25:34.313325 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:25:34.315068 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:25:34.317005 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:25:34.318778 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:25:34.320453 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:25:34.322154 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:25:34.323672 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:25:34.325120 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:25:34.325249 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:25:34.327507 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:25:34.329152 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:25:34.330861 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:25:34.330970 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:25:34.332734 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:25:34.332852 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:25:34.335468 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:25:34.335571 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:25:34.337388 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:25:34.338807 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:25:34.338911 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:25:34.340531 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:25:34.342091 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:25:34.343425 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:25:34.343514 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:25:34.345051 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:25:34.345133 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:25:34.347115 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:25:34.347219 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:25:34.348673 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:25:34.348768 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:25:34.361450 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:25:34.362130 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:25:34.362249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:25:34.364699 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:25:34.365419 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:25:34.365526 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:25:34.367399 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:25:34.367500 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:25:34.373181 ignition[999]: INFO     : Ignition 2.20.0
Feb 13 15:25:34.373181 ignition[999]: INFO     : Stage: umount
Feb 13 15:25:34.373181 ignition[999]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:25:34.373181 ignition[999]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:25:34.372353 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:25:34.382481 ignition[999]: INFO     : umount: umount passed
Feb 13 15:25:34.382481 ignition[999]: INFO     : Ignition finished successfully
Feb 13 15:25:34.372437 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:25:34.375006 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:25:34.375136 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:25:34.376825 systemd[1]: Stopped target network.target - Network.
Feb 13 15:25:34.379357 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:25:34.379422 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:25:34.381213 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:25:34.381250 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:25:34.383136 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:25:34.383177 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:25:34.384609 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:25:34.384651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:25:34.386252 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:25:34.388063 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:25:34.390363 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:25:34.397368 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:25:34.398210 systemd-networkd[764]: eth0: DHCPv6 lease lost
Feb 13 15:25:34.398341 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:25:34.400685 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:25:34.400793 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:25:34.403029 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:25:34.403074 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:25:34.413408 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:25:34.414192 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:25:34.414264 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:25:34.416157 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:25:34.416200 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:25:34.417771 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:25:34.417810 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:25:34.419755 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:25:34.419808 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:25:34.421645 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:25:34.438586 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:25:34.438759 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:25:34.440623 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:25:34.440765 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:25:34.442748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:25:34.442904 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:25:34.444789 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:25:34.444830 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:25:34.446189 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:25:34.446250 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:25:34.448443 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:25:34.448487 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:25:34.451195 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:25:34.451250 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:34.457476 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:25:34.458621 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:25:34.458681 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:25:34.460717 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:25:34.460769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:34.462896 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:25:34.462995 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:25:34.464708 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:25:34.464779 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:25:34.468277 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:25:34.470090 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:25:34.470158 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:25:34.472604 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:25:34.482035 systemd[1]: Switching root.
Feb 13 15:25:34.506207 systemd-journald[238]: Journal stopped
Feb 13 15:25:35.192031 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:25:35.192089 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:25:35.192102 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:25:35.192112 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:25:35.192121 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:25:35.192136 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:25:35.192151 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:25:35.192160 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:25:35.192170 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:25:35.192180 kernel: audit: type=1403 audit(1739460334.656:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:25:35.192192 systemd[1]: Successfully loaded SELinux policy in 31.247ms.
Feb 13 15:25:35.192212 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.324ms.
Feb 13 15:25:35.192225 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:25:35.192236 systemd[1]: Detected virtualization kvm.
Feb 13 15:25:35.192247 systemd[1]: Detected architecture arm64.
Feb 13 15:25:35.192259 systemd[1]: Detected first boot.
Feb 13 15:25:35.192269 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:25:35.192279 zram_generator::config[1043]: No configuration found.
Feb 13 15:25:35.192313 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:25:35.192328 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:25:35.192339 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:25:35.192350 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:25:35.192361 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:25:35.192374 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:25:35.192384 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:25:35.192395 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:25:35.192406 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:25:35.192418 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:25:35.192429 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:25:35.192439 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:25:35.192449 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:25:35.192460 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:25:35.192472 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:25:35.192482 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:25:35.192493 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:25:35.192504 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:25:35.192516 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:25:35.192526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:25:35.192538 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:25:35.192548 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:25:35.192561 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:25:35.192572 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:25:35.192583 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:25:35.192593 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:25:35.192604 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:25:35.192616 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:25:35.192627 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:25:35.192637 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:25:35.192664 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:25:35.192676 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:25:35.192687 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:25:35.192698 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:25:35.192708 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:25:35.192718 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:25:35.192729 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:25:35.192740 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:25:35.192750 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:25:35.192762 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:25:35.192773 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:25:35.192783 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:25:35.192794 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:25:35.192805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:25:35.192822 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:25:35.192835 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:25:35.192846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:25:35.192857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:25:35.192869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:25:35.192880 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:25:35.192891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:25:35.192903 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:25:35.192913 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:25:35.192924 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:25:35.192934 kernel: fuse: init (API version 7.39)
Feb 13 15:25:35.192943 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:25:35.192955 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:25:35.192966 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:25:35.192976 kernel: loop: module loaded
Feb 13 15:25:35.192986 kernel: ACPI: bus type drm_connector registered
Feb 13 15:25:35.192996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:25:35.193007 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:25:35.193018 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:25:35.193028 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:25:35.193057 systemd-journald[1117]: Collecting audit messages is disabled.
Feb 13 15:25:35.193085 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:25:35.193096 systemd[1]: Stopped verity-setup.service.
Feb 13 15:25:35.193107 systemd-journald[1117]: Journal started
Feb 13 15:25:35.193133 systemd-journald[1117]: Runtime Journal (/run/log/journal/083b74480e6542f3a5621a086ea96320) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:25:35.016859 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:25:35.033327 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:25:35.033666 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:25:35.195759 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:25:35.196492 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:25:35.197451 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:25:35.198434 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:25:35.199326 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:25:35.200251 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:25:35.201249 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:25:35.203317 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:25:35.204626 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:25:35.205950 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:25:35.206108 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:25:35.207387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:25:35.207523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:25:35.208714 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:25:35.208874 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:25:35.210022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:25:35.210151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:25:35.211565 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:25:35.211709 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:25:35.213015 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:25:35.213152 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:25:35.214440 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:25:35.215596 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:25:35.216991 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:25:35.228973 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:25:35.235402 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:25:35.237345 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:25:35.238255 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:25:35.238306 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:25:35.239994 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:25:35.242034 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:25:35.243959 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:25:35.244925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:25:35.246402 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:25:35.250470 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:25:35.251467 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:25:35.254490 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:25:35.255698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:25:35.259315 systemd-journald[1117]: Time spent on flushing to /var/log/journal/083b74480e6542f3a5621a086ea96320 is 22.846ms for 858 entries.
Feb 13 15:25:35.259315 systemd-journald[1117]: System Journal (/var/log/journal/083b74480e6542f3a5621a086ea96320) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:25:35.299406 systemd-journald[1117]: Received client request to flush runtime journal.
Feb 13 15:25:35.299478 kernel: loop0: detected capacity change from 0 to 194512
Feb 13 15:25:35.299491 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:25:35.259482 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:25:35.262882 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:25:35.268488 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:25:35.274035 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:25:35.275924 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:25:35.277542 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:25:35.281887 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:25:35.283706 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:25:35.288753 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:25:35.295529 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:25:35.301508 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:25:35.303002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:25:35.304437 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:25:35.319161 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:25:35.327369 kernel: loop1: detected capacity change from 0 to 113552
Feb 13 15:25:35.328578 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:25:35.330342 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:25:35.330988 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:25:35.336620 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:25:35.352431 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Feb 13 15:25:35.352449 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Feb 13 15:25:35.356985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:25:35.364787 kernel: loop2: detected capacity change from 0 to 116784
Feb 13 15:25:35.399319 kernel: loop3: detected capacity change from 0 to 194512
Feb 13 15:25:35.408377 kernel: loop4: detected capacity change from 0 to 113552
Feb 13 15:25:35.413398 kernel: loop5: detected capacity change from 0 to 116784
Feb 13 15:25:35.416981 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:25:35.417360 (sd-merge)[1182]: Merged extensions into '/usr'.
Feb 13 15:25:35.421427 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:25:35.421532 systemd[1]: Reloading...
Feb 13 15:25:35.468642 zram_generator::config[1208]: No configuration found.
Feb 13 15:25:35.540583 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:25:35.577806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:25:35.613760 systemd[1]: Reloading finished in 191 ms.
Feb 13 15:25:35.646717 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:25:35.648011 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:25:35.663484 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:25:35.665139 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:25:35.679263 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:25:35.679278 systemd[1]: Reloading...
Feb 13 15:25:35.685697 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:25:35.686214 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:25:35.687034 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:25:35.687346 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Feb 13 15:25:35.687471 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Feb 13 15:25:35.694938 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:25:35.695074 systemd-tmpfiles[1245]: Skipping /boot
Feb 13 15:25:35.703181 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:25:35.703334 systemd-tmpfiles[1245]: Skipping /boot
Feb 13 15:25:35.729336 zram_generator::config[1275]: No configuration found.
Feb 13 15:25:35.809332 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:25:35.844552 systemd[1]: Reloading finished in 164 ms.
Feb 13 15:25:35.860033 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:25:35.876722 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:25:35.884785 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:25:35.887112 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:25:35.889410 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:25:35.895609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:25:35.899675 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:25:35.904561 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:25:35.907998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:25:35.909453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:25:35.914728 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:25:35.919235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:25:35.920184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:25:35.925557 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:25:35.927274 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:25:35.928692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:25:35.928877 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:25:35.930270 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:25:35.930409 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:25:35.932010 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:25:35.932141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:25:35.939640 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Feb 13 15:25:35.939884 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:25:35.950567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:25:35.955516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:25:35.960252 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:25:35.961290 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:25:35.963551 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:25:35.964988 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:25:35.966536 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:25:35.967849 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:25:35.969450 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:25:35.974769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:25:35.974966 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:25:35.975993 augenrules[1360]: No rules
Feb 13 15:25:35.976616 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:25:35.978315 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:25:35.980028 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:25:35.980185 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:25:35.981418 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:25:35.981535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:25:35.998570 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:25:35.999905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:25:36.003506 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:25:36.007485 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:25:36.014498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:25:36.017271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:25:36.018133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:25:36.019717 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:25:36.022370 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:25:36.022927 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:25:36.024243 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:25:36.025312 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1355)
Feb 13 15:25:36.026107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:25:36.026247 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:25:36.032858 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:25:36.033019 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:25:36.038116 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:25:36.038268 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:25:36.039946 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:25:36.044077 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:25:36.049529 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:25:36.059235 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:25:36.059443 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:25:36.060659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:25:36.063428 augenrules[1380]: /sbin/augenrules: No change
Feb 13 15:25:36.078396 augenrules[1411]: No rules
Feb 13 15:25:36.081100 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:25:36.081321 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:25:36.083682 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:25:36.097178 systemd-resolved[1312]: Positive Trust Anchors:
Feb 13 15:25:36.097192 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:25:36.097227 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:25:36.097506 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:25:36.106631 systemd-resolved[1312]: Defaulting to hostname 'linux'.
Feb 13 15:25:36.108524 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:25:36.109662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:25:36.125415 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:25:36.126899 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:25:36.128230 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:25:36.132500 systemd-networkd[1392]: lo: Link UP
Feb 13 15:25:36.132754 systemd-networkd[1392]: lo: Gained carrier
Feb 13 15:25:36.134036 systemd-networkd[1392]: Enumeration completed
Feb 13 15:25:36.134114 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:25:36.135097 systemd[1]: Reached target network.target - Network.
Feb 13 15:25:36.135142 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:25:36.135145 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:25:36.135984 systemd-networkd[1392]: eth0: Link UP
Feb 13 15:25:36.135990 systemd-networkd[1392]: eth0: Gained carrier
Feb 13 15:25:36.136005 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:25:36.146600 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:25:36.148390 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:25:36.149890 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection.
Feb 13 15:25:36.150758 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:25:36.150903 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-02-13 15:25:36.414359 UTC.
Feb 13 15:25:36.153107 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:36.166923 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:25:36.169584 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:25:36.189514 lvm[1432]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:25:36.207392 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:36.229381 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:25:36.230596 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:25:36.231515 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:25:36.232383 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:25:36.233333 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:25:36.234468 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:25:36.235365 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:25:36.236364 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:25:36.237324 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:25:36.237355 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:25:36.238007 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:25:36.239640 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:25:36.241859 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:25:36.253238 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:25:36.255781 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:25:36.257361 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:25:36.258270 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:25:36.259004 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:25:36.259788 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:25:36.259824 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:25:36.260774 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:25:36.262628 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:25:36.264436 lvm[1440]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:25:36.267080 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:25:36.270505 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:25:36.274452 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:25:36.275458 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:25:36.277351 jq[1443]: false
Feb 13 15:25:36.277623 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:25:36.282466 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:25:36.287275 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found loop3
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found loop4
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found loop5
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found vda
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found vda1
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found vda2
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found vda3
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found usr
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found vda4
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found vda6
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found vda7
Feb 13 15:25:36.292587 extend-filesystems[1444]: Found vda9
Feb 13 15:25:36.292587 extend-filesystems[1444]: Checking size of /dev/vda9
Feb 13 15:25:36.305485 dbus-daemon[1442]: [system] SELinux support is enabled
Feb 13 15:25:36.293496 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:25:36.295623 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:25:36.296083 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:25:36.297567 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:25:36.302432 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:25:36.304063 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:25:36.306037 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:25:36.316325 extend-filesystems[1444]: Resized partition /dev/vda9
Feb 13 15:25:36.318078 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:25:36.318654 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:25:36.319079 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:25:36.319260 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:25:36.319936 jq[1461]: true
Feb 13 15:25:36.324914 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:25:36.325171 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:25:36.345311 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1357)
Feb 13 15:25:36.345395 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:25:36.341446 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:25:36.353607 tar[1467]: linux-arm64/helm
Feb 13 15:25:36.341472 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:25:36.342475 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:25:36.342494 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:25:36.350489 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:25:36.360349 jq[1468]: true
Feb 13 15:25:36.362819 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:25:36.363019 systemd-logind[1451]: New seat seat0.
Feb 13 15:25:36.363912 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:25:36.369790 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:25:36.405313 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:25:36.405724 update_engine[1455]: I20250213 15:25:36.405565  1455 main.cc:92] Flatcar Update Engine starting
Feb 13 15:25:36.410879 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:25:36.412344 update_engine[1455]: I20250213 15:25:36.412231  1455 update_check_scheduler.cc:74] Next update check in 9m31s
Feb 13 15:25:36.421584 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:25:36.427975 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:25:36.427975 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:25:36.427975 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:25:36.436313 extend-filesystems[1444]: Resized filesystem in /dev/vda9
Feb 13 15:25:36.437563 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:25:36.429417 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:25:36.429580 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:25:36.440345 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:25:36.442624 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:25:36.502556 locksmithd[1496]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:25:36.511974 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:25:36.534775 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:25:36.549614 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:25:36.556433 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:25:36.558708 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:25:36.567577 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:25:36.575424 containerd[1471]: time="2025-02-13T15:25:36.575212120Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:25:36.581372 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:25:36.595660 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:25:36.598457 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 15:25:36.601456 containerd[1471]: time="2025-02-13T15:25:36.601209480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:25:36.600006 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:25:36.603079 containerd[1471]: time="2025-02-13T15:25:36.603015840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603079 containerd[1471]: time="2025-02-13T15:25:36.603057080Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:25:36.603170 containerd[1471]: time="2025-02-13T15:25:36.603100440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:25:36.603297 containerd[1471]: time="2025-02-13T15:25:36.603260760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:25:36.603337 containerd[1471]: time="2025-02-13T15:25:36.603302760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603390 containerd[1471]: time="2025-02-13T15:25:36.603372280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603412 containerd[1471]: time="2025-02-13T15:25:36.603390400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603586 containerd[1471]: time="2025-02-13T15:25:36.603559240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603610 containerd[1471]: time="2025-02-13T15:25:36.603583480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603610 containerd[1471]: time="2025-02-13T15:25:36.603599840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603644 containerd[1471]: time="2025-02-13T15:25:36.603610360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603707 containerd[1471]: time="2025-02-13T15:25:36.603690680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:25:36.603961 containerd[1471]: time="2025-02-13T15:25:36.603934720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:25:36.604061 containerd[1471]: time="2025-02-13T15:25:36.604045000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:25:36.604087 containerd[1471]: time="2025-02-13T15:25:36.604062880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:25:36.604155 containerd[1471]: time="2025-02-13T15:25:36.604141760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:25:36.604201 containerd[1471]: time="2025-02-13T15:25:36.604190200Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:25:36.608942 containerd[1471]: time="2025-02-13T15:25:36.608899240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:25:36.609009 containerd[1471]: time="2025-02-13T15:25:36.608956880Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:25:36.609009 containerd[1471]: time="2025-02-13T15:25:36.608979320Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:25:36.609009 containerd[1471]: time="2025-02-13T15:25:36.608995200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:25:36.609081 containerd[1471]: time="2025-02-13T15:25:36.609010240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:25:36.609189 containerd[1471]: time="2025-02-13T15:25:36.609156360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609451480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609599080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609615200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609631600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609646120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609660800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609674400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609688680Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609703120Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609716320Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609728640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609740600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609764720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610341 containerd[1471]: time="2025-02-13T15:25:36.609778200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609790240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609815760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609833240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609846400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609858080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609870320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609883640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609898280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609909960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609922920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609937200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609952840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609973920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609987360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610644 containerd[1471]: time="2025-02-13T15:25:36.609999040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:25:36.610900 containerd[1471]: time="2025-02-13T15:25:36.610343360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:25:36.610900 containerd[1471]: time="2025-02-13T15:25:36.610364240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:25:36.610900 containerd[1471]: time="2025-02-13T15:25:36.610375480Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:25:36.610900 containerd[1471]: time="2025-02-13T15:25:36.610387000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:25:36.610900 containerd[1471]: time="2025-02-13T15:25:36.610396560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.610900 containerd[1471]: time="2025-02-13T15:25:36.610423800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:25:36.610900 containerd[1471]: time="2025-02-13T15:25:36.610434840Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:25:36.610900 containerd[1471]: time="2025-02-13T15:25:36.610445920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:25:36.611038 containerd[1471]: time="2025-02-13T15:25:36.610876920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:25:36.611038 containerd[1471]: time="2025-02-13T15:25:36.610924960Z" level=info msg="Connect containerd service"
Feb 13 15:25:36.611038 containerd[1471]: time="2025-02-13T15:25:36.610961480Z" level=info msg="using legacy CRI server"
Feb 13 15:25:36.611038 containerd[1471]: time="2025-02-13T15:25:36.610968600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:25:36.611370 containerd[1471]: time="2025-02-13T15:25:36.611341920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:25:36.612184 containerd[1471]: time="2025-02-13T15:25:36.612140040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:25:36.613001 containerd[1471]: time="2025-02-13T15:25:36.612518840Z" level=info msg="Start subscribing containerd event"
Feb 13 15:25:36.613001 containerd[1471]: time="2025-02-13T15:25:36.612583120Z" level=info msg="Start recovering state"
Feb 13 15:25:36.613001 containerd[1471]: time="2025-02-13T15:25:36.612661800Z" level=info msg="Start event monitor"
Feb 13 15:25:36.613001 containerd[1471]: time="2025-02-13T15:25:36.612674680Z" level=info msg="Start snapshots syncer"
Feb 13 15:25:36.613001 containerd[1471]: time="2025-02-13T15:25:36.612684400Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:25:36.613001 containerd[1471]: time="2025-02-13T15:25:36.612691760Z" level=info msg="Start streaming server"
Feb 13 15:25:36.613159 containerd[1471]: time="2025-02-13T15:25:36.613049920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:25:36.613159 containerd[1471]: time="2025-02-13T15:25:36.613098640Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:25:36.613159 containerd[1471]: time="2025-02-13T15:25:36.613148520Z" level=info msg="containerd successfully booted in 0.039628s"
Feb 13 15:25:36.613326 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:25:36.743212 tar[1467]: linux-arm64/LICENSE
Feb 13 15:25:36.743457 tar[1467]: linux-arm64/README.md
Feb 13 15:25:36.756064 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:25:37.801462 systemd-networkd[1392]: eth0: Gained IPv6LL
Feb 13 15:25:37.804394 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:25:37.805893 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:25:37.822577 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:25:37.824962 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:25:37.827064 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:25:37.843093 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:25:37.843271 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:25:37.844802 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:25:37.846711 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:25:38.313273 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:25:38.314663 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:25:38.315708 systemd[1]: Startup finished in 538ms (kernel) + 4.958s (initrd) + 3.691s (userspace) = 9.188s.
Feb 13 15:25:38.318446 (kubelet)[1557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:25:38.327833 agetty[1527]: failed to open credentials directory
Feb 13 15:25:38.327883 agetty[1528]: failed to open credentials directory
Feb 13 15:25:38.819867 kubelet[1557]: E0213 15:25:38.819786    1557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:25:38.822978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:25:38.823135 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:25:42.420229 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:25:42.421454 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:38856.service - OpenSSH per-connection server daemon (10.0.0.1:38856).
Feb 13 15:25:42.490686 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 38856 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:25:42.492568 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:25:42.501178 systemd-logind[1451]: New session 1 of user core.
Feb 13 15:25:42.502222 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:25:42.513549 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:25:42.522684 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:25:42.525626 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:25:42.532134 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:25:42.628447 systemd[1575]: Queued start job for default target default.target.
Feb 13 15:25:42.638264 systemd[1575]: Created slice app.slice - User Application Slice.
Feb 13 15:25:42.638330 systemd[1575]: Reached target paths.target - Paths.
Feb 13 15:25:42.638344 systemd[1575]: Reached target timers.target - Timers.
Feb 13 15:25:42.639535 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:25:42.649689 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:25:42.649755 systemd[1575]: Reached target sockets.target - Sockets.
Feb 13 15:25:42.649767 systemd[1575]: Reached target basic.target - Basic System.
Feb 13 15:25:42.649804 systemd[1575]: Reached target default.target - Main User Target.
Feb 13 15:25:42.649830 systemd[1575]: Startup finished in 111ms.
Feb 13 15:25:42.650082 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:25:42.659492 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:25:42.724935 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:38868.service - OpenSSH per-connection server daemon (10.0.0.1:38868).
Feb 13 15:25:42.769803 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 38868 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:25:42.771002 sshd-session[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:25:42.774610 systemd-logind[1451]: New session 2 of user core.
Feb 13 15:25:42.778467 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:25:42.830545 sshd[1588]: Connection closed by 10.0.0.1 port 38868
Feb 13 15:25:42.830998 sshd-session[1586]: pam_unix(sshd:session): session closed for user core
Feb 13 15:25:42.841060 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:38868.service: Deactivated successfully.
Feb 13 15:25:42.846166 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:25:42.848416 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:25:42.855629 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:38884.service - OpenSSH per-connection server daemon (10.0.0.1:38884).
Feb 13 15:25:42.857206 systemd-logind[1451]: Removed session 2.
Feb 13 15:25:42.897463 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 38884 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:25:42.898714 sshd-session[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:25:42.907370 systemd-logind[1451]: New session 3 of user core.
Feb 13 15:25:42.917490 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:25:42.970071 sshd[1595]: Connection closed by 10.0.0.1 port 38884
Feb 13 15:25:42.970526 sshd-session[1593]: pam_unix(sshd:session): session closed for user core
Feb 13 15:25:42.983401 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:38884.service: Deactivated successfully.
Feb 13 15:25:42.985002 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:25:42.987454 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:25:42.987874 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:38896.service - OpenSSH per-connection server daemon (10.0.0.1:38896).
Feb 13 15:25:42.988950 systemd-logind[1451]: Removed session 3.
Feb 13 15:25:43.039995 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 38896 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:25:43.041225 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:25:43.047086 systemd-logind[1451]: New session 4 of user core.
Feb 13 15:25:43.058443 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:25:43.111817 sshd[1602]: Connection closed by 10.0.0.1 port 38896
Feb 13 15:25:43.112310 sshd-session[1600]: pam_unix(sshd:session): session closed for user core
Feb 13 15:25:43.129696 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:38896.service: Deactivated successfully.
Feb 13 15:25:43.131026 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:25:43.132225 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:25:43.133356 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:38904.service - OpenSSH per-connection server daemon (10.0.0.1:38904).
Feb 13 15:25:43.134143 systemd-logind[1451]: Removed session 4.
Feb 13 15:25:43.187305 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 38904 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:25:43.188634 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:25:43.192476 systemd-logind[1451]: New session 5 of user core.
Feb 13 15:25:43.203432 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:25:43.267871 sudo[1610]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:25:43.268168 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:25:43.280144 sudo[1610]: pam_unix(sudo:session): session closed for user root
Feb 13 15:25:43.285493 sshd[1609]: Connection closed by 10.0.0.1 port 38904
Feb 13 15:25:43.285945 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Feb 13 15:25:43.301738 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:38904.service: Deactivated successfully.
Feb 13 15:25:43.303425 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:25:43.304826 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:25:43.306104 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:38908.service - OpenSSH per-connection server daemon (10.0.0.1:38908).
Feb 13 15:25:43.306852 systemd-logind[1451]: Removed session 5.
Feb 13 15:25:43.350777 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 38908 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:25:43.352230 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:25:43.355940 systemd-logind[1451]: New session 6 of user core.
Feb 13 15:25:43.364442 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:25:43.416454 sudo[1619]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:25:43.416726 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:25:43.419562 sudo[1619]: pam_unix(sudo:session): session closed for user root
Feb 13 15:25:43.423762 sudo[1618]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:25:43.424018 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:25:43.444661 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:25:43.466311 augenrules[1641]: No rules
Feb 13 15:25:43.467489 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:25:43.468928 sudo[1618]: pam_unix(sudo:session): session closed for user root
Feb 13 15:25:43.467683 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:25:43.472092 sshd[1617]: Connection closed by 10.0.0.1 port 38908
Feb 13 15:25:43.471985 sshd-session[1615]: pam_unix(sshd:session): session closed for user core
Feb 13 15:25:43.481556 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:38908.service: Deactivated successfully.
Feb 13 15:25:43.482955 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:25:43.484169 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:25:43.485352 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:38918.service - OpenSSH per-connection server daemon (10.0.0.1:38918).
Feb 13 15:25:43.486088 systemd-logind[1451]: Removed session 6.
Feb 13 15:25:43.531458 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 38918 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:25:43.532403 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:25:43.536077 systemd-logind[1451]: New session 7 of user core.
Feb 13 15:25:43.546425 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:25:43.597427 sudo[1652]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:25:43.597694 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:25:43.941531 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:25:43.941669 (dockerd)[1673]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:25:44.185195 dockerd[1673]: time="2025-02-13T15:25:44.185137180Z" level=info msg="Starting up"
Feb 13 15:25:44.335711 dockerd[1673]: time="2025-02-13T15:25:44.335239042Z" level=info msg="Loading containers: start."
Feb 13 15:25:44.486319 kernel: Initializing XFRM netlink socket
Feb 13 15:25:44.554621 systemd-networkd[1392]: docker0: Link UP
Feb 13 15:25:44.583539 dockerd[1673]: time="2025-02-13T15:25:44.583495130Z" level=info msg="Loading containers: done."
Feb 13 15:25:44.596428 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1431897623-merged.mount: Deactivated successfully.
Feb 13 15:25:44.597903 dockerd[1673]: time="2025-02-13T15:25:44.597844407Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:25:44.597978 dockerd[1673]: time="2025-02-13T15:25:44.597941488Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 15:25:44.598145 dockerd[1673]: time="2025-02-13T15:25:44.598113850Z" level=info msg="Daemon has completed initialization"
Feb 13 15:25:44.627731 dockerd[1673]: time="2025-02-13T15:25:44.627664533Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:25:44.627910 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:25:45.312037 containerd[1471]: time="2025-02-13T15:25:45.311987822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 15:25:45.982309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873905896.mount: Deactivated successfully.
Feb 13 15:25:46.990557 containerd[1471]: time="2025-02-13T15:25:46.990501744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:46.990951 containerd[1471]: time="2025-02-13T15:25:46.990864463Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=32205863"
Feb 13 15:25:46.991796 containerd[1471]: time="2025-02-13T15:25:46.991765787Z" level=info msg="ImageCreate event name:\"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:46.994438 containerd[1471]: time="2025-02-13T15:25:46.994404722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:46.995940 containerd[1471]: time="2025-02-13T15:25:46.995897935Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"32202661\" in 1.683869509s"
Feb 13 15:25:46.995981 containerd[1471]: time="2025-02-13T15:25:46.995939987Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:c136612236eb39fcac4abea395de985f019cf87f72cc1afd828fb78de88a649f\""
Feb 13 15:25:47.015095 containerd[1471]: time="2025-02-13T15:25:47.015062433Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 15:25:48.463768 containerd[1471]: time="2025-02-13T15:25:48.463719025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:48.464740 containerd[1471]: time="2025-02-13T15:25:48.464495593Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=29383093"
Feb 13 15:25:48.465580 containerd[1471]: time="2025-02-13T15:25:48.465554567Z" level=info msg="ImageCreate event name:\"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:48.467989 containerd[1471]: time="2025-02-13T15:25:48.467935697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:48.469092 containerd[1471]: time="2025-02-13T15:25:48.469047026Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"30786820\" in 1.453949017s"
Feb 13 15:25:48.469092 containerd[1471]: time="2025-02-13T15:25:48.469079067Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:582085ec6cd04751293bebad40e35d6b2066b81f6e5868a9db60b8127ca7921d\""
Feb 13 15:25:48.488976 containerd[1471]: time="2025-02-13T15:25:48.488940950Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:25:49.073514 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:25:49.084472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:25:49.177092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:25:49.181005 (kubelet)[1956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:25:49.302529 kubelet[1956]: E0213 15:25:49.302421    1956 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:25:49.306313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:25:49.306638 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:25:49.479707 containerd[1471]: time="2025-02-13T15:25:49.479416223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:49.480656 containerd[1471]: time="2025-02-13T15:25:49.480413923Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=15766982"
Feb 13 15:25:49.481315 containerd[1471]: time="2025-02-13T15:25:49.481243594Z" level=info msg="ImageCreate event name:\"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:49.484452 containerd[1471]: time="2025-02-13T15:25:49.484412131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:49.485701 containerd[1471]: time="2025-02-13T15:25:49.485656659Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"17170727\" in 996.675053ms"
Feb 13 15:25:49.485701 containerd[1471]: time="2025-02-13T15:25:49.485691569Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:dfb84ea1121ad6a9ceccfe5078af3eee1b27b8d2b2e93d6449d11e1526dbeff8\""
Feb 13 15:25:49.504246 containerd[1471]: time="2025-02-13T15:25:49.504186343Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:25:50.477504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843466819.mount: Deactivated successfully.
Feb 13 15:25:50.791928 containerd[1471]: time="2025-02-13T15:25:50.791802508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:50.792423 containerd[1471]: time="2025-02-13T15:25:50.792393199Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273377"
Feb 13 15:25:50.793279 containerd[1471]: time="2025-02-13T15:25:50.793248700Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:50.795302 containerd[1471]: time="2025-02-13T15:25:50.795261927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:50.795963 containerd[1471]: time="2025-02-13T15:25:50.795931635Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.291714093s"
Feb 13 15:25:50.796002 containerd[1471]: time="2025-02-13T15:25:50.795961528Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\""
Feb 13 15:25:50.814008 containerd[1471]: time="2025-02-13T15:25:50.813971664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:25:51.574900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444501269.mount: Deactivated successfully.
Feb 13 15:25:52.055618 containerd[1471]: time="2025-02-13T15:25:52.055504077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:52.056061 containerd[1471]: time="2025-02-13T15:25:52.056019519Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Feb 13 15:25:52.057113 containerd[1471]: time="2025-02-13T15:25:52.057082465Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:52.060184 containerd[1471]: time="2025-02-13T15:25:52.060114451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:52.061731 containerd[1471]: time="2025-02-13T15:25:52.061692638Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.247683367s"
Feb 13 15:25:52.061775 containerd[1471]: time="2025-02-13T15:25:52.061730205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Feb 13 15:25:52.080509 containerd[1471]: time="2025-02-13T15:25:52.080476369Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:25:52.532061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2246388692.mount: Deactivated successfully.
Feb 13 15:25:52.536118 containerd[1471]: time="2025-02-13T15:25:52.536073149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:52.537046 containerd[1471]: time="2025-02-13T15:25:52.536996718Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823"
Feb 13 15:25:52.537800 containerd[1471]: time="2025-02-13T15:25:52.537770826Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:52.540490 containerd[1471]: time="2025-02-13T15:25:52.540444947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:52.541281 containerd[1471]: time="2025-02-13T15:25:52.541242156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 460.735131ms"
Feb 13 15:25:52.541281 containerd[1471]: time="2025-02-13T15:25:52.541273817Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Feb 13 15:25:52.559598 containerd[1471]: time="2025-02-13T15:25:52.559558980Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:25:53.206901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682348883.mount: Deactivated successfully.
Feb 13 15:25:54.667941 containerd[1471]: time="2025-02-13T15:25:54.667879316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:54.668454 containerd[1471]: time="2025-02-13T15:25:54.668407870Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200788"
Feb 13 15:25:54.669166 containerd[1471]: time="2025-02-13T15:25:54.669140717Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:54.672198 containerd[1471]: time="2025-02-13T15:25:54.672146156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:25:54.673547 containerd[1471]: time="2025-02-13T15:25:54.673512031Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 2.113914128s"
Feb 13 15:25:54.673595 containerd[1471]: time="2025-02-13T15:25:54.673550040Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Feb 13 15:25:59.556750 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:25:59.566487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:25:59.682003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:25:59.685782 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:25:59.726133 kubelet[2177]: E0213 15:25:59.726061    2177 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:25:59.729078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:25:59.729224 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:26:00.190585 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:00.202548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:00.219419 systemd[1]: Reloading requested from client PID 2192 ('systemctl') (unit session-7.scope)...
Feb 13 15:26:00.219441 systemd[1]: Reloading...
Feb 13 15:26:00.285324 zram_generator::config[2234]: No configuration found.
Feb 13 15:26:00.518852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:26:00.570905 systemd[1]: Reloading finished in 351 ms.
Feb 13 15:26:00.611168 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:00.612643 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:00.615230 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:26:00.615463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:00.617133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:00.707269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:00.711798 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:26:00.765753 kubelet[2278]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:26:00.765753 kubelet[2278]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:26:00.765753 kubelet[2278]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:26:00.766101 kubelet[2278]: I0213 15:26:00.765799    2278 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:26:01.705530 kubelet[2278]: I0213 15:26:01.705483    2278 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:26:01.705530 kubelet[2278]: I0213 15:26:01.705519    2278 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:26:01.705764 kubelet[2278]: I0213 15:26:01.705738    2278 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:26:01.768599 kubelet[2278]: I0213 15:26:01.768429    2278 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:26:01.770420 kubelet[2278]: E0213 15:26:01.770386    2278 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.779183 kubelet[2278]: I0213 15:26:01.779144    2278 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:26:01.780169 kubelet[2278]: I0213 15:26:01.780130    2278 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:26:01.780429 kubelet[2278]: I0213 15:26:01.780397    2278 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:26:01.780429 kubelet[2278]: I0213 15:26:01.780426    2278 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:26:01.780575 kubelet[2278]: I0213 15:26:01.780436    2278 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:26:01.780605 kubelet[2278]: I0213 15:26:01.780581    2278 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:26:01.782758 kubelet[2278]: I0213 15:26:01.782723    2278 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:26:01.782758 kubelet[2278]: I0213 15:26:01.782757    2278 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:26:01.783046 kubelet[2278]: I0213 15:26:01.782791    2278 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:26:01.783046 kubelet[2278]: I0213 15:26:01.782805    2278 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:26:01.783385 kubelet[2278]: W0213 15:26:01.783313    2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.783459 kubelet[2278]: E0213 15:26:01.783388    2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.783821 kubelet[2278]: W0213 15:26:01.783745    2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.783821 kubelet[2278]: E0213 15:26:01.783789    2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.784659 kubelet[2278]: I0213 15:26:01.784624    2278 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:26:01.787355 kubelet[2278]: I0213 15:26:01.787325    2278 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:26:01.787968 kubelet[2278]: W0213 15:26:01.787923    2278 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:26:01.788975 kubelet[2278]: I0213 15:26:01.788950    2278 server.go:1256] "Started kubelet"
Feb 13 15:26:01.790095 kubelet[2278]: I0213 15:26:01.790058    2278 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:26:01.790708 kubelet[2278]: I0213 15:26:01.790665    2278 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:26:01.791399 kubelet[2278]: I0213 15:26:01.790946    2278 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:26:01.791399 kubelet[2278]: I0213 15:26:01.791006    2278 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:26:01.791884 kubelet[2278]: I0213 15:26:01.791847    2278 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:26:01.795580 kubelet[2278]: E0213 15:26:01.795246    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:01.795580 kubelet[2278]: I0213 15:26:01.795310    2278 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:26:01.795580 kubelet[2278]: I0213 15:26:01.795418    2278 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:26:01.795580 kubelet[2278]: I0213 15:26:01.795506    2278 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:26:01.796093 kubelet[2278]: W0213 15:26:01.795958    2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.796093 kubelet[2278]: E0213 15:26:01.796027    2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.797787 kubelet[2278]: E0213 15:26:01.796777    2278 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823cdfba54d2656  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 15:26:01.788925526 +0000 UTC m=+1.073734964,LastTimestamp:2025-02-13 15:26:01.788925526 +0000 UTC m=+1.073734964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Feb 13 15:26:01.799071 kubelet[2278]: E0213 15:26:01.798973    2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms"
Feb 13 15:26:01.801565 kubelet[2278]: I0213 15:26:01.801534    2278 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:26:01.802015 kubelet[2278]: I0213 15:26:01.801768    2278 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:26:01.803030 kubelet[2278]: I0213 15:26:01.802994    2278 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:26:01.804041 kubelet[2278]: E0213 15:26:01.803995    2278 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:26:01.809572 kubelet[2278]: I0213 15:26:01.809432    2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:26:01.810695 kubelet[2278]: I0213 15:26:01.810387    2278 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:26:01.810695 kubelet[2278]: I0213 15:26:01.810409    2278 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:26:01.810695 kubelet[2278]: I0213 15:26:01.810426    2278 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:26:01.810695 kubelet[2278]: E0213 15:26:01.810473    2278 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:26:01.814717 kubelet[2278]: W0213 15:26:01.814539    2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.814899 kubelet[2278]: E0213 15:26:01.814885    2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:01.817651 kubelet[2278]: I0213 15:26:01.817627    2278 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:26:01.817651 kubelet[2278]: I0213 15:26:01.817649    2278 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:26:01.817761 kubelet[2278]: I0213 15:26:01.817667    2278 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:26:01.881704 kubelet[2278]: I0213 15:26:01.881658    2278 policy_none.go:49] "None policy: Start"
Feb 13 15:26:01.882477 kubelet[2278]: I0213 15:26:01.882456    2278 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:26:01.882550 kubelet[2278]: I0213 15:26:01.882499    2278 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:26:01.889084 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:26:01.896592 kubelet[2278]: I0213 15:26:01.896560    2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:26:01.897089 kubelet[2278]: E0213 15:26:01.897051    2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Feb 13 15:26:01.898759 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:26:01.901374 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:26:01.908003 kubelet[2278]: I0213 15:26:01.907973    2278 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:26:01.908269 kubelet[2278]: I0213 15:26:01.908247    2278 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:26:01.909780 kubelet[2278]: E0213 15:26:01.909755    2278 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Feb 13 15:26:01.910635 kubelet[2278]: I0213 15:26:01.910616    2278 topology_manager.go:215] "Topology Admit Handler" podUID="8685864d5dbd48c6e1b4f343a0340974" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 15:26:01.911537 kubelet[2278]: I0213 15:26:01.911490    2278 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 15:26:01.912521 kubelet[2278]: I0213 15:26:01.912494    2278 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 15:26:01.917204 systemd[1]: Created slice kubepods-burstable-pod8685864d5dbd48c6e1b4f343a0340974.slice - libcontainer container kubepods-burstable-pod8685864d5dbd48c6e1b4f343a0340974.slice.
Feb 13 15:26:01.930368 systemd[1]: Created slice kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice - libcontainer container kubepods-burstable-pod8dd79284f50d348595750c57a6b03620.slice.
Feb 13 15:26:01.941670 systemd[1]: Created slice kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice - libcontainer container kubepods-burstable-pod34a43d8200b04e3b81251db6a65bc0ce.slice.
Feb 13 15:26:02.000271 kubelet[2278]: E0213 15:26:02.000179    2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms"
Feb 13 15:26:02.096614 kubelet[2278]: I0213 15:26:02.096517    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:02.096614 kubelet[2278]: I0213 15:26:02.096565    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:02.096614 kubelet[2278]: I0213 15:26:02.096587    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:02.096944 kubelet[2278]: I0213 15:26:02.096668    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:26:02.096944 kubelet[2278]: I0213 15:26:02.096729    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8685864d5dbd48c6e1b4f343a0340974-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8685864d5dbd48c6e1b4f343a0340974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:26:02.096944 kubelet[2278]: I0213 15:26:02.096774    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8685864d5dbd48c6e1b4f343a0340974-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8685864d5dbd48c6e1b4f343a0340974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:26:02.096944 kubelet[2278]: I0213 15:26:02.096821    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8685864d5dbd48c6e1b4f343a0340974-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8685864d5dbd48c6e1b4f343a0340974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:26:02.096944 kubelet[2278]: I0213 15:26:02.096871    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:02.097055 kubelet[2278]: I0213 15:26:02.096909    2278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:02.098208 kubelet[2278]: I0213 15:26:02.098156    2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:26:02.098559 kubelet[2278]: E0213 15:26:02.098521    2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Feb 13 15:26:02.229232 kubelet[2278]: E0213 15:26:02.229202    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:02.229921 containerd[1471]: time="2025-02-13T15:26:02.229875536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8685864d5dbd48c6e1b4f343a0340974,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:02.232057 kubelet[2278]: E0213 15:26:02.232028    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:02.232447 containerd[1471]: time="2025-02-13T15:26:02.232412701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:02.244737 kubelet[2278]: E0213 15:26:02.244713    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:02.245086 containerd[1471]: time="2025-02-13T15:26:02.245056237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:02.401734 kubelet[2278]: E0213 15:26:02.401632    2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms"
Feb 13 15:26:02.500244 kubelet[2278]: I0213 15:26:02.500184    2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:26:02.500548 kubelet[2278]: E0213 15:26:02.500521    2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Feb 13 15:26:02.601585 kubelet[2278]: W0213 15:26:02.601519    2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:02.601585 kubelet[2278]: E0213 15:26:02.601575    2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:02.693261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032879293.mount: Deactivated successfully.
Feb 13 15:26:02.697142 containerd[1471]: time="2025-02-13T15:26:02.697098643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:26:02.698357 containerd[1471]: time="2025-02-13T15:26:02.698266208Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:26:02.699372 containerd[1471]: time="2025-02-13T15:26:02.699329090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 15:26:02.699863 containerd[1471]: time="2025-02-13T15:26:02.699828634Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:26:02.701112 containerd[1471]: time="2025-02-13T15:26:02.701065839Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:26:02.702300 containerd[1471]: time="2025-02-13T15:26:02.702216304Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:26:02.703774 containerd[1471]: time="2025-02-13T15:26:02.703742848Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:26:02.706420 containerd[1471]: time="2025-02-13T15:26:02.706356703Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.879286ms"
Feb 13 15:26:02.706612 containerd[1471]: time="2025-02-13T15:26:02.706564946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:26:02.707971 containerd[1471]: time="2025-02-13T15:26:02.707941074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 477.986085ms"
Feb 13 15:26:02.709243 containerd[1471]: time="2025-02-13T15:26:02.709076081Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 463.960294ms"
Feb 13 15:26:02.894043 containerd[1471]: time="2025-02-13T15:26:02.893901039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:02.894043 containerd[1471]: time="2025-02-13T15:26:02.893974565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:02.894043 containerd[1471]: time="2025-02-13T15:26:02.893991345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:02.894315 containerd[1471]: time="2025-02-13T15:26:02.894066552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:02.894713 containerd[1471]: time="2025-02-13T15:26:02.894451322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:02.894713 containerd[1471]: time="2025-02-13T15:26:02.894500740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:02.894713 containerd[1471]: time="2025-02-13T15:26:02.894511553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:02.894713 containerd[1471]: time="2025-02-13T15:26:02.894576228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:02.895909 containerd[1471]: time="2025-02-13T15:26:02.895725932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:02.895909 containerd[1471]: time="2025-02-13T15:26:02.895780676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:02.895909 containerd[1471]: time="2025-02-13T15:26:02.895792129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:02.895909 containerd[1471]: time="2025-02-13T15:26:02.895884117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:02.918467 systemd[1]: Started cri-containerd-04f93aaf89e0d9cee42be3e1181a2ddcc98e77871a5e97a284386f8e0e132bce.scope - libcontainer container 04f93aaf89e0d9cee42be3e1181a2ddcc98e77871a5e97a284386f8e0e132bce.
Feb 13 15:26:02.919639 systemd[1]: Started cri-containerd-d71ca39f415edb3a84833e92b4048ed480efe73df9d5c4980cec050b3c0e72a7.scope - libcontainer container d71ca39f415edb3a84833e92b4048ed480efe73df9d5c4980cec050b3c0e72a7.
Feb 13 15:26:02.923180 systemd[1]: Started cri-containerd-f412ade4a70625f53c0869d2e86f1a6d02b36511de8168a19bc76c5f80c4ed10.scope - libcontainer container f412ade4a70625f53c0869d2e86f1a6d02b36511de8168a19bc76c5f80c4ed10.
Feb 13 15:26:02.952734 containerd[1471]: time="2025-02-13T15:26:02.952047473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:34a43d8200b04e3b81251db6a65bc0ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"d71ca39f415edb3a84833e92b4048ed480efe73df9d5c4980cec050b3c0e72a7\""
Feb 13 15:26:02.955736 kubelet[2278]: E0213 15:26:02.955526    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:02.958197 containerd[1471]: time="2025-02-13T15:26:02.958149604Z" level=info msg="CreateContainer within sandbox \"d71ca39f415edb3a84833e92b4048ed480efe73df9d5c4980cec050b3c0e72a7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:26:02.961448 containerd[1471]: time="2025-02-13T15:26:02.961414339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8685864d5dbd48c6e1b4f343a0340974,Namespace:kube-system,Attempt:0,} returns sandbox id \"04f93aaf89e0d9cee42be3e1181a2ddcc98e77871a5e97a284386f8e0e132bce\""
Feb 13 15:26:02.962667 containerd[1471]: time="2025-02-13T15:26:02.962639932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8dd79284f50d348595750c57a6b03620,Namespace:kube-system,Attempt:0,} returns sandbox id \"f412ade4a70625f53c0869d2e86f1a6d02b36511de8168a19bc76c5f80c4ed10\""
Feb 13 15:26:02.962974 kubelet[2278]: E0213 15:26:02.962950    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:02.963472 kubelet[2278]: E0213 15:26:02.963151    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:02.966094 containerd[1471]: time="2025-02-13T15:26:02.965450937Z" level=info msg="CreateContainer within sandbox \"f412ade4a70625f53c0869d2e86f1a6d02b36511de8168a19bc76c5f80c4ed10\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:26:02.966680 containerd[1471]: time="2025-02-13T15:26:02.966430001Z" level=info msg="CreateContainer within sandbox \"04f93aaf89e0d9cee42be3e1181a2ddcc98e77871a5e97a284386f8e0e132bce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:26:02.976883 kubelet[2278]: W0213 15:26:02.976844    2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:02.977058 kubelet[2278]: E0213 15:26:02.977031    2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:02.977122 containerd[1471]: time="2025-02-13T15:26:02.977004159Z" level=info msg="CreateContainer within sandbox \"d71ca39f415edb3a84833e92b4048ed480efe73df9d5c4980cec050b3c0e72a7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ce905c76b0f4e76e9760aaf2883a4aabd00018d9cb5e6b69135647a577c7787\""
Feb 13 15:26:02.977888 containerd[1471]: time="2025-02-13T15:26:02.977773338Z" level=info msg="StartContainer for \"7ce905c76b0f4e76e9760aaf2883a4aabd00018d9cb5e6b69135647a577c7787\""
Feb 13 15:26:02.983613 containerd[1471]: time="2025-02-13T15:26:02.983573035Z" level=info msg="CreateContainer within sandbox \"04f93aaf89e0d9cee42be3e1181a2ddcc98e77871a5e97a284386f8e0e132bce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b3baec8073fe308d2f330b212359abb6d8520d2e1a3d8b2d14dbfbb7b93b51a3\""
Feb 13 15:26:02.984417 containerd[1471]: time="2025-02-13T15:26:02.984333204Z" level=info msg="StartContainer for \"b3baec8073fe308d2f330b212359abb6d8520d2e1a3d8b2d14dbfbb7b93b51a3\""
Feb 13 15:26:02.986276 containerd[1471]: time="2025-02-13T15:26:02.986241394Z" level=info msg="CreateContainer within sandbox \"f412ade4a70625f53c0869d2e86f1a6d02b36511de8168a19bc76c5f80c4ed10\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1b306b48bcc43b9bd878333ef36ca18c8e6dc7c940b891296cb971469de2c639\""
Feb 13 15:26:02.986977 containerd[1471]: time="2025-02-13T15:26:02.986725159Z" level=info msg="StartContainer for \"1b306b48bcc43b9bd878333ef36ca18c8e6dc7c940b891296cb971469de2c639\""
Feb 13 15:26:02.990257 kubelet[2278]: W0213 15:26:02.990199    2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:02.990347 kubelet[2278]: E0213 15:26:02.990264    2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:03.011503 systemd[1]: Started cri-containerd-7ce905c76b0f4e76e9760aaf2883a4aabd00018d9cb5e6b69135647a577c7787.scope - libcontainer container 7ce905c76b0f4e76e9760aaf2883a4aabd00018d9cb5e6b69135647a577c7787.
Feb 13 15:26:03.015546 systemd[1]: Started cri-containerd-1b306b48bcc43b9bd878333ef36ca18c8e6dc7c940b891296cb971469de2c639.scope - libcontainer container 1b306b48bcc43b9bd878333ef36ca18c8e6dc7c940b891296cb971469de2c639.
Feb 13 15:26:03.016798 systemd[1]: Started cri-containerd-b3baec8073fe308d2f330b212359abb6d8520d2e1a3d8b2d14dbfbb7b93b51a3.scope - libcontainer container b3baec8073fe308d2f330b212359abb6d8520d2e1a3d8b2d14dbfbb7b93b51a3.
Feb 13 15:26:03.060978 containerd[1471]: time="2025-02-13T15:26:03.057312035Z" level=info msg="StartContainer for \"b3baec8073fe308d2f330b212359abb6d8520d2e1a3d8b2d14dbfbb7b93b51a3\" returns successfully"
Feb 13 15:26:03.060978 containerd[1471]: time="2025-02-13T15:26:03.057423429Z" level=info msg="StartContainer for \"7ce905c76b0f4e76e9760aaf2883a4aabd00018d9cb5e6b69135647a577c7787\" returns successfully"
Feb 13 15:26:03.078157 containerd[1471]: time="2025-02-13T15:26:03.071020295Z" level=info msg="StartContainer for \"1b306b48bcc43b9bd878333ef36ca18c8e6dc7c940b891296cb971469de2c639\" returns successfully"
Feb 13 15:26:03.122117 kubelet[2278]: W0213 15:26:03.113487    2278 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:03.122117 kubelet[2278]: E0213 15:26:03.113555    2278 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.55:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.55:6443: connect: connection refused
Feb 13 15:26:03.203374 kubelet[2278]: E0213 15:26:03.203182    2278 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s"
Feb 13 15:26:03.302779 kubelet[2278]: I0213 15:26:03.302746    2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:26:03.303096 kubelet[2278]: E0213 15:26:03.303077    2278 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Feb 13 15:26:03.825752 kubelet[2278]: E0213 15:26:03.825682    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:03.826821 kubelet[2278]: E0213 15:26:03.826798    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:03.831620 kubelet[2278]: E0213 15:26:03.831601    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:04.809292 kubelet[2278]: E0213 15:26:04.808792    2278 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Feb 13 15:26:04.833771 kubelet[2278]: E0213 15:26:04.833729    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:04.904718 kubelet[2278]: I0213 15:26:04.904474    2278 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:26:04.914111 kubelet[2278]: I0213 15:26:04.914074    2278 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 15:26:04.921234 kubelet[2278]: E0213 15:26:04.921204    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.022065 kubelet[2278]: E0213 15:26:05.022019    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.122588 kubelet[2278]: E0213 15:26:05.122468    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.222996 kubelet[2278]: E0213 15:26:05.222954    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.323991 kubelet[2278]: E0213 15:26:05.323949    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.424869 kubelet[2278]: E0213 15:26:05.424753    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.525533 kubelet[2278]: E0213 15:26:05.525474    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.626147 kubelet[2278]: E0213 15:26:05.626093    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.726517 kubelet[2278]: E0213 15:26:05.726379    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.807956 kubelet[2278]: E0213 15:26:05.807915    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:05.827294 kubelet[2278]: E0213 15:26:05.827218    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:05.835137 kubelet[2278]: E0213 15:26:05.835084    2278 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:05.927733 kubelet[2278]: E0213 15:26:05.927593    2278 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:06.786534 kubelet[2278]: I0213 15:26:06.786394    2278 apiserver.go:52] "Watching apiserver"
Feb 13 15:26:06.795846 kubelet[2278]: I0213 15:26:06.795800    2278 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:26:07.386272 systemd[1]: Reloading requested from client PID 2561 ('systemctl') (unit session-7.scope)...
Feb 13 15:26:07.386359 systemd[1]: Reloading...
Feb 13 15:26:07.443326 zram_generator::config[2600]: No configuration found.
Feb 13 15:26:07.520456 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:26:07.583403 systemd[1]: Reloading finished in 196 ms.
Feb 13 15:26:07.615271 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:07.632219 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:26:07.633377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:07.633437 systemd[1]: kubelet.service: Consumed 1.468s CPU time, 112.7M memory peak, 0B memory swap peak.
Feb 13 15:26:07.642539 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:07.728147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:07.732046 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:26:07.777720 kubelet[2642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:26:07.777720 kubelet[2642]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:26:07.777720 kubelet[2642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:26:07.778070 kubelet[2642]: I0213 15:26:07.777724    2642 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:26:07.782314 kubelet[2642]: I0213 15:26:07.781775    2642 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:26:07.782314 kubelet[2642]: I0213 15:26:07.781801    2642 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:26:07.782314 kubelet[2642]: I0213 15:26:07.781964    2642 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:26:07.783390 kubelet[2642]: I0213 15:26:07.783364    2642 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:26:07.785109 kubelet[2642]: I0213 15:26:07.785080    2642 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:26:07.791884 kubelet[2642]: I0213 15:26:07.791861    2642 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:26:07.792047 kubelet[2642]: I0213 15:26:07.792035    2642 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:26:07.792214 kubelet[2642]: I0213 15:26:07.792190    2642 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:26:07.792214 kubelet[2642]: I0213 15:26:07.792215    2642 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:26:07.792333 kubelet[2642]: I0213 15:26:07.792223    2642 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:26:07.792333 kubelet[2642]: I0213 15:26:07.792251    2642 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:26:07.792375 kubelet[2642]: I0213 15:26:07.792360    2642 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:26:07.792375 kubelet[2642]: I0213 15:26:07.792374    2642 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:26:07.792413 kubelet[2642]: I0213 15:26:07.792393    2642 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:26:07.792413 kubelet[2642]: I0213 15:26:07.792406    2642 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:26:07.793291 kubelet[2642]: I0213 15:26:07.793266    2642 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:26:07.793445 kubelet[2642]: I0213 15:26:07.793432    2642 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:26:07.793765 kubelet[2642]: I0213 15:26:07.793752    2642 server.go:1256] "Started kubelet"
Feb 13 15:26:07.795158 kubelet[2642]: I0213 15:26:07.795140    2642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:26:07.795988 kubelet[2642]: I0213 15:26:07.795955    2642 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:26:07.796957 kubelet[2642]: I0213 15:26:07.796726    2642 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:26:07.797200 kubelet[2642]: I0213 15:26:07.797081    2642 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:26:07.797200 kubelet[2642]: I0213 15:26:07.797175    2642 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:26:07.797529 kubelet[2642]: I0213 15:26:07.797320    2642 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:26:07.797630 kubelet[2642]: I0213 15:26:07.797605    2642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:26:07.797969 kubelet[2642]: I0213 15:26:07.797755    2642 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:26:07.797969 kubelet[2642]: E0213 15:26:07.797805    2642 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Feb 13 15:26:07.798843 sudo[2657]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 15:26:07.799134 sudo[2657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 15:26:07.799480 kubelet[2642]: I0213 15:26:07.799342    2642 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:26:07.799480 kubelet[2642]: I0213 15:26:07.799423    2642 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:26:07.800166 kubelet[2642]: E0213 15:26:07.800040    2642 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:26:07.809293 kubelet[2642]: I0213 15:26:07.807045    2642 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:26:07.809293 kubelet[2642]: I0213 15:26:07.808474    2642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:26:07.809394 kubelet[2642]: I0213 15:26:07.809340    2642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:26:07.809394 kubelet[2642]: I0213 15:26:07.809358    2642 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:26:07.809394 kubelet[2642]: I0213 15:26:07.809371    2642 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:26:07.809459 kubelet[2642]: E0213 15:26:07.809428    2642 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:26:07.860509 kubelet[2642]: I0213 15:26:07.860466    2642 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:26:07.860509 kubelet[2642]: I0213 15:26:07.860505    2642 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:26:07.860644 kubelet[2642]: I0213 15:26:07.860523    2642 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:26:07.860666 kubelet[2642]: I0213 15:26:07.860654    2642 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:26:07.860685 kubelet[2642]: I0213 15:26:07.860674    2642 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:26:07.860685 kubelet[2642]: I0213 15:26:07.860681    2642 policy_none.go:49] "None policy: Start"
Feb 13 15:26:07.861488 kubelet[2642]: I0213 15:26:07.861467    2642 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:26:07.861549 kubelet[2642]: I0213 15:26:07.861495    2642 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:26:07.861644 kubelet[2642]: I0213 15:26:07.861628    2642 state_mem.go:75] "Updated machine memory state"
Feb 13 15:26:07.867248 kubelet[2642]: I0213 15:26:07.867222    2642 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:26:07.867628 kubelet[2642]: I0213 15:26:07.867456    2642 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:26:07.902436 kubelet[2642]: I0213 15:26:07.902341    2642 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Feb 13 15:26:07.908907 kubelet[2642]: I0213 15:26:07.908630    2642 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Feb 13 15:26:07.908907 kubelet[2642]: I0213 15:26:07.908711    2642 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Feb 13 15:26:07.909868 kubelet[2642]: I0213 15:26:07.909765    2642 topology_manager.go:215] "Topology Admit Handler" podUID="8685864d5dbd48c6e1b4f343a0340974" podNamespace="kube-system" podName="kube-apiserver-localhost"
Feb 13 15:26:07.910201 kubelet[2642]: I0213 15:26:07.909900    2642 topology_manager.go:215] "Topology Admit Handler" podUID="8dd79284f50d348595750c57a6b03620" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Feb 13 15:26:07.910201 kubelet[2642]: I0213 15:26:07.910015    2642 topology_manager.go:215] "Topology Admit Handler" podUID="34a43d8200b04e3b81251db6a65bc0ce" podNamespace="kube-system" podName="kube-scheduler-localhost"
Feb 13 15:26:08.098442 kubelet[2642]: I0213 15:26:08.098375    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:08.098442 kubelet[2642]: I0213 15:26:08.098430    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:08.098442 kubelet[2642]: I0213 15:26:08.098452    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8685864d5dbd48c6e1b4f343a0340974-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8685864d5dbd48c6e1b4f343a0340974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:26:08.098856 kubelet[2642]: I0213 15:26:08.098477    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8685864d5dbd48c6e1b4f343a0340974-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8685864d5dbd48c6e1b4f343a0340974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:26:08.098856 kubelet[2642]: I0213 15:26:08.098496    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8685864d5dbd48c6e1b4f343a0340974-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8685864d5dbd48c6e1b4f343a0340974\") " pod="kube-system/kube-apiserver-localhost"
Feb 13 15:26:08.098856 kubelet[2642]: I0213 15:26:08.098516    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:08.098856 kubelet[2642]: I0213 15:26:08.098567    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:08.098856 kubelet[2642]: I0213 15:26:08.098610    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8dd79284f50d348595750c57a6b03620-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8dd79284f50d348595750c57a6b03620\") " pod="kube-system/kube-controller-manager-localhost"
Feb 13 15:26:08.098958 kubelet[2642]: I0213 15:26:08.098634    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/34a43d8200b04e3b81251db6a65bc0ce-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"34a43d8200b04e3b81251db6a65bc0ce\") " pod="kube-system/kube-scheduler-localhost"
Feb 13 15:26:08.219548 kubelet[2642]: E0213 15:26:08.219448    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:08.220638 kubelet[2642]: E0213 15:26:08.219989    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:08.220638 kubelet[2642]: E0213 15:26:08.220252    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:08.247980 sudo[2657]: pam_unix(sudo:session): session closed for user root
Feb 13 15:26:08.793327 kubelet[2642]: I0213 15:26:08.793221    2642 apiserver.go:52] "Watching apiserver"
Feb 13 15:26:08.798528 kubelet[2642]: I0213 15:26:08.798309    2642 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:26:08.847648 kubelet[2642]: E0213 15:26:08.847594    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:08.848346 kubelet[2642]: E0213 15:26:08.848238    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:08.855159 kubelet[2642]: E0213 15:26:08.854656    2642 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Feb 13 15:26:08.855159 kubelet[2642]: E0213 15:26:08.855094    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:08.874303 kubelet[2642]: I0213 15:26:08.874257    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.874214316 podStartE2EDuration="1.874214316s" podCreationTimestamp="2025-02-13 15:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:08.873843618 +0000 UTC m=+1.138595494" watchObservedRunningTime="2025-02-13 15:26:08.874214316 +0000 UTC m=+1.138966192"
Feb 13 15:26:08.892699 kubelet[2642]: I0213 15:26:08.892650    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.892612931 podStartE2EDuration="1.892612931s" podCreationTimestamp="2025-02-13 15:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:08.885870248 +0000 UTC m=+1.150622124" watchObservedRunningTime="2025-02-13 15:26:08.892612931 +0000 UTC m=+1.157364767"
Feb 13 15:26:09.848772 kubelet[2642]: E0213 15:26:09.848735    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:10.084850 sudo[1652]: pam_unix(sudo:session): session closed for user root
Feb 13 15:26:10.085977 sshd[1651]: Connection closed by 10.0.0.1 port 38918
Feb 13 15:26:10.086426 sshd-session[1649]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:10.089299 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:38918.service: Deactivated successfully.
Feb 13 15:26:10.090938 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:26:10.091146 systemd[1]: session-7.scope: Consumed 8.166s CPU time, 188.3M memory peak, 0B memory swap peak.
Feb 13 15:26:10.092461 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:26:10.093452 systemd-logind[1451]: Removed session 7.
Feb 13 15:26:10.649294 kubelet[2642]: E0213 15:26:10.649257    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:14.716528 kubelet[2642]: E0213 15:26:14.716497    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:14.731514 kubelet[2642]: I0213 15:26:14.731486    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=7.731432811 podStartE2EDuration="7.731432811s" podCreationTimestamp="2025-02-13 15:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:08.892916549 +0000 UTC m=+1.157668425" watchObservedRunningTime="2025-02-13 15:26:14.731432811 +0000 UTC m=+6.996184687"
Feb 13 15:26:14.855933 kubelet[2642]: E0213 15:26:14.855895    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:17.194601 kubelet[2642]: E0213 15:26:17.194515    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:17.860342 kubelet[2642]: E0213 15:26:17.860188    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:20.655631 kubelet[2642]: E0213 15:26:20.655601    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:22.138791 update_engine[1455]: I20250213 15:26:22.138712  1455 update_attempter.cc:509] Updating boot flags...
Feb 13 15:26:22.169380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2725)
Feb 13 15:26:22.200452 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2726)
Feb 13 15:26:23.473274 kubelet[2642]: I0213 15:26:23.473239    2642 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:26:23.479555 containerd[1471]: time="2025-02-13T15:26:23.479455699Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:26:23.479940 kubelet[2642]: I0213 15:26:23.479836    2642 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:26:23.591589 kubelet[2642]: I0213 15:26:23.591549    2642 topology_manager.go:215] "Topology Admit Handler" podUID="9bd2e66b-9891-4f47-a75c-67d2a2c78c22" podNamespace="kube-system" podName="cilium-operator-5cc964979-l88st"
Feb 13 15:26:23.603834 kubelet[2642]: I0213 15:26:23.603788    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrxxd\" (UniqueName: \"kubernetes.io/projected/9bd2e66b-9891-4f47-a75c-67d2a2c78c22-kube-api-access-qrxxd\") pod \"cilium-operator-5cc964979-l88st\" (UID: \"9bd2e66b-9891-4f47-a75c-67d2a2c78c22\") " pod="kube-system/cilium-operator-5cc964979-l88st"
Feb 13 15:26:23.603834 kubelet[2642]: I0213 15:26:23.603833    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bd2e66b-9891-4f47-a75c-67d2a2c78c22-cilium-config-path\") pod \"cilium-operator-5cc964979-l88st\" (UID: \"9bd2e66b-9891-4f47-a75c-67d2a2c78c22\") " pod="kube-system/cilium-operator-5cc964979-l88st"
Feb 13 15:26:23.609259 systemd[1]: Created slice kubepods-besteffort-pod9bd2e66b_9891_4f47_a75c_67d2a2c78c22.slice - libcontainer container kubepods-besteffort-pod9bd2e66b_9891_4f47_a75c_67d2a2c78c22.slice.
Feb 13 15:26:23.617676 kubelet[2642]: I0213 15:26:23.617600    2642 topology_manager.go:215] "Topology Admit Handler" podUID="c63c2a96-3201-4875-86f9-a479fe8a9ae1" podNamespace="kube-system" podName="kube-proxy-f4hmq"
Feb 13 15:26:23.621966 kubelet[2642]: I0213 15:26:23.621887    2642 topology_manager.go:215] "Topology Admit Handler" podUID="20751177-dc28-4b5e-b54a-0fd4e3679a3b" podNamespace="kube-system" podName="cilium-dgfnj"
Feb 13 15:26:23.627431 systemd[1]: Created slice kubepods-besteffort-podc63c2a96_3201_4875_86f9_a479fe8a9ae1.slice - libcontainer container kubepods-besteffort-podc63c2a96_3201_4875_86f9_a479fe8a9ae1.slice.
Feb 13 15:26:23.633582 systemd[1]: Created slice kubepods-burstable-pod20751177_dc28_4b5e_b54a_0fd4e3679a3b.slice - libcontainer container kubepods-burstable-pod20751177_dc28_4b5e_b54a_0fd4e3679a3b.slice.
Feb 13 15:26:23.703991 kubelet[2642]: I0213 15:26:23.703958    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-host-proc-sys-net\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.703991 kubelet[2642]: I0213 15:26:23.704002    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhbrx\" (UniqueName: \"kubernetes.io/projected/c63c2a96-3201-4875-86f9-a479fe8a9ae1-kube-api-access-bhbrx\") pod \"kube-proxy-f4hmq\" (UID: \"c63c2a96-3201-4875-86f9-a479fe8a9ae1\") " pod="kube-system/kube-proxy-f4hmq"
Feb 13 15:26:23.704147 kubelet[2642]: I0213 15:26:23.704023    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-config-path\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704147 kubelet[2642]: I0213 15:26:23.704058    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cni-path\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704147 kubelet[2642]: I0213 15:26:23.704076    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-etc-cni-netd\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704147 kubelet[2642]: I0213 15:26:23.704098    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-run\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704147 kubelet[2642]: I0213 15:26:23.704116    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20751177-dc28-4b5e-b54a-0fd4e3679a3b-clustermesh-secrets\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704147 kubelet[2642]: I0213 15:26:23.704135    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c63c2a96-3201-4875-86f9-a479fe8a9ae1-xtables-lock\") pod \"kube-proxy-f4hmq\" (UID: \"c63c2a96-3201-4875-86f9-a479fe8a9ae1\") " pod="kube-system/kube-proxy-f4hmq"
Feb 13 15:26:23.704300 kubelet[2642]: I0213 15:26:23.704155    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-bpf-maps\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704300 kubelet[2642]: I0213 15:26:23.704180    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-cgroup\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704300 kubelet[2642]: I0213 15:26:23.704198    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-host-proc-sys-kernel\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704300 kubelet[2642]: I0213 15:26:23.704228    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c63c2a96-3201-4875-86f9-a479fe8a9ae1-kube-proxy\") pod \"kube-proxy-f4hmq\" (UID: \"c63c2a96-3201-4875-86f9-a479fe8a9ae1\") " pod="kube-system/kube-proxy-f4hmq"
Feb 13 15:26:23.704300 kubelet[2642]: I0213 15:26:23.704248    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c63c2a96-3201-4875-86f9-a479fe8a9ae1-lib-modules\") pod \"kube-proxy-f4hmq\" (UID: \"c63c2a96-3201-4875-86f9-a479fe8a9ae1\") " pod="kube-system/kube-proxy-f4hmq"
Feb 13 15:26:23.704300 kubelet[2642]: I0213 15:26:23.704265    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-lib-modules\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704444 kubelet[2642]: I0213 15:26:23.704304    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-xtables-lock\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704444 kubelet[2642]: I0213 15:26:23.704330    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20751177-dc28-4b5e-b54a-0fd4e3679a3b-hubble-tls\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.704444 kubelet[2642]: I0213 15:26:23.704352    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smlms\" (UniqueName: \"kubernetes.io/projected/20751177-dc28-4b5e-b54a-0fd4e3679a3b-kube-api-access-smlms\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.705362 kubelet[2642]: I0213 15:26:23.705332    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-hostproc\") pod \"cilium-dgfnj\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") " pod="kube-system/cilium-dgfnj"
Feb 13 15:26:23.920961 kubelet[2642]: E0213 15:26:23.920627    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:23.921707 containerd[1471]: time="2025-02-13T15:26:23.921200829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-l88st,Uid:9bd2e66b-9891-4f47-a75c-67d2a2c78c22,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:23.932619 kubelet[2642]: E0213 15:26:23.931847    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:23.932741 containerd[1471]: time="2025-02-13T15:26:23.932279350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4hmq,Uid:c63c2a96-3201-4875-86f9-a479fe8a9ae1,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:23.937513 kubelet[2642]: E0213 15:26:23.937330    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:23.937836 containerd[1471]: time="2025-02-13T15:26:23.937806667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dgfnj,Uid:20751177-dc28-4b5e-b54a-0fd4e3679a3b,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:23.964146 containerd[1471]: time="2025-02-13T15:26:23.963383902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:23.964146 containerd[1471]: time="2025-02-13T15:26:23.963449279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:23.964146 containerd[1471]: time="2025-02-13T15:26:23.963463163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:23.964146 containerd[1471]: time="2025-02-13T15:26:23.963543305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:23.966426 containerd[1471]: time="2025-02-13T15:26:23.966336531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:23.969785 containerd[1471]: time="2025-02-13T15:26:23.966440439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:23.969785 containerd[1471]: time="2025-02-13T15:26:23.966468326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:23.969785 containerd[1471]: time="2025-02-13T15:26:23.966571034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:23.984194 containerd[1471]: time="2025-02-13T15:26:23.982813054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:23.984194 containerd[1471]: time="2025-02-13T15:26:23.982881832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:23.984194 containerd[1471]: time="2025-02-13T15:26:23.982893115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:23.984194 containerd[1471]: time="2025-02-13T15:26:23.982967855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:23.994467 systemd[1]: Started cri-containerd-71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f.scope - libcontainer container 71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f.
Feb 13 15:26:23.997923 systemd[1]: Started cri-containerd-61b1255699fa95612491900f8a06efd3bdc9dae42ad85b0f1e4910301aa32267.scope - libcontainer container 61b1255699fa95612491900f8a06efd3bdc9dae42ad85b0f1e4910301aa32267.
Feb 13 15:26:24.001475 systemd[1]: Started cri-containerd-11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395.scope - libcontainer container 11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395.
Feb 13 15:26:24.036477 containerd[1471]: time="2025-02-13T15:26:24.035589227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f4hmq,Uid:c63c2a96-3201-4875-86f9-a479fe8a9ae1,Namespace:kube-system,Attempt:0,} returns sandbox id \"61b1255699fa95612491900f8a06efd3bdc9dae42ad85b0f1e4910301aa32267\""
Feb 13 15:26:24.037744 kubelet[2642]: E0213 15:26:24.037720    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:24.042224 containerd[1471]: time="2025-02-13T15:26:24.042169942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dgfnj,Uid:20751177-dc28-4b5e-b54a-0fd4e3679a3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\""
Feb 13 15:26:24.043488 kubelet[2642]: E0213 15:26:24.043467    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:24.048571 containerd[1471]: time="2025-02-13T15:26:24.046532452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-l88st,Uid:9bd2e66b-9891-4f47-a75c-67d2a2c78c22,Namespace:kube-system,Attempt:0,} returns sandbox id \"71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f\""
Feb 13 15:26:24.049217 kubelet[2642]: E0213 15:26:24.047051    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:24.062649 containerd[1471]: time="2025-02-13T15:26:24.062523722Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:26:24.063817 containerd[1471]: time="2025-02-13T15:26:24.063698861Z" level=info msg="CreateContainer within sandbox \"61b1255699fa95612491900f8a06efd3bdc9dae42ad85b0f1e4910301aa32267\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:26:24.083116 containerd[1471]: time="2025-02-13T15:26:24.083063628Z" level=info msg="CreateContainer within sandbox \"61b1255699fa95612491900f8a06efd3bdc9dae42ad85b0f1e4910301aa32267\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d9e0c7c7bec8755fc838f58d7e2e0eeae065925d8916bad422f9bf731528722\""
Feb 13 15:26:24.083866 containerd[1471]: time="2025-02-13T15:26:24.083838626Z" level=info msg="StartContainer for \"9d9e0c7c7bec8755fc838f58d7e2e0eeae065925d8916bad422f9bf731528722\""
Feb 13 15:26:24.111512 systemd[1]: Started cri-containerd-9d9e0c7c7bec8755fc838f58d7e2e0eeae065925d8916bad422f9bf731528722.scope - libcontainer container 9d9e0c7c7bec8755fc838f58d7e2e0eeae065925d8916bad422f9bf731528722.
Feb 13 15:26:24.143735 containerd[1471]: time="2025-02-13T15:26:24.143682534Z" level=info msg="StartContainer for \"9d9e0c7c7bec8755fc838f58d7e2e0eeae065925d8916bad422f9bf731528722\" returns successfully"
Feb 13 15:26:24.873920 kubelet[2642]: E0213 15:26:24.873879    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:24.882707 kubelet[2642]: I0213 15:26:24.882674    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f4hmq" podStartSLOduration=1.88263698 podStartE2EDuration="1.88263698s" podCreationTimestamp="2025-02-13 15:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:24.882256763 +0000 UTC m=+17.147008639" watchObservedRunningTime="2025-02-13 15:26:24.88263698 +0000 UTC m=+17.147388816"
Feb 13 15:26:29.073509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647683905.mount: Deactivated successfully.
Feb 13 15:26:30.886584 containerd[1471]: time="2025-02-13T15:26:30.885669551Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:26:30.886584 containerd[1471]: time="2025-02-13T15:26:30.886200213Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 15:26:30.887328 containerd[1471]: time="2025-02-13T15:26:30.887255697Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:26:30.889343 containerd[1471]: time="2025-02-13T15:26:30.889299611Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.826717557s"
Feb 13 15:26:30.889343 containerd[1471]: time="2025-02-13T15:26:30.889340459Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 15:26:30.892123 containerd[1471]: time="2025-02-13T15:26:30.892076267Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:26:30.894271 containerd[1471]: time="2025-02-13T15:26:30.894210559Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:26:30.928706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1173518833.mount: Deactivated successfully.
Feb 13 15:26:30.931724 containerd[1471]: time="2025-02-13T15:26:30.931618099Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\""
Feb 13 15:26:30.932601 containerd[1471]: time="2025-02-13T15:26:30.932213494Z" level=info msg="StartContainer for \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\""
Feb 13 15:26:30.964496 systemd[1]: Started cri-containerd-b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b.scope - libcontainer container b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b.
Feb 13 15:26:31.025360 systemd[1]: cri-containerd-b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b.scope: Deactivated successfully.
Feb 13 15:26:31.042935 containerd[1471]: time="2025-02-13T15:26:31.042593984Z" level=info msg="StartContainer for \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\" returns successfully"
Feb 13 15:26:31.139523 containerd[1471]: time="2025-02-13T15:26:31.134681330Z" level=info msg="shim disconnected" id=b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b namespace=k8s.io
Feb 13 15:26:31.139523 containerd[1471]: time="2025-02-13T15:26:31.139436009Z" level=warning msg="cleaning up after shim disconnected" id=b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b namespace=k8s.io
Feb 13 15:26:31.139523 containerd[1471]: time="2025-02-13T15:26:31.139451932Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:26:31.891220 kubelet[2642]: E0213 15:26:31.889663    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:31.893387 containerd[1471]: time="2025-02-13T15:26:31.893056187Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:26:31.926833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b-rootfs.mount: Deactivated successfully.
Feb 13 15:26:31.928363 containerd[1471]: time="2025-02-13T15:26:31.928212527Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\""
Feb 13 15:26:31.929129 containerd[1471]: time="2025-02-13T15:26:31.929091409Z" level=info msg="StartContainer for \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\""
Feb 13 15:26:31.959484 systemd[1]: Started cri-containerd-89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292.scope - libcontainer container 89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292.
Feb 13 15:26:31.984560 containerd[1471]: time="2025-02-13T15:26:31.983461982Z" level=info msg="StartContainer for \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\" returns successfully"
Feb 13 15:26:32.015609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3047992480.mount: Deactivated successfully.
Feb 13 15:26:32.032536 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:26:32.032763 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:32.032830 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:26:32.043962 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:26:32.044162 systemd[1]: cri-containerd-89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292.scope: Deactivated successfully.
Feb 13 15:26:32.091183 containerd[1471]: time="2025-02-13T15:26:32.091087357Z" level=info msg="shim disconnected" id=89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292 namespace=k8s.io
Feb 13 15:26:32.091183 containerd[1471]: time="2025-02-13T15:26:32.091174412Z" level=warning msg="cleaning up after shim disconnected" id=89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292 namespace=k8s.io
Feb 13 15:26:32.091183 containerd[1471]: time="2025-02-13T15:26:32.091185294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:26:32.092798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:32.893008 kubelet[2642]: E0213 15:26:32.892958    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:32.895929 containerd[1471]: time="2025-02-13T15:26:32.895732601Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:26:32.930408 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292-rootfs.mount: Deactivated successfully.
Feb 13 15:26:32.932830 containerd[1471]: time="2025-02-13T15:26:32.932727879Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\""
Feb 13 15:26:32.933553 containerd[1471]: time="2025-02-13T15:26:32.933524420Z" level=info msg="StartContainer for \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\""
Feb 13 15:26:32.968522 systemd[1]: Started cri-containerd-673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f.scope - libcontainer container 673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f.
Feb 13 15:26:33.002220 containerd[1471]: time="2025-02-13T15:26:33.002157454Z" level=info msg="StartContainer for \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\" returns successfully"
Feb 13 15:26:33.013606 systemd[1]: cri-containerd-673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f.scope: Deactivated successfully.
Feb 13 15:26:33.035196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f-rootfs.mount: Deactivated successfully.
Feb 13 15:26:33.051940 containerd[1471]: time="2025-02-13T15:26:33.051850588Z" level=info msg="shim disconnected" id=673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f namespace=k8s.io
Feb 13 15:26:33.051940 containerd[1471]: time="2025-02-13T15:26:33.051929322Z" level=warning msg="cleaning up after shim disconnected" id=673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f namespace=k8s.io
Feb 13 15:26:33.051940 containerd[1471]: time="2025-02-13T15:26:33.051949645Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:26:33.908728 kubelet[2642]: E0213 15:26:33.905897    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:33.911091 containerd[1471]: time="2025-02-13T15:26:33.908404078Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:26:33.934844 containerd[1471]: time="2025-02-13T15:26:33.934785567Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\""
Feb 13 15:26:33.936308 containerd[1471]: time="2025-02-13T15:26:33.935428556Z" level=info msg="StartContainer for \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\""
Feb 13 15:26:33.965587 systemd[1]: Started cri-containerd-175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d.scope - libcontainer container 175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d.
Feb 13 15:26:33.988036 systemd[1]: cri-containerd-175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d.scope: Deactivated successfully.
Feb 13 15:26:33.991926 containerd[1471]: time="2025-02-13T15:26:33.991884841Z" level=info msg="StartContainer for \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\" returns successfully"
Feb 13 15:26:34.008085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d-rootfs.mount: Deactivated successfully.
Feb 13 15:26:34.013630 containerd[1471]: time="2025-02-13T15:26:34.013564161Z" level=info msg="shim disconnected" id=175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d namespace=k8s.io
Feb 13 15:26:34.013630 containerd[1471]: time="2025-02-13T15:26:34.013619570Z" level=warning msg="cleaning up after shim disconnected" id=175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d namespace=k8s.io
Feb 13 15:26:34.013791 containerd[1471]: time="2025-02-13T15:26:34.013637333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:26:34.908543 kubelet[2642]: E0213 15:26:34.908505    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:34.913226 containerd[1471]: time="2025-02-13T15:26:34.913176115Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:26:34.930263 containerd[1471]: time="2025-02-13T15:26:34.930161251Z" level=info msg="CreateContainer within sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\""
Feb 13 15:26:34.930831 containerd[1471]: time="2025-02-13T15:26:34.930805316Z" level=info msg="StartContainer for \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\""
Feb 13 15:26:34.962455 systemd[1]: Started cri-containerd-032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a.scope - libcontainer container 032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a.
Feb 13 15:26:34.987266 containerd[1471]: time="2025-02-13T15:26:34.987177409Z" level=info msg="StartContainer for \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\" returns successfully"
Feb 13 15:26:35.156706 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:54670.service - OpenSSH per-connection server daemon (10.0.0.1:54670).
Feb 13 15:26:35.205672 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 54670 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:35.205374 sshd-session[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:35.207011 kubelet[2642]: I0213 15:26:35.206909    2642 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:26:35.237879 systemd-logind[1451]: New session 8 of user core.
Feb 13 15:26:35.244510 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:26:35.264190 kubelet[2642]: I0213 15:26:35.264023    2642 topology_manager.go:215] "Topology Admit Handler" podUID="ca282044-d1d6-4fa3-9bc3-6753265725f4" podNamespace="kube-system" podName="coredns-76f75df574-hzvsf"
Feb 13 15:26:35.277136 kubelet[2642]: I0213 15:26:35.275206    2642 topology_manager.go:215] "Topology Admit Handler" podUID="cf14d6d4-34d2-4c1b-b599-592c0b38046f" podNamespace="kube-system" podName="coredns-76f75df574-s2ncb"
Feb 13 15:26:35.287103 systemd[1]: Created slice kubepods-burstable-podca282044_d1d6_4fa3_9bc3_6753265725f4.slice - libcontainer container kubepods-burstable-podca282044_d1d6_4fa3_9bc3_6753265725f4.slice.
Feb 13 15:26:35.299056 systemd[1]: Created slice kubepods-burstable-podcf14d6d4_34d2_4c1b_b599_592c0b38046f.slice - libcontainer container kubepods-burstable-podcf14d6d4_34d2_4c1b_b599_592c0b38046f.slice.
Feb 13 15:26:35.391742 kubelet[2642]: I0213 15:26:35.383198    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhgb\" (UniqueName: \"kubernetes.io/projected/cf14d6d4-34d2-4c1b-b599-592c0b38046f-kube-api-access-7xhgb\") pod \"coredns-76f75df574-s2ncb\" (UID: \"cf14d6d4-34d2-4c1b-b599-592c0b38046f\") " pod="kube-system/coredns-76f75df574-s2ncb"
Feb 13 15:26:35.391742 kubelet[2642]: I0213 15:26:35.383258    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvcx7\" (UniqueName: \"kubernetes.io/projected/ca282044-d1d6-4fa3-9bc3-6753265725f4-kube-api-access-fvcx7\") pod \"coredns-76f75df574-hzvsf\" (UID: \"ca282044-d1d6-4fa3-9bc3-6753265725f4\") " pod="kube-system/coredns-76f75df574-hzvsf"
Feb 13 15:26:35.391742 kubelet[2642]: I0213 15:26:35.383296    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca282044-d1d6-4fa3-9bc3-6753265725f4-config-volume\") pod \"coredns-76f75df574-hzvsf\" (UID: \"ca282044-d1d6-4fa3-9bc3-6753265725f4\") " pod="kube-system/coredns-76f75df574-hzvsf"
Feb 13 15:26:35.391742 kubelet[2642]: I0213 15:26:35.383321    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cf14d6d4-34d2-4c1b-b599-592c0b38046f-config-volume\") pod \"coredns-76f75df574-s2ncb\" (UID: \"cf14d6d4-34d2-4c1b-b599-592c0b38046f\") " pod="kube-system/coredns-76f75df574-s2ncb"
Feb 13 15:26:35.531013 sshd[3373]: Connection closed by 10.0.0.1 port 54670
Feb 13 15:26:35.531470 sshd-session[3367]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:35.537664 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:26:35.540399 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:54670.service: Deactivated successfully.
Feb 13 15:26:35.548509 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:26:35.549718 systemd-logind[1451]: Removed session 8.
Feb 13 15:26:35.594322 kubelet[2642]: E0213 15:26:35.593907    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:35.596337 containerd[1471]: time="2025-02-13T15:26:35.596298551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hzvsf,Uid:ca282044-d1d6-4fa3-9bc3-6753265725f4,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:35.603187 kubelet[2642]: E0213 15:26:35.603146    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:35.603673 containerd[1471]: time="2025-02-13T15:26:35.603617261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s2ncb,Uid:cf14d6d4-34d2-4c1b-b599-592c0b38046f,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:35.869330 containerd[1471]: time="2025-02-13T15:26:35.869188560Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:26:35.871436 containerd[1471]: time="2025-02-13T15:26:35.871015807Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 15:26:35.872455 containerd[1471]: time="2025-02-13T15:26:35.872414747Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:26:35.874553 containerd[1471]: time="2025-02-13T15:26:35.874524239Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 4.982397522s"
Feb 13 15:26:35.874611 containerd[1471]: time="2025-02-13T15:26:35.874559764Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 15:26:35.878536 containerd[1471]: time="2025-02-13T15:26:35.878490662Z" level=info msg="CreateContainer within sandbox \"71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:26:35.900005 containerd[1471]: time="2025-02-13T15:26:35.899955476Z" level=info msg="CreateContainer within sandbox \"71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\""
Feb 13 15:26:35.900504 containerd[1471]: time="2025-02-13T15:26:35.900474797Z" level=info msg="StartContainer for \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\""
Feb 13 15:26:35.919280 kubelet[2642]: E0213 15:26:35.918632    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:35.940363 kubelet[2642]: I0213 15:26:35.940319    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-dgfnj" podStartSLOduration=6.109780242 podStartE2EDuration="12.939856467s" podCreationTimestamp="2025-02-13 15:26:23 +0000 UTC" firstStartedPulling="2025-02-13 15:26:24.061587763 +0000 UTC m=+16.326339599" lastFinishedPulling="2025-02-13 15:26:30.891663868 +0000 UTC m=+23.156415824" observedRunningTime="2025-02-13 15:26:35.939596346 +0000 UTC m=+28.204348302" watchObservedRunningTime="2025-02-13 15:26:35.939856467 +0000 UTC m=+28.204608343"
Feb 13 15:26:35.941829 systemd[1]: Started cri-containerd-0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f.scope - libcontainer container 0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f.
Feb 13 15:26:35.975491 containerd[1471]: time="2025-02-13T15:26:35.975433858Z" level=info msg="StartContainer for \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\" returns successfully"
Feb 13 15:26:36.919919 kubelet[2642]: E0213 15:26:36.919893    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:36.920432 kubelet[2642]: E0213 15:26:36.920416    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:36.931754 kubelet[2642]: I0213 15:26:36.931637    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-l88st" podStartSLOduration=2.117681465 podStartE2EDuration="13.931593941s" podCreationTimestamp="2025-02-13 15:26:23 +0000 UTC" firstStartedPulling="2025-02-13 15:26:24.062336754 +0000 UTC m=+16.327088630" lastFinishedPulling="2025-02-13 15:26:35.87624923 +0000 UTC m=+28.141001106" observedRunningTime="2025-02-13 15:26:36.93125745 +0000 UTC m=+29.196009326" watchObservedRunningTime="2025-02-13 15:26:36.931593941 +0000 UTC m=+29.196345817"
Feb 13 15:26:37.923097 kubelet[2642]: E0213 15:26:37.921227    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:37.923097 kubelet[2642]: E0213 15:26:37.921959    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:39.586190 systemd-networkd[1392]: cilium_host: Link UP
Feb 13 15:26:39.586998 systemd-networkd[1392]: cilium_net: Link UP
Feb 13 15:26:39.587001 systemd-networkd[1392]: cilium_net: Gained carrier
Feb 13 15:26:39.587240 systemd-networkd[1392]: cilium_host: Gained carrier
Feb 13 15:26:39.685184 systemd-networkd[1392]: cilium_vxlan: Link UP
Feb 13 15:26:39.685201 systemd-networkd[1392]: cilium_vxlan: Gained carrier
Feb 13 15:26:39.688140 systemd-networkd[1392]: cilium_net: Gained IPv6LL
Feb 13 15:26:39.888277 kubelet[2642]: E0213 15:26:39.887637    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:40.016330 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:26:40.199567 systemd-networkd[1392]: cilium_host: Gained IPv6LL
Feb 13 15:26:40.552941 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:54684.service - OpenSSH per-connection server daemon (10.0.0.1:54684).
Feb 13 15:26:40.605162 sshd[3823]: Accepted publickey for core from 10.0.0.1 port 54684 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:40.604842 sshd-session[3823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:40.611065 systemd-logind[1451]: New session 9 of user core.
Feb 13 15:26:40.615538 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:26:40.628455 systemd-networkd[1392]: lxc_health: Link UP
Feb 13 15:26:40.639168 systemd-networkd[1392]: lxc_health: Gained carrier
Feb 13 15:26:40.795422 sshd[3862]: Connection closed by 10.0.0.1 port 54684
Feb 13 15:26:40.791356 sshd-session[3823]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:40.808181 systemd-networkd[1392]: lxc238c7a16b6f4: Link UP
Feb 13 15:26:40.815379 kernel: eth0: renamed from tmp4be69
Feb 13 15:26:40.831224 systemd-networkd[1392]: lxc093caaa76f0d: Link UP
Feb 13 15:26:40.832512 kernel: eth0: renamed from tmp25480
Feb 13 15:26:40.842210 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:54684.service: Deactivated successfully.
Feb 13 15:26:40.846539 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:26:40.848700 systemd-networkd[1392]: lxc093caaa76f0d: Gained carrier
Feb 13 15:26:40.849106 systemd-networkd[1392]: lxc238c7a16b6f4: Gained carrier
Feb 13 15:26:40.860502 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:26:40.866775 systemd-logind[1451]: Removed session 9.
Feb 13 15:26:41.414502 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL
Feb 13 15:26:41.798507 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Feb 13 15:26:41.949528 kubelet[2642]: E0213 15:26:41.949259    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:42.183481 systemd-networkd[1392]: lxc238c7a16b6f4: Gained IPv6LL
Feb 13 15:26:42.694455 systemd-networkd[1392]: lxc093caaa76f0d: Gained IPv6LL
Feb 13 15:26:42.932897 kubelet[2642]: E0213 15:26:42.932695    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:44.620311 containerd[1471]: time="2025-02-13T15:26:44.620213630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:44.620689 containerd[1471]: time="2025-02-13T15:26:44.620587874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:44.620689 containerd[1471]: time="2025-02-13T15:26:44.620609556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:44.620737 containerd[1471]: time="2025-02-13T15:26:44.620692006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:44.628078 containerd[1471]: time="2025-02-13T15:26:44.627884558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:44.628078 containerd[1471]: time="2025-02-13T15:26:44.627933443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:44.628078 containerd[1471]: time="2025-02-13T15:26:44.627944725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:44.628078 containerd[1471]: time="2025-02-13T15:26:44.628030495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:44.642500 systemd[1]: Started cri-containerd-254800cef454e5868ebfc66219d87b176aeffc05de7e9ae976987cd7f6d301ab.scope - libcontainer container 254800cef454e5868ebfc66219d87b176aeffc05de7e9ae976987cd7f6d301ab.
Feb 13 15:26:44.647472 systemd[1]: Started cri-containerd-4be698f2446d777f5075119d2d7e86675e5d1486ebc73594de35472e14a10261.scope - libcontainer container 4be698f2446d777f5075119d2d7e86675e5d1486ebc73594de35472e14a10261.
Feb 13 15:26:44.657405 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:26:44.659613 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:26:44.678442 containerd[1471]: time="2025-02-13T15:26:44.677134575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-s2ncb,Uid:cf14d6d4-34d2-4c1b-b599-592c0b38046f,Namespace:kube-system,Attempt:0,} returns sandbox id \"254800cef454e5868ebfc66219d87b176aeffc05de7e9ae976987cd7f6d301ab\""
Feb 13 15:26:44.678575 kubelet[2642]: E0213 15:26:44.678206    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:44.681930 containerd[1471]: time="2025-02-13T15:26:44.681876204Z" level=info msg="CreateContainer within sandbox \"254800cef454e5868ebfc66219d87b176aeffc05de7e9ae976987cd7f6d301ab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:26:44.686835 containerd[1471]: time="2025-02-13T15:26:44.686715524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hzvsf,Uid:ca282044-d1d6-4fa3-9bc3-6753265725f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4be698f2446d777f5075119d2d7e86675e5d1486ebc73594de35472e14a10261\""
Feb 13 15:26:44.687510 kubelet[2642]: E0213 15:26:44.687488    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:44.690173 containerd[1471]: time="2025-02-13T15:26:44.690141320Z" level=info msg="CreateContainer within sandbox \"4be698f2446d777f5075119d2d7e86675e5d1486ebc73594de35472e14a10261\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:26:44.699787 containerd[1471]: time="2025-02-13T15:26:44.699734830Z" level=info msg="CreateContainer within sandbox \"254800cef454e5868ebfc66219d87b176aeffc05de7e9ae976987cd7f6d301ab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae7ca447fa48697e2ed01a417e9760972c2e72af4a24be6a925943c30dc568ea\""
Feb 13 15:26:44.700482 containerd[1471]: time="2025-02-13T15:26:44.700428190Z" level=info msg="StartContainer for \"ae7ca447fa48697e2ed01a417e9760972c2e72af4a24be6a925943c30dc568ea\""
Feb 13 15:26:44.714099 containerd[1471]: time="2025-02-13T15:26:44.714019603Z" level=info msg="CreateContainer within sandbox \"4be698f2446d777f5075119d2d7e86675e5d1486ebc73594de35472e14a10261\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ec84e860674ec2030522220bd0c7cfab50c63c091a4e81a1d47a28d36beaa35a\""
Feb 13 15:26:44.714689 containerd[1471]: time="2025-02-13T15:26:44.714626273Z" level=info msg="StartContainer for \"ec84e860674ec2030522220bd0c7cfab50c63c091a4e81a1d47a28d36beaa35a\""
Feb 13 15:26:44.726448 systemd[1]: Started cri-containerd-ae7ca447fa48697e2ed01a417e9760972c2e72af4a24be6a925943c30dc568ea.scope - libcontainer container ae7ca447fa48697e2ed01a417e9760972c2e72af4a24be6a925943c30dc568ea.
Feb 13 15:26:44.742481 systemd[1]: Started cri-containerd-ec84e860674ec2030522220bd0c7cfab50c63c091a4e81a1d47a28d36beaa35a.scope - libcontainer container ec84e860674ec2030522220bd0c7cfab50c63c091a4e81a1d47a28d36beaa35a.
Feb 13 15:26:44.772320 containerd[1471]: time="2025-02-13T15:26:44.769247152Z" level=info msg="StartContainer for \"ae7ca447fa48697e2ed01a417e9760972c2e72af4a24be6a925943c30dc568ea\" returns successfully"
Feb 13 15:26:44.774890 containerd[1471]: time="2025-02-13T15:26:44.774829558Z" level=info msg="StartContainer for \"ec84e860674ec2030522220bd0c7cfab50c63c091a4e81a1d47a28d36beaa35a\" returns successfully"
Feb 13 15:26:44.939422 kubelet[2642]: E0213 15:26:44.938191    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:44.942704 kubelet[2642]: E0213 15:26:44.942673    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:44.971028 kubelet[2642]: I0213 15:26:44.970976    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hzvsf" podStartSLOduration=21.970937166 podStartE2EDuration="21.970937166s" podCreationTimestamp="2025-02-13 15:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:44.970235005 +0000 UTC m=+37.234986881" watchObservedRunningTime="2025-02-13 15:26:44.970937166 +0000 UTC m=+37.235689042"
Feb 13 15:26:44.985732 kubelet[2642]: I0213 15:26:44.985322    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-s2ncb" podStartSLOduration=21.985261463 podStartE2EDuration="21.985261463s" podCreationTimestamp="2025-02-13 15:26:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:44.985253022 +0000 UTC m=+37.250004898" watchObservedRunningTime="2025-02-13 15:26:44.985261463 +0000 UTC m=+37.250013299"
Feb 13 15:26:45.798225 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:46958.service - OpenSSH per-connection server daemon (10.0.0.1:46958).
Feb 13 15:26:45.853939 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 46958 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:45.855636 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:45.859905 systemd-logind[1451]: New session 10 of user core.
Feb 13 15:26:45.866477 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:26:45.944139 kubelet[2642]: E0213 15:26:45.944095    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:45.944560 kubelet[2642]: E0213 15:26:45.944530    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:45.992530 sshd[4088]: Connection closed by 10.0.0.1 port 46958
Feb 13 15:26:45.993092 sshd-session[4086]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:46.005063 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:46958.service: Deactivated successfully.
Feb 13 15:26:46.006917 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:26:46.008642 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:26:46.010955 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:46962.service - OpenSSH per-connection server daemon (10.0.0.1:46962).
Feb 13 15:26:46.012129 systemd-logind[1451]: Removed session 10.
Feb 13 15:26:46.057719 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 46962 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:46.059616 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:46.066480 systemd-logind[1451]: New session 11 of user core.
Feb 13 15:26:46.076478 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:26:46.228005 sshd[4111]: Connection closed by 10.0.0.1 port 46962
Feb 13 15:26:46.229836 sshd-session[4104]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:46.238131 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:46962.service: Deactivated successfully.
Feb 13 15:26:46.240807 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:26:46.242590 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:26:46.255630 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:46978.service - OpenSSH per-connection server daemon (10.0.0.1:46978).
Feb 13 15:26:46.257866 systemd-logind[1451]: Removed session 11.
Feb 13 15:26:46.301486 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 46978 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:46.302891 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:46.306549 systemd-logind[1451]: New session 12 of user core.
Feb 13 15:26:46.317459 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:26:46.428485 sshd[4123]: Connection closed by 10.0.0.1 port 46978
Feb 13 15:26:46.428852 sshd-session[4121]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:46.432890 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:46978.service: Deactivated successfully.
Feb 13 15:26:46.436586 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:26:46.437471 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:26:46.438393 systemd-logind[1451]: Removed session 12.
Feb 13 15:26:46.946250 kubelet[2642]: E0213 15:26:46.945902    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:46.946250 kubelet[2642]: E0213 15:26:46.945933    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:26:51.450918 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:46988.service - OpenSSH per-connection server daemon (10.0.0.1:46988).
Feb 13 15:26:51.502071 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 46988 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:51.503364 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:51.508040 systemd-logind[1451]: New session 13 of user core.
Feb 13 15:26:51.521559 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:26:51.644152 sshd[4139]: Connection closed by 10.0.0.1 port 46988
Feb 13 15:26:51.644601 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:51.648064 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:46988.service: Deactivated successfully.
Feb 13 15:26:51.649778 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:26:51.652530 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:26:51.653636 systemd-logind[1451]: Removed session 13.
Feb 13 15:26:56.657399 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:51644.service - OpenSSH per-connection server daemon (10.0.0.1:51644).
Feb 13 15:26:56.710652 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 51644 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:56.711926 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:56.715666 systemd-logind[1451]: New session 14 of user core.
Feb 13 15:26:56.725480 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:26:56.873804 sshd[4155]: Connection closed by 10.0.0.1 port 51644
Feb 13 15:26:56.874528 sshd-session[4153]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:56.884836 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:51644.service: Deactivated successfully.
Feb 13 15:26:56.886637 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:26:56.888814 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:26:56.896594 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:51648.service - OpenSSH per-connection server daemon (10.0.0.1:51648).
Feb 13 15:26:56.897712 systemd-logind[1451]: Removed session 14.
Feb 13 15:26:56.944299 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 51648 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:56.946048 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:56.950830 systemd-logind[1451]: New session 15 of user core.
Feb 13 15:26:56.961488 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:26:57.260790 sshd[4169]: Connection closed by 10.0.0.1 port 51648
Feb 13 15:26:57.261484 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:57.269984 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:51648.service: Deactivated successfully.
Feb 13 15:26:57.272787 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:26:57.274247 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:26:57.276225 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:51654.service - OpenSSH per-connection server daemon (10.0.0.1:51654).
Feb 13 15:26:57.277243 systemd-logind[1451]: Removed session 15.
Feb 13 15:26:57.343427 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 51654 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:57.344996 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:57.351369 systemd-logind[1451]: New session 16 of user core.
Feb 13 15:26:57.368702 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:26:58.680384 sshd[4181]: Connection closed by 10.0.0.1 port 51654
Feb 13 15:26:58.680885 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:58.691843 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:51654.service: Deactivated successfully.
Feb 13 15:26:58.694171 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:26:58.698159 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:26:58.703612 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:51660.service - OpenSSH per-connection server daemon (10.0.0.1:51660).
Feb 13 15:26:58.706053 systemd-logind[1451]: Removed session 16.
Feb 13 15:26:58.751155 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 51660 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:58.752737 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:58.757397 systemd-logind[1451]: New session 17 of user core.
Feb 13 15:26:58.764479 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:26:58.994074 sshd[4202]: Connection closed by 10.0.0.1 port 51660
Feb 13 15:26:58.993223 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:59.006572 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:51660.service: Deactivated successfully.
Feb 13 15:26:59.008825 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:26:59.013607 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:26:59.023654 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:51668.service - OpenSSH per-connection server daemon (10.0.0.1:51668).
Feb 13 15:26:59.024876 systemd-logind[1451]: Removed session 17.
Feb 13 15:26:59.067180 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 51668 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:26:59.070265 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:59.077645 systemd-logind[1451]: New session 18 of user core.
Feb 13 15:26:59.085547 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:26:59.202538 sshd[4215]: Connection closed by 10.0.0.1 port 51668
Feb 13 15:26:59.202961 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:59.206807 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:51668.service: Deactivated successfully.
Feb 13 15:26:59.208574 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:26:59.209256 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:26:59.210076 systemd-logind[1451]: Removed session 18.
Feb 13 15:27:04.214427 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:46690.service - OpenSSH per-connection server daemon (10.0.0.1:46690).
Feb 13 15:27:04.263841 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 46690 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:27:04.265094 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:04.270159 systemd-logind[1451]: New session 19 of user core.
Feb 13 15:27:04.280473 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:27:04.389211 sshd[4233]: Connection closed by 10.0.0.1 port 46690
Feb 13 15:27:04.389554 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:04.392878 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:46690.service: Deactivated successfully.
Feb 13 15:27:04.394427 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:27:04.395602 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:27:04.397033 systemd-logind[1451]: Removed session 19.
Feb 13 15:27:09.403172 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:46706.service - OpenSSH per-connection server daemon (10.0.0.1:46706).
Feb 13 15:27:09.460989 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 46706 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:27:09.463248 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:09.467967 systemd-logind[1451]: New session 20 of user core.
Feb 13 15:27:09.479494 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:27:09.605196 sshd[4249]: Connection closed by 10.0.0.1 port 46706
Feb 13 15:27:09.605743 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:09.609433 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:46706.service: Deactivated successfully.
Feb 13 15:27:09.609436 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:27:09.611525 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:27:09.612469 systemd-logind[1451]: Removed session 20.
Feb 13 15:27:14.620547 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:34012.service - OpenSSH per-connection server daemon (10.0.0.1:34012).
Feb 13 15:27:14.675074 sshd[4261]: Accepted publickey for core from 10.0.0.1 port 34012 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:27:14.676406 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:14.685018 systemd-logind[1451]: New session 21 of user core.
Feb 13 15:27:14.693477 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:27:14.808757 sshd[4265]: Connection closed by 10.0.0.1 port 34012
Feb 13 15:27:14.808371 sshd-session[4261]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:14.818126 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:34012.service: Deactivated successfully.
Feb 13 15:27:14.820223 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:27:14.821660 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:27:14.829637 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:34014.service - OpenSSH per-connection server daemon (10.0.0.1:34014).
Feb 13 15:27:14.831275 systemd-logind[1451]: Removed session 21.
Feb 13 15:27:14.874245 sshd[4277]: Accepted publickey for core from 10.0.0.1 port 34014 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:27:14.875498 sshd-session[4277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:14.882996 systemd-logind[1451]: New session 22 of user core.
Feb 13 15:27:14.888489 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:27:17.207885 containerd[1471]: time="2025-02-13T15:27:17.207806006Z" level=info msg="StopContainer for \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\" with timeout 30 (s)"
Feb 13 15:27:17.208678 containerd[1471]: time="2025-02-13T15:27:17.208644358Z" level=info msg="Stop container \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\" with signal terminated"
Feb 13 15:27:17.222100 systemd[1]: cri-containerd-0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f.scope: Deactivated successfully.
Feb 13 15:27:17.250131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f-rootfs.mount: Deactivated successfully.
Feb 13 15:27:17.253941 containerd[1471]: time="2025-02-13T15:27:17.253837553Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:27:17.259709 containerd[1471]: time="2025-02-13T15:27:17.259651183Z" level=info msg="StopContainer for \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\" with timeout 2 (s)"
Feb 13 15:27:17.260091 containerd[1471]: time="2025-02-13T15:27:17.259955326Z" level=info msg="Stop container \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\" with signal terminated"
Feb 13 15:27:17.262231 containerd[1471]: time="2025-02-13T15:27:17.262174760Z" level=info msg="shim disconnected" id=0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f namespace=k8s.io
Feb 13 15:27:17.262231 containerd[1471]: time="2025-02-13T15:27:17.262233236Z" level=warning msg="cleaning up after shim disconnected" id=0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f namespace=k8s.io
Feb 13 15:27:17.262363 containerd[1471]: time="2025-02-13T15:27:17.262243836Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:17.266532 systemd-networkd[1392]: lxc_health: Link DOWN
Feb 13 15:27:17.266538 systemd-networkd[1392]: lxc_health: Lost carrier
Feb 13 15:27:17.284612 systemd[1]: cri-containerd-032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a.scope: Deactivated successfully.
Feb 13 15:27:17.285084 systemd[1]: cri-containerd-032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a.scope: Consumed 7.143s CPU time.
Feb 13 15:27:17.305656 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a-rootfs.mount: Deactivated successfully.
Feb 13 15:27:17.314329 containerd[1471]: time="2025-02-13T15:27:17.314233605Z" level=info msg="StopContainer for \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\" returns successfully"
Feb 13 15:27:17.314641 containerd[1471]: time="2025-02-13T15:27:17.314561546Z" level=info msg="shim disconnected" id=032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a namespace=k8s.io
Feb 13 15:27:17.314641 containerd[1471]: time="2025-02-13T15:27:17.314629822Z" level=warning msg="cleaning up after shim disconnected" id=032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a namespace=k8s.io
Feb 13 15:27:17.314641 containerd[1471]: time="2025-02-13T15:27:17.314639742Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:17.316958 containerd[1471]: time="2025-02-13T15:27:17.316760621Z" level=info msg="StopPodSandbox for \"71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f\""
Feb 13 15:27:17.316958 containerd[1471]: time="2025-02-13T15:27:17.316834977Z" level=info msg="Container to stop \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:27:17.318712 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f-shm.mount: Deactivated successfully.
Feb 13 15:27:17.323766 systemd[1]: cri-containerd-71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f.scope: Deactivated successfully.
Feb 13 15:27:17.331151 containerd[1471]: time="2025-02-13T15:27:17.331097127Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:27:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:27:17.334235 containerd[1471]: time="2025-02-13T15:27:17.334189912Z" level=info msg="StopContainer for \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\" returns successfully"
Feb 13 15:27:17.334892 containerd[1471]: time="2025-02-13T15:27:17.334860074Z" level=info msg="StopPodSandbox for \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\""
Feb 13 15:27:17.334950 containerd[1471]: time="2025-02-13T15:27:17.334913671Z" level=info msg="Container to stop \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:27:17.334950 containerd[1471]: time="2025-02-13T15:27:17.334924830Z" level=info msg="Container to stop \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:27:17.334950 containerd[1471]: time="2025-02-13T15:27:17.334932950Z" level=info msg="Container to stop \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:27:17.334950 containerd[1471]: time="2025-02-13T15:27:17.334942429Z" level=info msg="Container to stop \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:27:17.335122 containerd[1471]: time="2025-02-13T15:27:17.334952589Z" level=info msg="Container to stop \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:27:17.336601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395-shm.mount: Deactivated successfully.
Feb 13 15:27:17.342616 systemd[1]: cri-containerd-11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395.scope: Deactivated successfully.
Feb 13 15:27:17.358834 containerd[1471]: time="2025-02-13T15:27:17.358761117Z" level=info msg="shim disconnected" id=71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f namespace=k8s.io
Feb 13 15:27:17.358834 containerd[1471]: time="2025-02-13T15:27:17.358826073Z" level=warning msg="cleaning up after shim disconnected" id=71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f namespace=k8s.io
Feb 13 15:27:17.358834 containerd[1471]: time="2025-02-13T15:27:17.358836313Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:17.373359 containerd[1471]: time="2025-02-13T15:27:17.373269174Z" level=info msg="TearDown network for sandbox \"71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f\" successfully"
Feb 13 15:27:17.373359 containerd[1471]: time="2025-02-13T15:27:17.373340809Z" level=info msg="StopPodSandbox for \"71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f\" returns successfully"
Feb 13 15:27:17.387723 containerd[1471]: time="2025-02-13T15:27:17.387635278Z" level=info msg="shim disconnected" id=11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395 namespace=k8s.io
Feb 13 15:27:17.387723 containerd[1471]: time="2025-02-13T15:27:17.387696315Z" level=warning msg="cleaning up after shim disconnected" id=11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395 namespace=k8s.io
Feb 13 15:27:17.387723 containerd[1471]: time="2025-02-13T15:27:17.387707354Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:17.403274 containerd[1471]: time="2025-02-13T15:27:17.403134518Z" level=info msg="TearDown network for sandbox \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" successfully"
Feb 13 15:27:17.403274 containerd[1471]: time="2025-02-13T15:27:17.403176796Z" level=info msg="StopPodSandbox for \"11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395\" returns successfully"
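The containerd entries above follow a fixed pattern for each workload: a StopContainer request with a timeout, the cri-containerd scope deactivating, shim disconnect and cleanup messages, a "returns successfully" line, and then the same cycle for the pod sandbox (StopPodSandbox, TearDown network). A minimal sketch of how such lines could be paired programmatically, assuming only the timestamp prefix and the escaped quoting (\") visible in the lines above, in Python:

    import re
    import sys
    from datetime import datetime

    # Assumed line shape, taken from the entries above:
    # "<Mon> <day> <time> containerd[<pid>]: ... msg="StopContainer for \"<64-hex id>\" <detail>""
    STOP_RE = re.compile(
        r'^(\w{3} +\d+ +[\d:.]+) containerd\[\d+\]: '
        r'.*msg="StopContainer for \\"([0-9a-f]{64})\\" (.+)"$'
    )

    def stop_durations(lines):
        """Pair each 'with timeout' request with its 'returns successfully' line."""
        pending = {}
        for line in lines:
            m = STOP_RE.match(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
            cid, detail = m.group(2), m.group(3)
            if detail.startswith("with timeout"):
                pending[cid] = ts
            elif detail == "returns successfully" and cid in pending:
                yield cid[:12], (ts - pending.pop(cid)).total_seconds()

    if __name__ == "__main__":
        for cid, seconds in stop_durations(sys.stdin):
            print(f"{cid}  stopped after {seconds:.3f}s")

Fed this section on standard input, the sketch would report the wall time between each stop request and its completion; the regex reflects only the formatting shown here, not any containerd API.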
Feb 13 15:27:17.563804 kubelet[2642]: I0213 15:27:17.563768    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smlms\" (UniqueName: \"kubernetes.io/projected/20751177-dc28-4b5e-b54a-0fd4e3679a3b-kube-api-access-smlms\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.563804 kubelet[2642]: I0213 15:27:17.563814    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20751177-dc28-4b5e-b54a-0fd4e3679a3b-clustermesh-secrets\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564257 kubelet[2642]: I0213 15:27:17.563835    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-host-proc-sys-net\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564257 kubelet[2642]: I0213 15:27:17.563852    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-xtables-lock\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564257 kubelet[2642]: I0213 15:27:17.563872    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-host-proc-sys-kernel\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564257 kubelet[2642]: I0213 15:27:17.563890    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20751177-dc28-4b5e-b54a-0fd4e3679a3b-hubble-tls\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564257 kubelet[2642]: I0213 15:27:17.563912    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrxxd\" (UniqueName: \"kubernetes.io/projected/9bd2e66b-9891-4f47-a75c-67d2a2c78c22-kube-api-access-qrxxd\") pod \"9bd2e66b-9891-4f47-a75c-67d2a2c78c22\" (UID: \"9bd2e66b-9891-4f47-a75c-67d2a2c78c22\") "
Feb 13 15:27:17.564257 kubelet[2642]: I0213 15:27:17.563956    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-lib-modules\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564430 kubelet[2642]: I0213 15:27:17.563974    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-hostproc\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564430 kubelet[2642]: I0213 15:27:17.564002    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-run\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564430 kubelet[2642]: I0213 15:27:17.564027    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-config-path\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564430 kubelet[2642]: I0213 15:27:17.564047    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bd2e66b-9891-4f47-a75c-67d2a2c78c22-cilium-config-path\") pod \"9bd2e66b-9891-4f47-a75c-67d2a2c78c22\" (UID: \"9bd2e66b-9891-4f47-a75c-67d2a2c78c22\") "
Feb 13 15:27:17.564430 kubelet[2642]: I0213 15:27:17.564064    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-etc-cni-netd\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564430 kubelet[2642]: I0213 15:27:17.564080    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-bpf-maps\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564553 kubelet[2642]: I0213 15:27:17.564102    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-cgroup\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.564553 kubelet[2642]: I0213 15:27:17.564119    2642 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cni-path\") pod \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\" (UID: \"20751177-dc28-4b5e-b54a-0fd4e3679a3b\") "
Feb 13 15:27:17.571897 kubelet[2642]: I0213 15:27:17.571749    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.571897 kubelet[2642]: I0213 15:27:17.571871    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.571897 kubelet[2642]: I0213 15:27:17.571897    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.572068 kubelet[2642]: I0213 15:27:17.571914    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.572374 kubelet[2642]: I0213 15:27:17.572341    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cni-path" (OuterVolumeSpecName: "cni-path") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.574118 kubelet[2642]: I0213 15:27:17.574077    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:27:17.576320 kubelet[2642]: I0213 15:27:17.575772    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bd2e66b-9891-4f47-a75c-67d2a2c78c22-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9bd2e66b-9891-4f47-a75c-67d2a2c78c22" (UID: "9bd2e66b-9891-4f47-a75c-67d2a2c78c22"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:27:17.576320 kubelet[2642]: I0213 15:27:17.575825    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.576320 kubelet[2642]: I0213 15:27:17.575844    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.576320 kubelet[2642]: I0213 15:27:17.575861    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.576320 kubelet[2642]: I0213 15:27:17.575885    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.578453 kubelet[2642]: I0213 15:27:17.578379    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-hostproc" (OuterVolumeSpecName: "hostproc") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:27:17.578953 kubelet[2642]: I0213 15:27:17.578916    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd2e66b-9891-4f47-a75c-67d2a2c78c22-kube-api-access-qrxxd" (OuterVolumeSpecName: "kube-api-access-qrxxd") pod "9bd2e66b-9891-4f47-a75c-67d2a2c78c22" (UID: "9bd2e66b-9891-4f47-a75c-67d2a2c78c22"). InnerVolumeSpecName "kube-api-access-qrxxd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:27:17.579182 kubelet[2642]: I0213 15:27:17.579111    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20751177-dc28-4b5e-b54a-0fd4e3679a3b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:27:17.579994 kubelet[2642]: I0213 15:27:17.579959    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20751177-dc28-4b5e-b54a-0fd4e3679a3b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:27:17.580326 kubelet[2642]: I0213 15:27:17.580269    2642 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20751177-dc28-4b5e-b54a-0fd4e3679a3b-kube-api-access-smlms" (OuterVolumeSpecName: "kube-api-access-smlms") pod "20751177-dc28-4b5e-b54a-0fd4e3679a3b" (UID: "20751177-dc28-4b5e-b54a-0fd4e3679a3b"). InnerVolumeSpecName "kube-api-access-smlms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:27:17.664636 kubelet[2642]: I0213 15:27:17.664403    2642 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664636 kubelet[2642]: I0213 15:27:17.664470    2642 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cni-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664636 kubelet[2642]: I0213 15:27:17.664484    2642 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-smlms\" (UniqueName: \"kubernetes.io/projected/20751177-dc28-4b5e-b54a-0fd4e3679a3b-kube-api-access-smlms\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664636 kubelet[2642]: I0213 15:27:17.664496    2642 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/20751177-dc28-4b5e-b54a-0fd4e3679a3b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664636 kubelet[2642]: I0213 15:27:17.664506    2642 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664636 kubelet[2642]: I0213 15:27:17.664515    2642 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-xtables-lock\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664636 kubelet[2642]: I0213 15:27:17.664527    2642 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664636 kubelet[2642]: I0213 15:27:17.664537    2642 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/20751177-dc28-4b5e-b54a-0fd4e3679a3b-hubble-tls\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664910 kubelet[2642]: I0213 15:27:17.664548    2642 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qrxxd\" (UniqueName: \"kubernetes.io/projected/9bd2e66b-9891-4f47-a75c-67d2a2c78c22-kube-api-access-qrxxd\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664910 kubelet[2642]: I0213 15:27:17.664559    2642 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-lib-modules\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664910 kubelet[2642]: I0213 15:27:17.664569    2642 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-hostproc\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664910 kubelet[2642]: I0213 15:27:17.664578    2642 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-run\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664910 kubelet[2642]: I0213 15:27:17.664587    2642 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/20751177-dc28-4b5e-b54a-0fd4e3679a3b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664910 kubelet[2642]: I0213 15:27:17.664597    2642 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9bd2e66b-9891-4f47-a75c-67d2a2c78c22-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664910 kubelet[2642]: I0213 15:27:17.664606    2642 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.664910 kubelet[2642]: I0213 15:27:17.664615    2642 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/20751177-dc28-4b5e-b54a-0fd4e3679a3b-bpf-maps\") on node \"localhost\" DevicePath \"\""
Feb 13 15:27:17.823869 systemd[1]: Removed slice kubepods-burstable-pod20751177_dc28_4b5e_b54a_0fd4e3679a3b.slice - libcontainer container kubepods-burstable-pod20751177_dc28_4b5e_b54a_0fd4e3679a3b.slice.
Feb 13 15:27:17.823957 systemd[1]: kubepods-burstable-pod20751177_dc28_4b5e_b54a_0fd4e3679a3b.slice: Consumed 7.296s CPU time.
Feb 13 15:27:17.825454 systemd[1]: Removed slice kubepods-besteffort-pod9bd2e66b_9891_4f47_a75c_67d2a2c78c22.slice - libcontainer container kubepods-besteffort-pod9bd2e66b_9891_4f47_a75c_67d2a2c78c22.slice.
Feb 13 15:27:17.895618 kubelet[2642]: E0213 15:27:17.895568    2642 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:27:18.035629 kubelet[2642]: I0213 15:27:18.035474    2642 scope.go:117] "RemoveContainer" containerID="032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a"
Feb 13 15:27:18.038212 containerd[1471]: time="2025-02-13T15:27:18.037816491Z" level=info msg="RemoveContainer for \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\""
Feb 13 15:27:18.041129 containerd[1471]: time="2025-02-13T15:27:18.041084916Z" level=info msg="RemoveContainer for \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\" returns successfully"
Feb 13 15:27:18.041532 kubelet[2642]: I0213 15:27:18.041506    2642 scope.go:117] "RemoveContainer" containerID="175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d"
Feb 13 15:27:18.042872 containerd[1471]: time="2025-02-13T15:27:18.042830183Z" level=info msg="RemoveContainer for \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\""
Feb 13 15:27:18.046526 containerd[1471]: time="2025-02-13T15:27:18.046480428Z" level=info msg="RemoveContainer for \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\" returns successfully"
Feb 13 15:27:18.047044 kubelet[2642]: I0213 15:27:18.047005    2642 scope.go:117] "RemoveContainer" containerID="673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f"
Feb 13 15:27:18.049712 containerd[1471]: time="2025-02-13T15:27:18.049671857Z" level=info msg="RemoveContainer for \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\""
Feb 13 15:27:18.058262 containerd[1471]: time="2025-02-13T15:27:18.058204761Z" level=info msg="RemoveContainer for \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\" returns successfully"
Feb 13 15:27:18.058650 kubelet[2642]: I0213 15:27:18.058568    2642 scope.go:117] "RemoveContainer" containerID="89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292"
Feb 13 15:27:18.063635 containerd[1471]: time="2025-02-13T15:27:18.063091979Z" level=info msg="RemoveContainer for \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\""
Feb 13 15:27:18.066993 containerd[1471]: time="2025-02-13T15:27:18.066930134Z" level=info msg="RemoveContainer for \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\" returns successfully"
Feb 13 15:27:18.067249 kubelet[2642]: I0213 15:27:18.067215    2642 scope.go:117] "RemoveContainer" containerID="b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b"
Feb 13 15:27:18.068682 containerd[1471]: time="2025-02-13T15:27:18.068649242Z" level=info msg="RemoveContainer for \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\""
Feb 13 15:27:18.076406 containerd[1471]: time="2025-02-13T15:27:18.076266794Z" level=info msg="RemoveContainer for \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\" returns successfully"
Feb 13 15:27:18.076594 kubelet[2642]: I0213 15:27:18.076538    2642 scope.go:117] "RemoveContainer" containerID="032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a"
Feb 13 15:27:18.077015 containerd[1471]: time="2025-02-13T15:27:18.076827285Z" level=error msg="ContainerStatus for \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\": not found"
Feb 13 15:27:18.088942 kubelet[2642]: E0213 15:27:18.088904    2642 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\": not found" containerID="032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a"
Feb 13 15:27:18.092494 kubelet[2642]: I0213 15:27:18.092093    2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a"} err="failed to get container status \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\": rpc error: code = NotFound desc = an error occurred when try to find container \"032d4aec648e409325e36adfc35c27d01e3a05073d74ac17de4c73da7b37153a\": not found"
Feb 13 15:27:18.092494 kubelet[2642]: I0213 15:27:18.092149    2642 scope.go:117] "RemoveContainer" containerID="175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d"
Feb 13 15:27:18.092642 containerd[1471]: time="2025-02-13T15:27:18.092550364Z" level=error msg="ContainerStatus for \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\": not found"
Feb 13 15:27:18.092821 kubelet[2642]: E0213 15:27:18.092749    2642 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\": not found" containerID="175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d"
Feb 13 15:27:18.092821 kubelet[2642]: I0213 15:27:18.092793    2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d"} err="failed to get container status \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\": rpc error: code = NotFound desc = an error occurred when try to find container \"175213ccb3bff7b91ce418151fd560267157049b4a95e75071879982e363801d\": not found"
Feb 13 15:27:18.092821 kubelet[2642]: I0213 15:27:18.092808    2642 scope.go:117] "RemoveContainer" containerID="673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f"
Feb 13 15:27:18.093180 kubelet[2642]: E0213 15:27:18.093160    2642 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\": not found" containerID="673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f"
Feb 13 15:27:18.093214 containerd[1471]: time="2025-02-13T15:27:18.093008619Z" level=error msg="ContainerStatus for \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\": not found"
Feb 13 15:27:18.093241 kubelet[2642]: I0213 15:27:18.093191    2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f"} err="failed to get container status \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\": rpc error: code = NotFound desc = an error occurred when try to find container \"673bdc065ab05d0af7a1e451133cb75fb9ad0f2b637701e5e7bbe08997b6a26f\": not found"
Feb 13 15:27:18.093241 kubelet[2642]: I0213 15:27:18.093202    2642 scope.go:117] "RemoveContainer" containerID="89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292"
Feb 13 15:27:18.093417 containerd[1471]: time="2025-02-13T15:27:18.093381959Z" level=error msg="ContainerStatus for \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\": not found"
Feb 13 15:27:18.094170 kubelet[2642]: E0213 15:27:18.093572    2642 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\": not found" containerID="89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292"
Feb 13 15:27:18.094170 kubelet[2642]: I0213 15:27:18.093607    2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292"} err="failed to get container status \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\": rpc error: code = NotFound desc = an error occurred when try to find container \"89f8b07440ce741377345d4321e25c6d3df90ec897697154c9374bb63a9a6292\": not found"
Feb 13 15:27:18.094170 kubelet[2642]: I0213 15:27:18.093621    2642 scope.go:117] "RemoveContainer" containerID="b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b"
Feb 13 15:27:18.094170 kubelet[2642]: E0213 15:27:18.094013    2642 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\": not found" containerID="b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b"
Feb 13 15:27:18.094170 kubelet[2642]: I0213 15:27:18.094065    2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b"} err="failed to get container status \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\": not found"
Feb 13 15:27:18.094170 kubelet[2642]: I0213 15:27:18.094077    2642 scope.go:117] "RemoveContainer" containerID="0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f"
Feb 13 15:27:18.094373 containerd[1471]: time="2025-02-13T15:27:18.093808296Z" level=error msg="ContainerStatus for \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7acb5ee226562f51c446046e6011e400da017e3ef008cdbbe22746446be3f1b\": not found"
Feb 13 15:27:18.095021 containerd[1471]: time="2025-02-13T15:27:18.094977234Z" level=info msg="RemoveContainer for \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\""
Feb 13 15:27:18.097765 containerd[1471]: time="2025-02-13T15:27:18.097394384Z" level=info msg="RemoveContainer for \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\" returns successfully"
Feb 13 15:27:18.097831 kubelet[2642]: I0213 15:27:18.097569    2642 scope.go:117] "RemoveContainer" containerID="0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f"
Feb 13 15:27:18.097872 containerd[1471]: time="2025-02-13T15:27:18.097767525Z" level=error msg="ContainerStatus for \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\": not found"
Feb 13 15:27:18.097969 kubelet[2642]: E0213 15:27:18.097903    2642 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\": not found" containerID="0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f"
Feb 13 15:27:18.097969 kubelet[2642]: I0213 15:27:18.097940    2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f"} err="failed to get container status \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d4f3e683a559590b54e0673e77f23f5b25ce463e59da98c7c20afb152c5b54f\": not found"
Feb 13 15:27:18.221683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11d28ac8e2a03f21a3893539a5b90a1b65b4858a3cf3bbe91d4a063558c2f395-rootfs.mount: Deactivated successfully.
Feb 13 15:27:18.221793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71c803521ba02d9e4616f819a20c8a24a996a81615dd7c7af989136eb8bc299f-rootfs.mount: Deactivated successfully.
Feb 13 15:27:18.221843 systemd[1]: var-lib-kubelet-pods-20751177\x2ddc28\x2d4b5e\x2db54a\x2d0fd4e3679a3b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsmlms.mount: Deactivated successfully.
Feb 13 15:27:18.221901 systemd[1]: var-lib-kubelet-pods-20751177\x2ddc28\x2d4b5e\x2db54a\x2d0fd4e3679a3b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:27:18.221963 systemd[1]: var-lib-kubelet-pods-20751177\x2ddc28\x2d4b5e\x2db54a\x2d0fd4e3679a3b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:27:18.222018 systemd[1]: var-lib-kubelet-pods-9bd2e66b\x2d9891\x2d4f47\x2da75c\x2d67d2a2c78c22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqrxxd.mount: Deactivated successfully.
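The long mount unit names above are systemd's escaped form of the kubelet volume paths: the leading "/" is dropped, the remaining "/" separators become "-", and characters such as "-" and "~" inside path components are hex-escaped to \x2d and \x7e. A minimal sketch of that mapping in Python (an approximation of systemd-escape --path --suffix=mount, not the full escaping rules):

    def mount_unit_name(path: str) -> str:
        # Keep alphanumerics plus "_", "." and ":" as-is, turn "/" into "-",
        # and hex-escape everything else (so "-" becomes \x2d and "~" becomes \x7e).
        out = []
        for ch in path.lstrip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.:":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out) + ".mount"

    print(mount_unit_name(
        "/var/lib/kubelet/pods/20751177-dc28-4b5e-b54a-0fd4e3679a3b"
        "/volumes/kubernetes.io~projected/kube-api-access-smlms"))

For the kube-api-access-smlms path this reproduces the var-lib-kubelet-pods-20751177\x2d... unit name whose deactivation is logged above.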
Feb 13 15:27:19.149402 sshd[4279]: Connection closed by 10.0.0.1 port 34014
Feb 13 15:27:19.149876 sshd-session[4277]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:19.160876 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:34014.service: Deactivated successfully.
Feb 13 15:27:19.162410 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:27:19.162564 systemd[1]: session-22.scope: Consumed 1.610s CPU time.
Feb 13 15:27:19.163654 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:27:19.165121 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:34028.service - OpenSSH per-connection server daemon (10.0.0.1:34028).
Feb 13 15:27:19.166255 systemd-logind[1451]: Removed session 22.
Feb 13 15:27:19.234551 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 34028 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:27:19.235988 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:19.240231 systemd-logind[1451]: New session 23 of user core.
Feb 13 15:27:19.252011 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:27:19.661831 kubelet[2642]: I0213 15:27:19.661462    2642 setters.go:568] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:27:19Z","lastTransitionTime":"2025-02-13T15:27:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:27:19.813705 kubelet[2642]: I0213 15:27:19.812881    2642 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="20751177-dc28-4b5e-b54a-0fd4e3679a3b" path="/var/lib/kubelet/pods/20751177-dc28-4b5e-b54a-0fd4e3679a3b/volumes"
Feb 13 15:27:19.813705 kubelet[2642]: I0213 15:27:19.813461    2642 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9bd2e66b-9891-4f47-a75c-67d2a2c78c22" path="/var/lib/kubelet/pods/9bd2e66b-9891-4f47-a75c-67d2a2c78c22/volumes"
Feb 13 15:27:19.937400 sshd[4445]: Connection closed by 10.0.0.1 port 34028
Feb 13 15:27:19.938385 sshd-session[4443]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:19.947630 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:34028.service: Deactivated successfully.
Feb 13 15:27:19.953371 kubelet[2642]: I0213 15:27:19.951196    2642 topology_manager.go:215] "Topology Admit Handler" podUID="32a2baea-ed1d-479d-abd9-c184b922d464" podNamespace="kube-system" podName="cilium-vbq4t"
Feb 13 15:27:19.953371 kubelet[2642]: E0213 15:27:19.951258    2642 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20751177-dc28-4b5e-b54a-0fd4e3679a3b" containerName="mount-cgroup"
Feb 13 15:27:19.953371 kubelet[2642]: E0213 15:27:19.951307    2642 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20751177-dc28-4b5e-b54a-0fd4e3679a3b" containerName="mount-bpf-fs"
Feb 13 15:27:19.953371 kubelet[2642]: E0213 15:27:19.951318    2642 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20751177-dc28-4b5e-b54a-0fd4e3679a3b" containerName="clean-cilium-state"
Feb 13 15:27:19.953371 kubelet[2642]: E0213 15:27:19.951328    2642 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20751177-dc28-4b5e-b54a-0fd4e3679a3b" containerName="apply-sysctl-overwrites"
Feb 13 15:27:19.953371 kubelet[2642]: E0213 15:27:19.951334    2642 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="20751177-dc28-4b5e-b54a-0fd4e3679a3b" containerName="cilium-agent"
Feb 13 15:27:19.953371 kubelet[2642]: E0213 15:27:19.951343    2642 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9bd2e66b-9891-4f47-a75c-67d2a2c78c22" containerName="cilium-operator"
Feb 13 15:27:19.951615 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:27:19.954981 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:27:19.965755 kubelet[2642]: I0213 15:27:19.965068    2642 memory_manager.go:354] "RemoveStaleState removing state" podUID="20751177-dc28-4b5e-b54a-0fd4e3679a3b" containerName="cilium-agent"
Feb 13 15:27:19.965755 kubelet[2642]: I0213 15:27:19.965115    2642 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd2e66b-9891-4f47-a75c-67d2a2c78c22" containerName="cilium-operator"
Feb 13 15:27:19.968508 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:34034.service - OpenSSH per-connection server daemon (10.0.0.1:34034).
Feb 13 15:27:19.973837 systemd-logind[1451]: Removed session 23.
Feb 13 15:27:19.984381 systemd[1]: Created slice kubepods-burstable-pod32a2baea_ed1d_479d_abd9_c184b922d464.slice - libcontainer container kubepods-burstable-pod32a2baea_ed1d_479d_abd9_c184b922d464.slice.
Feb 13 15:27:20.025812 sshd[4456]: Accepted publickey for core from 10.0.0.1 port 34034 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:27:20.027204 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:20.032076 systemd-logind[1451]: New session 24 of user core.
Feb 13 15:27:20.041505 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:27:20.078771 kubelet[2642]: I0213 15:27:20.078671    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-hostproc\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.078771 kubelet[2642]: I0213 15:27:20.078719    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/32a2baea-ed1d-479d-abd9-c184b922d464-cilium-ipsec-secrets\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.078771 kubelet[2642]: I0213 15:27:20.078746    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-cni-path\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079149 kubelet[2642]: I0213 15:27:20.078796    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-xtables-lock\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079149 kubelet[2642]: I0213 15:27:20.078834    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32a2baea-ed1d-479d-abd9-c184b922d464-cilium-config-path\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079149 kubelet[2642]: I0213 15:27:20.078880    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-host-proc-sys-kernel\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079149 kubelet[2642]: I0213 15:27:20.078902    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32a2baea-ed1d-479d-abd9-c184b922d464-hubble-tls\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079149 kubelet[2642]: I0213 15:27:20.078933    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-lib-modules\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079149 kubelet[2642]: I0213 15:27:20.078983    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-bpf-maps\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079281 kubelet[2642]: I0213 15:27:20.079023    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-cilium-cgroup\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079281 kubelet[2642]: I0213 15:27:20.079096    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-cilium-run\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079281 kubelet[2642]: I0213 15:27:20.079158    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-etc-cni-netd\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079281 kubelet[2642]: I0213 15:27:20.079182    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32a2baea-ed1d-479d-abd9-c184b922d464-host-proc-sys-net\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079281 kubelet[2642]: I0213 15:27:20.079208    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr55p\" (UniqueName: \"kubernetes.io/projected/32a2baea-ed1d-479d-abd9-c184b922d464-kube-api-access-jr55p\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.079281 kubelet[2642]: I0213 15:27:20.079235    2642 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32a2baea-ed1d-479d-abd9-c184b922d464-clustermesh-secrets\") pod \"cilium-vbq4t\" (UID: \"32a2baea-ed1d-479d-abd9-c184b922d464\") " pod="kube-system/cilium-vbq4t"
Feb 13 15:27:20.091308 sshd[4458]: Connection closed by 10.0.0.1 port 34034
Feb 13 15:27:20.092028 sshd-session[4456]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:20.106144 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:34034.service: Deactivated successfully.
Feb 13 15:27:20.107923 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:27:20.109501 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:27:20.111077 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:34036.service - OpenSSH per-connection server daemon (10.0.0.1:34036).
Feb 13 15:27:20.114309 systemd-logind[1451]: Removed session 24.
Feb 13 15:27:20.157786 sshd[4464]: Accepted publickey for core from 10.0.0.1 port 34036 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:27:20.159927 sshd-session[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:20.165119 systemd-logind[1451]: New session 25 of user core.
Feb 13 15:27:20.173511 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:27:20.290260 kubelet[2642]: E0213 15:27:20.289925    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:20.290515 containerd[1471]: time="2025-02-13T15:27:20.290478111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vbq4t,Uid:32a2baea-ed1d-479d-abd9-c184b922d464,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:20.316370 containerd[1471]: time="2025-02-13T15:27:20.316207775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:20.316370 containerd[1471]: time="2025-02-13T15:27:20.316278412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:20.316370 containerd[1471]: time="2025-02-13T15:27:20.316337929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:20.316636 containerd[1471]: time="2025-02-13T15:27:20.316427845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:20.333489 systemd[1]: Started cri-containerd-e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74.scope - libcontainer container e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74.
Feb 13 15:27:20.368024 containerd[1471]: time="2025-02-13T15:27:20.367983250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vbq4t,Uid:32a2baea-ed1d-479d-abd9-c184b922d464,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\""
Feb 13 15:27:20.369128 kubelet[2642]: E0213 15:27:20.369100    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:20.371720 containerd[1471]: time="2025-02-13T15:27:20.371682035Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:27:20.382439 containerd[1471]: time="2025-02-13T15:27:20.382389010Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bac74d1bd97dd61bbf696fa024db0dd1df78ad6cfa6e161f240bbe412e32c94e\""
Feb 13 15:27:20.383215 containerd[1471]: time="2025-02-13T15:27:20.383192972Z" level=info msg="StartContainer for \"bac74d1bd97dd61bbf696fa024db0dd1df78ad6cfa6e161f240bbe412e32c94e\""
Feb 13 15:27:20.410494 systemd[1]: Started cri-containerd-bac74d1bd97dd61bbf696fa024db0dd1df78ad6cfa6e161f240bbe412e32c94e.scope - libcontainer container bac74d1bd97dd61bbf696fa024db0dd1df78ad6cfa6e161f240bbe412e32c94e.
Feb 13 15:27:20.432898 containerd[1471]: time="2025-02-13T15:27:20.432771150Z" level=info msg="StartContainer for \"bac74d1bd97dd61bbf696fa024db0dd1df78ad6cfa6e161f240bbe412e32c94e\" returns successfully"
Feb 13 15:27:20.459950 systemd[1]: cri-containerd-bac74d1bd97dd61bbf696fa024db0dd1df78ad6cfa6e161f240bbe412e32c94e.scope: Deactivated successfully.
Feb 13 15:27:20.488584 containerd[1471]: time="2025-02-13T15:27:20.488503398Z" level=info msg="shim disconnected" id=bac74d1bd97dd61bbf696fa024db0dd1df78ad6cfa6e161f240bbe412e32c94e namespace=k8s.io
Feb 13 15:27:20.488584 containerd[1471]: time="2025-02-13T15:27:20.488578194Z" level=warning msg="cleaning up after shim disconnected" id=bac74d1bd97dd61bbf696fa024db0dd1df78ad6cfa6e161f240bbe412e32c94e namespace=k8s.io
Feb 13 15:27:20.488584 containerd[1471]: time="2025-02-13T15:27:20.488587194Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:21.036551 kubelet[2642]: E0213 15:27:21.036518    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:21.043699 containerd[1471]: time="2025-02-13T15:27:21.043590705Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:27:21.054576 containerd[1471]: time="2025-02-13T15:27:21.054514862Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"04aacbd9873626d931930dc5eb0e51df672c444b9d6e509329e8b905e235e2df\""
Feb 13 15:27:21.055039 containerd[1471]: time="2025-02-13T15:27:21.055002560Z" level=info msg="StartContainer for \"04aacbd9873626d931930dc5eb0e51df672c444b9d6e509329e8b905e235e2df\""
Feb 13 15:27:21.087524 systemd[1]: Started cri-containerd-04aacbd9873626d931930dc5eb0e51df672c444b9d6e509329e8b905e235e2df.scope - libcontainer container 04aacbd9873626d931930dc5eb0e51df672c444b9d6e509329e8b905e235e2df.
Feb 13 15:27:21.110459 containerd[1471]: time="2025-02-13T15:27:21.110410068Z" level=info msg="StartContainer for \"04aacbd9873626d931930dc5eb0e51df672c444b9d6e509329e8b905e235e2df\" returns successfully"
Feb 13 15:27:21.120527 systemd[1]: cri-containerd-04aacbd9873626d931930dc5eb0e51df672c444b9d6e509329e8b905e235e2df.scope: Deactivated successfully.
Feb 13 15:27:21.152953 containerd[1471]: time="2025-02-13T15:27:21.152697277Z" level=info msg="shim disconnected" id=04aacbd9873626d931930dc5eb0e51df672c444b9d6e509329e8b905e235e2df namespace=k8s.io
Feb 13 15:27:21.152953 containerd[1471]: time="2025-02-13T15:27:21.152761194Z" level=warning msg="cleaning up after shim disconnected" id=04aacbd9873626d931930dc5eb0e51df672c444b9d6e509329e8b905e235e2df namespace=k8s.io
Feb 13 15:27:21.152953 containerd[1471]: time="2025-02-13T15:27:21.152770994Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:22.041202 kubelet[2642]: E0213 15:27:22.041146    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:22.045067 containerd[1471]: time="2025-02-13T15:27:22.045014318Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:27:22.071904 containerd[1471]: time="2025-02-13T15:27:22.071835768Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a\""
Feb 13 15:27:22.074075 containerd[1471]: time="2025-02-13T15:27:22.072586617Z" level=info msg="StartContainer for \"d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a\""
Feb 13 15:27:22.115480 systemd[1]: Started cri-containerd-d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a.scope - libcontainer container d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a.
Feb 13 15:27:22.141996 containerd[1471]: time="2025-02-13T15:27:22.141946988Z" level=info msg="StartContainer for \"d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a\" returns successfully"
Feb 13 15:27:22.144280 systemd[1]: cri-containerd-d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a.scope: Deactivated successfully.
Feb 13 15:27:22.170577 containerd[1471]: time="2025-02-13T15:27:22.170513367Z" level=info msg="shim disconnected" id=d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a namespace=k8s.io
Feb 13 15:27:22.170577 containerd[1471]: time="2025-02-13T15:27:22.170571404Z" level=warning msg="cleaning up after shim disconnected" id=d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a namespace=k8s.io
Feb 13 15:27:22.170577 containerd[1471]: time="2025-02-13T15:27:22.170583444Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:22.183746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5f2f0653c45ae311eddc33ee4244c411d3adb562825089b0dda4016ffc7234a-rootfs.mount: Deactivated successfully.
Feb 13 15:27:22.896619 kubelet[2642]: E0213 15:27:22.896572    2642 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:27:23.045738 kubelet[2642]: E0213 15:27:23.045271    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:23.049206 containerd[1471]: time="2025-02-13T15:27:23.048087878Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:27:23.071916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3112236962.mount: Deactivated successfully.
Feb 13 15:27:23.074639 containerd[1471]: time="2025-02-13T15:27:23.074496819Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8\""
Feb 13 15:27:23.074977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1521884107.mount: Deactivated successfully.
Feb 13 15:27:23.075111 containerd[1471]: time="2025-02-13T15:27:23.075042478Z" level=info msg="StartContainer for \"7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8\""
Feb 13 15:27:23.111539 systemd[1]: Started cri-containerd-7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8.scope - libcontainer container 7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8.
Feb 13 15:27:23.134396 systemd[1]: cri-containerd-7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8.scope: Deactivated successfully.
Feb 13 15:27:23.136179 containerd[1471]: time="2025-02-13T15:27:23.136142002Z" level=info msg="StartContainer for \"7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8\" returns successfully"
Feb 13 15:27:23.161959 containerd[1471]: time="2025-02-13T15:27:23.161542982Z" level=info msg="shim disconnected" id=7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8 namespace=k8s.io
Feb 13 15:27:23.162320 containerd[1471]: time="2025-02-13T15:27:23.162129280Z" level=warning msg="cleaning up after shim disconnected" id=7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8 namespace=k8s.io
Feb 13 15:27:23.162320 containerd[1471]: time="2025-02-13T15:27:23.162149399Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:23.183774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7171ae42ce09d227a1ccd36913ae7c37d15409d401ef4abd0b8b342b4855dce8-rootfs.mount: Deactivated successfully.
Feb 13 15:27:24.049926 kubelet[2642]: E0213 15:27:24.049893    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:24.053643 containerd[1471]: time="2025-02-13T15:27:24.053590201Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:27:24.067495 containerd[1471]: time="2025-02-13T15:27:24.067256671Z" level=info msg="CreateContainer within sandbox \"e8bd9af744800fd866f17ddb5aa9fabe425ac5dadf4afe0e4f0e6c00bee5df74\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e7ddb243aeecd7dc8e0fd0505e66d55bf8bf931192769b49c9d808851a9d282\""
Feb 13 15:27:24.068385 containerd[1471]: time="2025-02-13T15:27:24.068329912Z" level=info msg="StartContainer for \"2e7ddb243aeecd7dc8e0fd0505e66d55bf8bf931192769b49c9d808851a9d282\""
Feb 13 15:27:24.071028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866082929.mount: Deactivated successfully.
Feb 13 15:27:24.106551 systemd[1]: Started cri-containerd-2e7ddb243aeecd7dc8e0fd0505e66d55bf8bf931192769b49c9d808851a9d282.scope - libcontainer container 2e7ddb243aeecd7dc8e0fd0505e66d55bf8bf931192769b49c9d808851a9d282.
Feb 13 15:27:24.134413 containerd[1471]: time="2025-02-13T15:27:24.134369384Z" level=info msg="StartContainer for \"2e7ddb243aeecd7dc8e0fd0505e66d55bf8bf931192769b49c9d808851a9d282\" returns successfully"
Feb 13 15:27:24.448319 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:27:25.055817 kubelet[2642]: E0213 15:27:25.055369    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:25.071304 kubelet[2642]: I0213 15:27:25.071246    2642 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-vbq4t" podStartSLOduration=6.071202736 podStartE2EDuration="6.071202736s" podCreationTimestamp="2025-02-13 15:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:25.070967784 +0000 UTC m=+77.335719660" watchObservedRunningTime="2025-02-13 15:27:25.071202736 +0000 UTC m=+77.335954572"
Feb 13 15:27:26.292653 kubelet[2642]: E0213 15:27:26.292546    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:27.553276 systemd-networkd[1392]: lxc_health: Link UP
Feb 13 15:27:27.566608 systemd-networkd[1392]: lxc_health: Gained carrier
Feb 13 15:27:28.295319 kubelet[2642]: E0213 15:27:28.292111    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:28.774486 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Feb 13 15:27:29.065404 kubelet[2642]: E0213 15:27:29.065148    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:30.067127 kubelet[2642]: E0213 15:27:30.067089    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:32.810969 kubelet[2642]: E0213 15:27:32.810933    2642 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:32.957068 sshd[4466]: Connection closed by 10.0.0.1 port 34036
Feb 13 15:27:32.957465 sshd-session[4464]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:32.961078 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:34036.service: Deactivated successfully.
Feb 13 15:27:32.963008 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:27:32.964903 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:27:32.965911 systemd-logind[1451]: Removed session 25.