Feb 13 15:25:56.142161 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:25:56.142220 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:25:56.142240 kernel: BIOS-provided physical RAM map:
Feb 13 15:25:56.142254 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 15:25:56.142266 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 15:25:56.142281 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 15:25:56.142297 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 15:25:56.142316 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 15:25:56.142331 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd327fff] usable
Feb 13 15:25:56.142345 kernel: BIOS-e820: [mem 0x00000000bd328000-0x00000000bd330fff] ACPI data
Feb 13 15:25:56.142359 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable
Feb 13 15:25:56.142374 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 13 15:25:56.142387 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 15:25:56.142402 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 15:25:56.142423 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 15:25:56.142439 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 15:25:56.142455 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 15:25:56.142470 kernel: NX (Execute Disable) protection: active
Feb 13 15:25:56.142483 kernel: APIC: Static calls initialized
Feb 13 15:25:56.142497 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:25:56.142514 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd328018 
Feb 13 15:25:56.142529 kernel: random: crng init done
Feb 13 15:25:56.142546 kernel: secureboot: Secure boot disabled
Feb 13 15:25:56.142565 kernel: SMBIOS 2.4 present.
Feb 13 15:25:56.142607 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 15:25:56.142621 kernel: Hypervisor detected: KVM
Feb 13 15:25:56.142635 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:25:56.142647 kernel: kvm-clock: using sched offset of 13742194118 cycles
Feb 13 15:25:56.142664 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:25:56.142681 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 15:25:56.142696 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:25:56.142716 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:25:56.142731 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 15:25:56.142750 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 15:25:56.142765 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 13 15:25:56.142779 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 15:25:56.142795 kernel: Using GB pages for direct mapping
Feb 13 15:25:56.142812 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:25:56.142829 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 15:25:56.142847 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001      01000013)
Feb 13 15:25:56.142889 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 15:25:56.142923 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 15:25:56.142941 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 15:25:56.142960 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 15:25:56.142986 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE          00000001 GOOG 00000001)
Feb 13 15:25:56.143005 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 15:25:56.143024 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 15:25:56.143046 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 15:25:56.143063 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 15:25:56.143079 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 15:25:56.143097 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 15:25:56.143114 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 15:25:56.143132 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 15:25:56.143149 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 15:25:56.143166 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 15:25:56.143182 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 15:25:56.143205 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 15:25:56.143222 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 15:25:56.143238 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:25:56.143256 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:25:56.143273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 15:25:56.143290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 15:25:56.143308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 15:25:56.143324 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 15:25:56.143342 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 15:25:56.143363 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Feb 13 15:25:56.143380 kernel: Zone ranges:
Feb 13 15:25:56.143398 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:25:56.143415 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 15:25:56.143432 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 15:25:56.143450 kernel: Movable zone start for each node
Feb 13 15:25:56.143473 kernel: Early memory node ranges
Feb 13 15:25:56.143490 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 15:25:56.143508 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 15:25:56.143524 kernel:   node   0: [mem 0x0000000000100000-0x00000000bd327fff]
Feb 13 15:25:56.143546 kernel:   node   0: [mem 0x00000000bd331000-0x00000000bf8ecfff]
Feb 13 15:25:56.143563 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 15:25:56.143580 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 15:25:56.143598 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 15:25:56.143614 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:25:56.143631 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 15:25:56.143648 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 15:25:56.143665 kernel: On node 0, zone DMA32: 9 pages in unavailable ranges
Feb 13 15:25:56.143683 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:25:56.143705 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 15:25:56.143721 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:25:56.143739 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:25:56.143756 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:25:56.143773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:25:56.143808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:25:56.143825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:25:56.143843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:25:56.143860 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:25:56.143896 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:25:56.143914 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 15:25:56.143930 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:25:56.143948 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:25:56.143965 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:25:56.143990 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:25:56.144008 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:25:56.144024 kernel: pcpu-alloc: [0] 0 1 
Feb 13 15:25:56.144041 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:25:56.144063 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:25:56.144082 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:25:56.144098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:25:56.144114 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 15:25:56.144148 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:25:56.144163 kernel: Fallback order for Node 0: 0 
Feb 13 15:25:56.144179 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1932271
Feb 13 15:25:56.144197 kernel: Policy zone: Normal
Feb 13 15:25:56.144225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:25:56.144241 kernel: software IO TLB: area num 2.
Feb 13 15:25:56.144257 kernel: Memory: 7513364K/7860548K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 346928K reserved, 0K cma-reserved)
Feb 13 15:25:56.144275 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:25:56.144294 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:25:56.144312 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:25:56.144331 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:25:56.144350 kernel: Dynamic Preempt: voluntary
Feb 13 15:25:56.144390 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:25:56.144411 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:25:56.144431 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:25:56.144451 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:25:56.144475 kernel:         Rude variant of Tasks RCU enabled.
Feb 13 15:25:56.144494 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:25:56.144514 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:25:56.144533 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:25:56.144553 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:25:56.144577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:25:56.144596 kernel: Console: colour dummy device 80x25
Feb 13 15:25:56.144616 kernel: printk: console [ttyS0] enabled
Feb 13 15:25:56.144644 kernel: ACPI: Core revision 20230628
Feb 13 15:25:56.144663 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:25:56.144683 kernel: x2apic enabled
Feb 13 15:25:56.144703 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:25:56.144722 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 15:25:56.144742 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:25:56.144767 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 15:25:56.144787 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 15:25:56.144807 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 15:25:56.144826 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:25:56.144846 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 15:25:56.144865 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 15:25:56.144900 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 15:25:56.144920 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:25:56.144944 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:25:56.144964 kernel: RETBleed: Mitigation: IBRS
Feb 13 15:25:56.144992 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:25:56.145012 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 15:25:56.145031 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:25:56.145051 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 15:25:56.145070 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:25:56.145090 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:25:56.145109 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:25:56.145133 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:25:56.145153 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 13 15:25:56.145173 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 15:25:56.145192 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:25:56.145211 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:25:56.145230 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:25:56.145250 kernel: landlock: Up and running.
Feb 13 15:25:56.145269 kernel: SELinux:  Initializing.
Feb 13 15:25:56.145289 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:25:56.145313 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:25:56.145333 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 15:25:56.145352 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:25:56.145372 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:25:56.145392 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:25:56.145412 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 15:25:56.145431 kernel: signal: max sigframe size: 1776
Feb 13 15:25:56.145451 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:25:56.145471 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:25:56.145494 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:25:56.145514 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:25:56.145533 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:25:56.145553 kernel: .... node  #0, CPUs:      #1
Feb 13 15:25:56.145573 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:25:56.145594 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:25:56.145613 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:25:56.145632 kernel: smpboot: Max logical packages: 1
Feb 13 15:25:56.145656 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 15:25:56.145676 kernel: devtmpfs: initialized
Feb 13 15:25:56.145695 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:25:56.145715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 15:25:56.145735 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:25:56.145755 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:25:56.145774 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:25:56.145794 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:25:56.145813 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:25:56.145837 kernel: audit: type=2000 audit(1739460354.331:1): state=initialized audit_enabled=0 res=1
Feb 13 15:25:56.145857 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:25:56.145888 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:25:56.145920 kernel: cpuidle: using governor menu
Feb 13 15:25:56.145939 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:25:56.145959 kernel: dca service started, version 1.12.1
Feb 13 15:25:56.145985 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:25:56.146004 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:25:56.146024 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:25:56.146049 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:25:56.146069 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:25:56.146089 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:25:56.146108 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:25:56.146128 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:25:56.146148 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:25:56.146167 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:25:56.146187 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:25:56.146207 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:25:56.146230 kernel: ACPI: Interpreter enabled
Feb 13 15:25:56.146249 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:25:56.146269 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:25:56.146288 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:25:56.146307 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 15:25:56.146327 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:25:56.146346 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:25:56.146687 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:25:56.146919 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:25:56.147453 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:25:56.147483 kernel: PCI host bridge to bus 0000:00
Feb 13 15:25:56.147668 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 13 15:25:56.147834 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 13 15:25:56.148033 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:25:56.148197 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 15:25:56.148370 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:25:56.148585 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:25:56.149985 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 15:25:56.150399 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 15:25:56.150603 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:25:56.150813 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 15:25:56.151049 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc07f]
Feb 13 15:25:56.151227 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 15:25:56.151442 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:25:56.151795 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc03f]
Feb 13 15:25:56.152038 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 15:25:56.152240 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:25:56.152428 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Feb 13 15:25:56.152625 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 15:25:56.152650 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:25:56.152671 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:25:56.152691 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:25:56.152710 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:25:56.152730 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:25:56.152750 kernel: iommu: Default domain type: Translated
Feb 13 15:25:56.152769 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:25:56.152788 kernel: efivars: Registered efivars operations
Feb 13 15:25:56.152814 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:25:56.152833 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:25:56.152853 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 15:25:56.152871 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 15:25:56.153055 kernel: e820: reserve RAM buffer [mem 0xbd328000-0xbfffffff]
Feb 13 15:25:56.153074 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 15:25:56.153094 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 15:25:56.153112 kernel: vgaarb: loaded
Feb 13 15:25:56.153131 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:25:56.153157 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:25:56.153177 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:25:56.153197 kernel: pnp: PnP ACPI init
Feb 13 15:25:56.153217 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 15:25:56.153237 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:25:56.153257 kernel: NET: Registered PF_INET protocol family
Feb 13 15:25:56.153277 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:25:56.153297 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 15:25:56.153316 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:25:56.153341 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:25:56.153361 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 15:25:56.153380 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 15:25:56.153399 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:25:56.153418 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:25:56.153437 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:25:56.153462 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:25:56.153661 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 13 15:25:56.153836 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 13 15:25:56.154113 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:25:56.154429 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 15:25:56.154643 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:25:56.154670 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:25:56.154690 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 15:25:56.154710 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 15:25:56.154737 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:25:56.154758 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:25:56.154777 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:25:56.154796 kernel: Initialise system trusted keyrings
Feb 13 15:25:56.154816 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 15:25:56.154835 kernel: Key type asymmetric registered
Feb 13 15:25:56.154853 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:25:56.154887 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:25:56.154919 kernel: io scheduler mq-deadline registered
Feb 13 15:25:56.154943 kernel: io scheduler kyber registered
Feb 13 15:25:56.154962 kernel: io scheduler bfq registered
Feb 13 15:25:56.154991 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:25:56.155012 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 15:25:56.155224 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 15:25:56.155251 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 15:25:56.155445 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 15:25:56.155471 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 15:25:56.155663 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 15:25:56.155691 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:25:56.155708 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:25:56.155726 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 15:25:56.155744 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 15:25:56.155760 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 15:25:56.156030 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 15:25:56.156061 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:25:56.156081 kernel: i8042: Warning: Keylock active
Feb 13 15:25:56.156106 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:25:56.156126 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:25:56.156316 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:25:56.156489 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:25:56.156661 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:25:55 UTC (1739460355)
Feb 13 15:25:56.156833 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:25:56.156859 kernel: intel_pstate: CPU model not supported
Feb 13 15:25:56.166016 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:25:56.166076 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:25:56.166098 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:25:56.166118 kernel: Segment Routing with IPv6
Feb 13 15:25:56.166138 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:25:56.166159 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:25:56.166179 kernel: Key type dns_resolver registered
Feb 13 15:25:56.166199 kernel: IPI shorthand broadcast: enabled
Feb 13 15:25:56.166219 kernel: sched_clock: Marking stable (1016004099, 168671150)->(1244751286, -60076037)
Feb 13 15:25:56.166238 kernel: registered taskstats version 1
Feb 13 15:25:56.166263 kernel: Loading compiled-in X.509 certificates
Feb 13 15:25:56.166284 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:25:56.166304 kernel: Key type .fscrypt registered
Feb 13 15:25:56.166323 kernel: Key type fscrypt-provisioning registered
Feb 13 15:25:56.166343 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:25:56.166362 kernel: ima: No architecture policies found
Feb 13 15:25:56.166382 kernel: clk: Disabling unused clocks
Feb 13 15:25:56.166401 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:25:56.166422 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:25:56.166447 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:25:56.166468 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:25:56.166488 kernel: Run /init as init process
Feb 13 15:25:56.166509 kernel:   with arguments:
Feb 13 15:25:56.166529 kernel:     /init
Feb 13 15:25:56.166549 kernel:   with environment:
Feb 13 15:25:56.166568 kernel:     HOME=/
Feb 13 15:25:56.166585 kernel:     TERM=linux
Feb 13 15:25:56.166605 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:25:56.166638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:25:56.166663 systemd[1]: Detected virtualization google.
Feb 13 15:25:56.166684 systemd[1]: Detected architecture x86-64.
Feb 13 15:25:56.166704 systemd[1]: Running in initrd.
Feb 13 15:25:56.166723 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:25:56.166742 systemd[1]: Hostname set to <localhost>.
Feb 13 15:25:56.166764 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:25:56.166792 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:25:56.166813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:25:56.166834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:25:56.166857 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:25:56.166913 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:25:56.166936 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:25:56.166957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:25:56.166994 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:25:56.167037 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:25:56.167063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:25:56.167085 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:25:56.167107 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:25:56.167132 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:25:56.167154 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:25:56.167176 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:25:56.167198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:25:56.167219 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:25:56.167240 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:25:56.167262 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:25:56.167289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:25:56.167311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:25:56.167337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:25:56.167359 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:25:56.167379 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:25:56.167401 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:25:56.167423 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:25:56.167445 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:25:56.167467 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:25:56.167488 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:25:56.167511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:56.167605 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 15:25:56.167654 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:25:56.167676 systemd-journald[184]: Journal started
Feb 13 15:25:56.167726 systemd-journald[184]: Runtime Journal (/run/log/journal/6514cbd2b27d457bbfea50ce367cd6d0) is 8.0M, max 148.7M, 140.7M free.
Feb 13 15:25:56.140745 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 15:25:56.196915 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:25:56.203557 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 15:25:56.204918 kernel: Bridge firewalling registered
Feb 13 15:25:56.214264 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:25:56.214403 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:25:56.215293 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:25:56.215638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:25:56.224188 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:25:56.296168 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:25:56.327828 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:25:56.339756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:56.351735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:25:56.372693 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:25:56.393648 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:25:56.421301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:56.463600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:25:56.467194 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:25:56.499089 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:56.510522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:25:56.527197 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:25:56.527310 systemd-resolved[206]: Positive Trust Anchors:
Feb 13 15:25:56.527324 systemd-resolved[206]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:25:56.527395 systemd-resolved[206]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:25:56.533222 systemd-resolved[206]: Defaulting to hostname 'linux'.
Feb 13 15:25:56.545432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:25:56.556736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:25:56.682120 dracut-cmdline[219]: dracut-dracut-053
Feb 13 15:25:56.682120 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:25:56.779956 kernel: SCSI subsystem initialized
Feb 13 15:25:56.795928 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:25:56.813989 kernel: iscsi: registered transport (tcp)
Feb 13 15:25:56.845267 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:25:56.845390 kernel: QLogic iSCSI HBA Driver
Feb 13 15:25:56.901651 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:25:56.907220 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:25:56.986835 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:25:56.986971 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:25:56.987051 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:25:57.044914 kernel: raid6: avx2x4   gen() 17928 MB/s
Feb 13 15:25:57.065913 kernel: raid6: avx2x2   gen() 17912 MB/s
Feb 13 15:25:57.091904 kernel: raid6: avx2x1   gen() 13564 MB/s
Feb 13 15:25:57.091966 kernel: raid6: using algorithm avx2x4 gen() 17928 MB/s
Feb 13 15:25:57.118917 kernel: raid6: .... xor() 6978 MB/s, rmw enabled
Feb 13 15:25:57.119005 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:25:57.147914 kernel: xor: automatically using best checksumming function   avx       
Feb 13 15:25:57.326910 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:25:57.340451 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:25:57.356167 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:25:57.406995 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Feb 13 15:25:57.414347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:25:57.445153 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:25:57.484631 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Feb 13 15:25:57.524060 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:25:57.548114 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:25:57.665719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:25:57.702572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:25:57.753161 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:25:57.765937 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:25:57.784333 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:25:57.795044 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:25:57.810105 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:25:57.846328 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:25:57.872040 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:25:57.872089 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:25:57.872116 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:25:57.880965 kernel: scsi 0:0:1:0: Direct-Access     Google   PersistentDisk   1    PQ: 0 ANSI: 6
Feb 13 15:25:57.923772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:25:57.924609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:58.024274 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 15:25:58.038403 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:58.045239 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 15:25:58.045530 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 15:25:58.045786 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 15:25:58.046061 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:25:58.046823 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:25:58.046857 kernel: GPT:17805311 != 25165823
Feb 13 15:25:58.046916 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:25:58.046944 kernel: GPT:17805311 != 25165823
Feb 13 15:25:58.046968 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:25:58.047001 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:25:58.047027 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 15:25:58.054037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:25:58.054350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:58.054714 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:58.130532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:58.132089 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (450)
Feb 13 15:25:58.132140 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (445)
Feb 13 15:25:58.144030 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:25:58.190260 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 15:25:58.190804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:58.235242 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 15:25:58.266016 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 15:25:58.266354 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 15:25:58.297595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:25:58.330375 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:25:58.351243 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:58.369793 disk-uuid[543]: Primary Header is updated.
Feb 13 15:25:58.369793 disk-uuid[543]: Secondary Entries is updated.
Feb 13 15:25:58.369793 disk-uuid[543]: Secondary Header is updated.
Feb 13 15:25:58.400297 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:25:58.415923 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:25:58.430731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:59.433266 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:25:59.433378 disk-uuid[544]: The operation has completed successfully.
Feb 13 15:25:59.520247 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:25:59.520416 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:25:59.545216 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:25:59.581196 sh[566]: Success
Feb 13 15:25:59.608208 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:25:59.712903 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:25:59.721704 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:25:59.748717 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:25:59.796712 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:25:59.796855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:25:59.796902 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:25:59.806167 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:25:59.818802 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:25:59.844948 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:25:59.855008 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:25:59.856211 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:25:59.861159 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:25:59.910183 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:25:59.965083 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:25:59.965142 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:25:59.965167 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:25:59.965189 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:25:59.965213 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:25:59.976773 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:25:59.995145 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:00.003373 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:26:00.031264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:26:00.119755 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:26:00.151190 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:26:00.236220 ignition[677]: Ignition 2.20.0
Feb 13 15:26:00.236241 ignition[677]: Stage: fetch-offline
Feb 13 15:26:00.236320 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:00.236336 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:00.237233 ignition[677]: parsed url from cmdline: ""
Feb 13 15:26:00.237243 ignition[677]: no config URL provided
Feb 13 15:26:00.237257 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:26:00.237276 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:26:00.237292 ignition[677]: failed to fetch config: resource requires networking
Feb 13 15:26:00.237649 ignition[677]: Ignition finished successfully
Feb 13 15:26:00.240132 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:26:00.246177 systemd-networkd[749]: lo: Link UP
Feb 13 15:26:00.246184 systemd-networkd[749]: lo: Gained carrier
Feb 13 15:26:00.248285 systemd-networkd[749]: Enumeration completed
Feb 13 15:26:00.248945 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:00.248953 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:26:00.252111 systemd-networkd[749]: eth0: Link UP
Feb 13 15:26:00.252118 systemd-networkd[749]: eth0: Gained carrier
Feb 13 15:26:00.252136 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:00.263495 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:26:00.270032 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:26:00.290461 systemd[1]: Reached target network.target - Network.
Feb 13 15:26:00.313217 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:26:00.337306 ignition[758]: Ignition 2.20.0
Feb 13 15:26:00.337317 ignition[758]: Stage: fetch
Feb 13 15:26:00.337532 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:00.337543 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:00.337668 ignition[758]: parsed url from cmdline: ""
Feb 13 15:26:00.337676 ignition[758]: no config URL provided
Feb 13 15:26:00.337683 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:26:00.337693 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:26:00.337723 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 15:26:00.343130 ignition[758]: GET result: OK
Feb 13 15:26:00.343239 ignition[758]: parsing config with SHA512: 9d4ab887cabd97aa27b724f84dc65be09104071117e316b5b821f11b6854a69f5b54845ef43b816b46412ee195f075821fa5a2b0d5301484f40c750636530bc2
Feb 13 15:26:00.352277 unknown[758]: fetched base config from "system"
Feb 13 15:26:00.352292 unknown[758]: fetched base config from "system"
Feb 13 15:26:00.352309 unknown[758]: fetched user config from "gcp"
Feb 13 15:26:00.354592 ignition[758]: fetch: fetch complete
Feb 13 15:26:00.354604 ignition[758]: fetch: fetch passed
Feb 13 15:26:00.354699 ignition[758]: Ignition finished successfully
Feb 13 15:26:00.357494 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:26:00.384941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:26:00.423632 ignition[764]: Ignition 2.20.0
Feb 13 15:26:00.423643 ignition[764]: Stage: kargs
Feb 13 15:26:00.423897 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:00.423918 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:00.425200 ignition[764]: kargs: kargs passed
Feb 13 15:26:00.425278 ignition[764]: Ignition finished successfully
Feb 13 15:26:00.426780 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:26:00.449210 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:26:00.497221 ignition[770]: Ignition 2.20.0
Feb 13 15:26:00.497232 ignition[770]: Stage: disks
Feb 13 15:26:00.497459 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:00.497471 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:00.498647 ignition[770]: disks: disks passed
Feb 13 15:26:00.498743 ignition[770]: Ignition finished successfully
Feb 13 15:26:00.499994 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:26:00.511409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:26:00.528190 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:26:00.550145 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:26:00.566163 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:26:00.582137 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:26:00.606206 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:26:00.666325 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:26:00.849018 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:26:00.855221 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:26:00.998371 kernel: EXT4-fs (sda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:26:00.999472 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:26:01.000566 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:26:01.035349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:26:01.050061 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:26:01.070517 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:26:01.070635 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:26:01.070697 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:26:01.126357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:26:01.155302 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (786)
Feb 13 15:26:01.155361 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:01.155378 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:26:01.155393 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:26:01.155408 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:26:01.155424 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:26:01.165098 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:26:01.187234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:26:01.331471 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:26:01.342115 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:26:01.353085 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:26:01.365096 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:26:01.521811 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:26:01.527067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:26:01.557291 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:26:01.577969 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:01.585548 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:26:01.604738 systemd-networkd[749]: eth0: Gained IPv6LL
Feb 13 15:26:01.633390 ignition[898]: INFO     : Ignition 2.20.0
Feb 13 15:26:01.633390 ignition[898]: INFO     : Stage: mount
Feb 13 15:26:01.635151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:26:01.641290 ignition[898]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:01.641290 ignition[898]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:01.641290 ignition[898]: INFO     : mount: mount passed
Feb 13 15:26:01.641290 ignition[898]: INFO     : Ignition finished successfully
Feb 13 15:26:01.655689 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:26:01.682140 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:26:02.007347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:26:02.058937 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (910)
Feb 13 15:26:02.069925 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:02.070049 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:26:02.082848 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:26:02.100921 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:26:02.101029 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:26:02.104678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:26:02.140308 ignition[927]: INFO     : Ignition 2.20.0
Feb 13 15:26:02.140308 ignition[927]: INFO     : Stage: files
Feb 13 15:26:02.154950 unknown[927]: wrote ssh authorized keys file for user: core
Feb 13 15:26:02.155053 ignition[927]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:02.155053 ignition[927]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:02.155053 ignition[927]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:26:02.155053 ignition[927]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:26:02.155053 ignition[927]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:26:02.155053 ignition[927]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:26:02.155053 ignition[927]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:26:02.155053 ignition[927]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:26:02.257079 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:26:02.257079 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:26:02.257079 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:26:02.257079 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:26:02.324081 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:26:02.438280 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:26:02.455055 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:26:02.455055 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 15:26:02.749577 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 15:26:02.918632 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:26:03.169129 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Feb 13 15:26:04.026165 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:04.026165 ignition[927]: INFO     : files: op(d): [started]  processing unit "containerd.service"
Feb 13 15:26:04.032554 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(d): op(e): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(d): [finished] processing unit "containerd.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(f): [started]  processing unit "prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(f): op(10): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(f): [finished] processing unit "prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: files passed
Feb 13 15:26:04.045309 ignition[927]: INFO     : Ignition finished successfully
Feb 13 15:26:04.070410 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:26:04.118200 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:26:04.123469 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:26:04.343205 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:04.343205 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:04.123620 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:26:04.400235 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:04.172673 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:26:04.200591 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:26:04.228277 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:26:04.341547 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:26:04.341704 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:26:04.354555 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:26:04.390225 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:26:04.410426 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:26:04.417382 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:26:04.470281 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:26:04.488210 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:26:04.562057 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:26:04.579456 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:26:04.601542 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:26:04.611578 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:26:04.611804 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:26:04.646597 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:26:04.665542 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:26:04.693521 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:26:04.702599 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:26:04.731518 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:26:04.760389 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:26:04.777335 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:26:04.777818 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:26:04.798704 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:26:04.825528 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:26:04.843494 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:26:04.843782 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:26:04.877492 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:26:04.888429 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:26:04.909448 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:26:04.909685 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:26:04.931504 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:26:04.931742 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:26:04.963538 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:26:04.963822 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:26:04.983482 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:26:04.983702 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:26:05.010468 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:26:05.022126 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:26:05.105131 ignition[980]: INFO     : Ignition 2.20.0
Feb 13 15:26:05.105131 ignition[980]: INFO     : Stage: umount
Feb 13 15:26:05.105131 ignition[980]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:05.105131 ignition[980]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:05.105131 ignition[980]: INFO     : umount: umount passed
Feb 13 15:26:05.105131 ignition[980]: INFO     : Ignition finished successfully
Feb 13 15:26:05.022539 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:26:05.058544 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:26:05.083122 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:26:05.083609 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:26:05.095733 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:26:05.096087 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:26:05.175896 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:26:05.177223 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:26:05.177363 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:26:05.191152 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:26:05.191292 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:26:05.213051 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:26:05.213237 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:26:05.233707 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:26:05.233794 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:26:05.251373 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:26:05.251475 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:26:05.271335 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:26:05.271429 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:26:05.289410 systemd[1]: Stopped target network.target - Network.
Feb 13 15:26:05.305317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:26:05.305442 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:26:05.313540 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:26:05.331418 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:26:05.336120 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:26:05.357163 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:26:05.375291 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:26:05.396393 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:26:05.396479 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:26:05.415345 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:26:05.415434 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:26:05.424461 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:26:05.424562 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:26:05.451416 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:26:05.451521 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:26:05.470400 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:26:05.470507 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:26:05.496661 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:26:05.501013 systemd-networkd[749]: eth0: DHCPv6 lease lost
Feb 13 15:26:05.515446 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:26:05.535781 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:26:05.535974 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:26:05.557554 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:26:05.557851 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:26:05.576454 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:26:05.576519 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:26:05.612222 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:26:05.623090 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:26:05.623260 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:26:05.635189 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:26:05.635317 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:05.635490 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:26:05.635554 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:26:06.070123 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:26:05.662374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:26:05.662476 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:26:05.682542 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:26:05.706707 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:26:05.707034 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:26:05.731940 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:26:05.732099 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:26:05.751957 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:26:05.752045 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:26:05.769388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:26:05.769463 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:26:05.789315 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:26:05.789433 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:26:05.833328 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:26:05.833471 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:26:05.860433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:26:05.860553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:05.895393 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:26:05.909108 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:26:05.909307 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:26:05.921232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:26:05.921353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:05.933892 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:26:05.934105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:26:05.955133 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:26:05.980437 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:26:06.031377 systemd[1]: Switching root.
Feb 13 15:26:06.342072 systemd-journald[184]: Journal stopped
Feb 13 15:25:56.142470 kernel: NX (Execute Disable) protection: active
Feb 13 15:25:56.142483 kernel: APIC: Static calls initialized
Feb 13 15:25:56.142497 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:25:56.142514 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd328018 
Feb 13 15:25:56.142529 kernel: random: crng init done
Feb 13 15:25:56.142546 kernel: secureboot: Secure boot disabled
Feb 13 15:25:56.142565 kernel: SMBIOS 2.4 present.
Feb 13 15:25:56.142607 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 15:25:56.142621 kernel: Hypervisor detected: KVM
Feb 13 15:25:56.142635 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:25:56.142647 kernel: kvm-clock: using sched offset of 13742194118 cycles
Feb 13 15:25:56.142664 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:25:56.142681 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 15:25:56.142696 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:25:56.142716 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:25:56.142731 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 15:25:56.142750 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 15:25:56.142765 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 13 15:25:56.142779 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 15:25:56.142795 kernel: Using GB pages for direct mapping
Feb 13 15:25:56.142812 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:25:56.142829 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 15:25:56.142847 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001      01000013)
Feb 13 15:25:56.142889 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 15:25:56.142923 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 15:25:56.142941 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 15:25:56.142960 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 15:25:56.142986 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE          00000001 GOOG 00000001)
Feb 13 15:25:56.143005 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 15:25:56.143024 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 15:25:56.143046 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 15:25:56.143063 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 15:25:56.143079 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 15:25:56.143097 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 15:25:56.143114 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 15:25:56.143132 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 15:25:56.143149 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 15:25:56.143166 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 15:25:56.143182 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 15:25:56.143205 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 15:25:56.143222 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 15:25:56.143238 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:25:56.143256 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:25:56.143273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 15:25:56.143290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 15:25:56.143308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 15:25:56.143324 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 15:25:56.143342 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 15:25:56.143363 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Feb 13 15:25:56.143380 kernel: Zone ranges:
Feb 13 15:25:56.143398 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:25:56.143415 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 15:25:56.143432 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 15:25:56.143450 kernel: Movable zone start for each node
Feb 13 15:25:56.143473 kernel: Early memory node ranges
Feb 13 15:25:56.143490 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 15:25:56.143508 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 15:25:56.143524 kernel:   node   0: [mem 0x0000000000100000-0x00000000bd327fff]
Feb 13 15:25:56.143546 kernel:   node   0: [mem 0x00000000bd331000-0x00000000bf8ecfff]
Feb 13 15:25:56.143563 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 15:25:56.143580 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 15:25:56.143598 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 15:25:56.143614 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:25:56.143631 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 15:25:56.143648 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 15:25:56.143665 kernel: On node 0, zone DMA32: 9 pages in unavailable ranges
Feb 13 15:25:56.143683 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:25:56.143705 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 15:25:56.143721 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:25:56.143739 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:25:56.143756 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:25:56.143773 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:25:56.143808 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:25:56.143825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:25:56.143843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:25:56.143860 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:25:56.143896 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:25:56.143914 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 15:25:56.143930 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:25:56.143948 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:25:56.143965 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:25:56.143990 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:25:56.144008 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:25:56.144024 kernel: pcpu-alloc: [0] 0 1 
Feb 13 15:25:56.144041 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:25:56.144063 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:25:56.144082 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:25:56.144098 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:25:56.144114 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 15:25:56.144148 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:25:56.144163 kernel: Fallback order for Node 0: 0 
Feb 13 15:25:56.144179 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 1932271
Feb 13 15:25:56.144197 kernel: Policy zone: Normal
Feb 13 15:25:56.144225 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:25:56.144241 kernel: software IO TLB: area num 2.
Feb 13 15:25:56.144257 kernel: Memory: 7513364K/7860548K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 346928K reserved, 0K cma-reserved)
Feb 13 15:25:56.144275 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:25:56.144294 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:25:56.144312 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:25:56.144331 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:25:56.144350 kernel: Dynamic Preempt: voluntary
Feb 13 15:25:56.144390 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:25:56.144411 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:25:56.144431 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:25:56.144451 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:25:56.144475 kernel:         Rude variant of Tasks RCU enabled.
Feb 13 15:25:56.144494 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:25:56.144514 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:25:56.144533 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:25:56.144553 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:25:56.144577 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:25:56.144596 kernel: Console: colour dummy device 80x25
Feb 13 15:25:56.144616 kernel: printk: console [ttyS0] enabled
Feb 13 15:25:56.144644 kernel: ACPI: Core revision 20230628
Feb 13 15:25:56.144663 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:25:56.144683 kernel: x2apic enabled
Feb 13 15:25:56.144703 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:25:56.144722 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 15:25:56.144742 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:25:56.144767 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 15:25:56.144787 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 15:25:56.144807 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 15:25:56.144826 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:25:56.144846 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 15:25:56.144865 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 15:25:56.144900 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 15:25:56.144920 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:25:56.144944 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:25:56.144964 kernel: RETBleed: Mitigation: IBRS
Feb 13 15:25:56.144992 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:25:56.145012 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 15:25:56.145031 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:25:56.145051 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 15:25:56.145070 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:25:56.145090 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:25:56.145109 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:25:56.145133 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:25:56.145153 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 13 15:25:56.145173 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 15:25:56.145192 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:25:56.145211 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:25:56.145230 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:25:56.145250 kernel: landlock: Up and running.
Feb 13 15:25:56.145269 kernel: SELinux:  Initializing.
Feb 13 15:25:56.145289 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:25:56.145313 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:25:56.145333 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 15:25:56.145352 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:25:56.145372 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:25:56.145392 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:25:56.145412 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 15:25:56.145431 kernel: signal: max sigframe size: 1776
Feb 13 15:25:56.145451 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:25:56.145471 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:25:56.145494 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:25:56.145514 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:25:56.145533 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:25:56.145553 kernel: .... node  #0, CPUs:      #1
Feb 13 15:25:56.145573 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:25:56.145594 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:25:56.145613 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:25:56.145632 kernel: smpboot: Max logical packages: 1
Feb 13 15:25:56.145656 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 15:25:56.145676 kernel: devtmpfs: initialized
Feb 13 15:25:56.145695 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:25:56.145715 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 15:25:56.145735 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:25:56.145755 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:25:56.145774 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:25:56.145794 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:25:56.145813 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:25:56.145837 kernel: audit: type=2000 audit(1739460354.331:1): state=initialized audit_enabled=0 res=1
Feb 13 15:25:56.145857 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:25:56.145888 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:25:56.145920 kernel: cpuidle: using governor menu
Feb 13 15:25:56.145939 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:25:56.145959 kernel: dca service started, version 1.12.1
Feb 13 15:25:56.145985 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:25:56.146004 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:25:56.146024 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:25:56.146049 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:25:56.146069 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:25:56.146089 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:25:56.146108 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:25:56.146128 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:25:56.146148 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:25:56.146167 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:25:56.146187 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:25:56.146207 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:25:56.146230 kernel: ACPI: Interpreter enabled
Feb 13 15:25:56.146249 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:25:56.146269 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:25:56.146288 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:25:56.146307 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 15:25:56.146327 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:25:56.146346 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:25:56.146687 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:25:56.146919 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:25:56.147453 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:25:56.147483 kernel: PCI host bridge to bus 0000:00
Feb 13 15:25:56.147668 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 13 15:25:56.147834 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 13 15:25:56.148033 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:25:56.148197 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 15:25:56.148370 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:25:56.148585 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:25:56.149985 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 15:25:56.150399 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 15:25:56.150603 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:25:56.150813 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 15:25:56.151049 kernel: pci 0000:00:03.0: reg 0x10: [io  0xc040-0xc07f]
Feb 13 15:25:56.151227 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 15:25:56.151442 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:25:56.151795 kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc03f]
Feb 13 15:25:56.152038 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 15:25:56.152240 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:25:56.152428 kernel: pci 0000:00:05.0: reg 0x10: [io  0xc080-0xc09f]
Feb 13 15:25:56.152625 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 15:25:56.152650 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:25:56.152671 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:25:56.152691 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:25:56.152710 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:25:56.152730 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:25:56.152750 kernel: iommu: Default domain type: Translated
Feb 13 15:25:56.152769 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:25:56.152788 kernel: efivars: Registered efivars operations
Feb 13 15:25:56.152814 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:25:56.152833 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:25:56.152853 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 15:25:56.152871 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 15:25:56.153055 kernel: e820: reserve RAM buffer [mem 0xbd328000-0xbfffffff]
Feb 13 15:25:56.153074 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 15:25:56.153094 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 15:25:56.153112 kernel: vgaarb: loaded
Feb 13 15:25:56.153131 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:25:56.153157 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:25:56.153177 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:25:56.153197 kernel: pnp: PnP ACPI init
Feb 13 15:25:56.153217 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 15:25:56.153237 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:25:56.153257 kernel: NET: Registered PF_INET protocol family
Feb 13 15:25:56.153277 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:25:56.153297 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 15:25:56.153316 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:25:56.153341 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:25:56.153361 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 15:25:56.153380 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 15:25:56.153399 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:25:56.153418 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 15:25:56.153437 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:25:56.153462 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:25:56.153661 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 13 15:25:56.153836 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 13 15:25:56.154113 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:25:56.154429 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 15:25:56.154643 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:25:56.154670 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:25:56.154690 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 15:25:56.154710 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 15:25:56.154737 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:25:56.154758 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 15:25:56.154777 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:25:56.154796 kernel: Initialise system trusted keyrings
Feb 13 15:25:56.154816 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 15:25:56.154835 kernel: Key type asymmetric registered
Feb 13 15:25:56.154853 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:25:56.154887 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:25:56.154919 kernel: io scheduler mq-deadline registered
Feb 13 15:25:56.154943 kernel: io scheduler kyber registered
Feb 13 15:25:56.154962 kernel: io scheduler bfq registered
Feb 13 15:25:56.154991 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:25:56.155012 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 15:25:56.155224 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 15:25:56.155251 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 15:25:56.155445 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 15:25:56.155471 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 15:25:56.155663 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 15:25:56.155691 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:25:56.155708 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:25:56.155726 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 15:25:56.155744 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 15:25:56.155760 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 15:25:56.156030 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 15:25:56.156061 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:25:56.156081 kernel: i8042: Warning: Keylock active
Feb 13 15:25:56.156106 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:25:56.156126 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:25:56.156316 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:25:56.156489 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:25:56.156661 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:25:55 UTC (1739460355)
Feb 13 15:25:56.156833 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:25:56.156859 kernel: intel_pstate: CPU model not supported
Feb 13 15:25:56.166016 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:25:56.166076 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:25:56.166098 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:25:56.166118 kernel: Segment Routing with IPv6
Feb 13 15:25:56.166138 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:25:56.166159 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:25:56.166179 kernel: Key type dns_resolver registered
Feb 13 15:25:56.166199 kernel: IPI shorthand broadcast: enabled
Feb 13 15:25:56.166219 kernel: sched_clock: Marking stable (1016004099, 168671150)->(1244751286, -60076037)
Feb 13 15:25:56.166238 kernel: registered taskstats version 1
Feb 13 15:25:56.166263 kernel: Loading compiled-in X.509 certificates
Feb 13 15:25:56.166284 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:25:56.166304 kernel: Key type .fscrypt registered
Feb 13 15:25:56.166323 kernel: Key type fscrypt-provisioning registered
Feb 13 15:25:56.166343 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:25:56.166362 kernel: ima: No architecture policies found
Feb 13 15:25:56.166382 kernel: clk: Disabling unused clocks
Feb 13 15:25:56.166401 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 15:25:56.166422 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 15:25:56.166447 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:25:56.166468 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 15:25:56.166488 kernel: Run /init as init process
Feb 13 15:25:56.166509 kernel:   with arguments:
Feb 13 15:25:56.166529 kernel:     /init
Feb 13 15:25:56.166549 kernel:   with environment:
Feb 13 15:25:56.166568 kernel:     HOME=/
Feb 13 15:25:56.166585 kernel:     TERM=linux
Feb 13 15:25:56.166605 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:25:56.166638 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:25:56.166663 systemd[1]: Detected virtualization google.
Feb 13 15:25:56.166684 systemd[1]: Detected architecture x86-64.
Feb 13 15:25:56.166704 systemd[1]: Running in initrd.
Feb 13 15:25:56.166723 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:25:56.166742 systemd[1]: Hostname set to <localhost>.
Feb 13 15:25:56.166764 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:25:56.166792 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:25:56.166813 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:25:56.166834 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:25:56.166857 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:25:56.166913 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:25:56.166936 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:25:56.166957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:25:56.166994 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:25:56.167037 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:25:56.167063 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:25:56.167085 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:25:56.167107 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:25:56.167132 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:25:56.167154 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:25:56.167176 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:25:56.167198 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:25:56.167219 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:25:56.167240 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:25:56.167262 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:25:56.167289 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:25:56.167311 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:25:56.167337 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:25:56.167359 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:25:56.167379 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:25:56.167401 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:25:56.167423 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:25:56.167445 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:25:56.167467 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:25:56.167488 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:25:56.167511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:56.167605 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 15:25:56.167654 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:25:56.167676 systemd-journald[184]: Journal started
Feb 13 15:25:56.167726 systemd-journald[184]: Runtime Journal (/run/log/journal/6514cbd2b27d457bbfea50ce367cd6d0) is 8.0M, max 148.7M, 140.7M free.
Feb 13 15:25:56.140745 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 15:25:56.196915 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:25:56.203557 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 15:25:56.204918 kernel: Bridge firewalling registered
Feb 13 15:25:56.214264 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:25:56.214403 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:25:56.215293 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:25:56.215638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:25:56.224188 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:25:56.296168 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:25:56.327828 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:25:56.339756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:56.351735 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:25:56.372693 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:25:56.393648 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:25:56.421301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:56.463600 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:25:56.467194 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:25:56.499089 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:56.510522 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:25:56.527197 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:25:56.527310 systemd-resolved[206]: Positive Trust Anchors:
Feb 13 15:25:56.527324 systemd-resolved[206]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:25:56.527395 systemd-resolved[206]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:25:56.533222 systemd-resolved[206]: Defaulting to hostname 'linux'.
Feb 13 15:25:56.545432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:25:56.556736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:25:56.682120 dracut-cmdline[219]: dracut-dracut-053
Feb 13 15:25:56.682120 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:25:56.779956 kernel: SCSI subsystem initialized
Feb 13 15:25:56.795928 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:25:56.813989 kernel: iscsi: registered transport (tcp)
Feb 13 15:25:56.845267 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:25:56.845390 kernel: QLogic iSCSI HBA Driver
Feb 13 15:25:56.901651 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:25:56.907220 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:25:56.986835 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:25:56.986971 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:25:56.987051 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:25:57.044914 kernel: raid6: avx2x4   gen() 17928 MB/s
Feb 13 15:25:57.065913 kernel: raid6: avx2x2   gen() 17912 MB/s
Feb 13 15:25:57.091904 kernel: raid6: avx2x1   gen() 13564 MB/s
Feb 13 15:25:57.091966 kernel: raid6: using algorithm avx2x4 gen() 17928 MB/s
Feb 13 15:25:57.118917 kernel: raid6: .... xor() 6978 MB/s, rmw enabled
Feb 13 15:25:57.119005 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:25:57.147914 kernel: xor: automatically using best checksumming function   avx       
Feb 13 15:25:57.326910 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:25:57.340451 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:25:57.356167 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:25:57.406995 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Feb 13 15:25:57.414347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:25:57.445153 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:25:57.484631 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Feb 13 15:25:57.524060 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:25:57.548114 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:25:57.665719 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:25:57.702572 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:25:57.753161 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:25:57.795044 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:25:57.765937 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:25:57.872040 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:25:57.872089 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:25:57.872116 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:25:57.880965 kernel: scsi 0:0:1:0: Direct-Access     Google   PersistentDisk   1    PQ: 0 ANSI: 6
Feb 13 15:25:57.784333 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:25:57.810105 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:25:57.846328 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:25:57.923772 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:25:57.924609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:58.024274 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 15:25:58.045239 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 15:25:58.045530 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 15:25:58.045786 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 15:25:58.046061 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:25:58.046823 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:25:58.046857 kernel: GPT:17805311 != 25165823
Feb 13 15:25:58.046916 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:25:58.046944 kernel: GPT:17805311 != 25165823
Feb 13 15:25:58.046968 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:25:58.047001 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:25:58.047027 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 15:25:58.038403 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:58.054037 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:25:58.054350 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:58.054714 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:58.132089 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (450)
Feb 13 15:25:58.132140 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (445)
Feb 13 15:25:58.130532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:25:58.144030 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:25:58.190260 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 15:25:58.190804 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:25:58.235242 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 15:25:58.266016 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 15:25:58.266354 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 15:25:58.297595 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:25:58.330375 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:25:58.351243 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:25:58.369793 disk-uuid[543]: Primary Header is updated.
Feb 13 15:25:58.369793 disk-uuid[543]: Secondary Entries is updated.
Feb 13 15:25:58.369793 disk-uuid[543]: Secondary Header is updated.
Feb 13 15:25:58.400297 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:25:58.415923 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:25:58.430731 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:25:59.433266 kernel:  sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:25:59.433378 disk-uuid[544]: The operation has completed successfully.
Feb 13 15:25:59.520247 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:25:59.520416 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:25:59.545216 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:25:59.581196 sh[566]: Success
Feb 13 15:25:59.608208 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:25:59.712903 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:25:59.721704 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:25:59.748717 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:25:59.796712 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 15:25:59.796855 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:25:59.796902 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:25:59.806167 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:25:59.818802 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:25:59.844948 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:25:59.855008 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:25:59.856211 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:25:59.861159 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:25:59.910183 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:25:59.965083 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:25:59.965142 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:25:59.965167 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:25:59.965189 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:25:59.965213 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:25:59.976773 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:25:59.995145 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:00.003373 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:26:00.031264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:26:00.119755 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:26:00.151190 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:26:00.236220 ignition[677]: Ignition 2.20.0
Feb 13 15:26:00.236241 ignition[677]: Stage: fetch-offline
Feb 13 15:26:00.240132 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:26:00.236320 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:00.246177 systemd-networkd[749]: lo: Link UP
Feb 13 15:26:00.236336 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:00.246184 systemd-networkd[749]: lo: Gained carrier
Feb 13 15:26:00.237233 ignition[677]: parsed url from cmdline: ""
Feb 13 15:26:00.248285 systemd-networkd[749]: Enumeration completed
Feb 13 15:26:00.237243 ignition[677]: no config URL provided
Feb 13 15:26:00.248945 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:00.237257 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:26:00.248953 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:26:00.237276 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:26:00.252111 systemd-networkd[749]: eth0: Link UP
Feb 13 15:26:00.237292 ignition[677]: failed to fetch config: resource requires networking
Feb 13 15:26:00.252118 systemd-networkd[749]: eth0: Gained carrier
Feb 13 15:26:00.237649 ignition[677]: Ignition finished successfully
Feb 13 15:26:00.252136 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:00.337306 ignition[758]: Ignition 2.20.0
Feb 13 15:26:00.263495 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:26:00.337317 ignition[758]: Stage: fetch
Feb 13 15:26:00.270032 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:26:00.337532 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:00.290461 systemd[1]: Reached target network.target - Network.
Feb 13 15:26:00.337543 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:00.313217 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:26:00.337668 ignition[758]: parsed url from cmdline: ""
Feb 13 15:26:00.352277 unknown[758]: fetched base config from "system"
Feb 13 15:26:00.337676 ignition[758]: no config URL provided
Feb 13 15:26:00.352292 unknown[758]: fetched base config from "system"
Feb 13 15:26:00.337683 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:26:00.352309 unknown[758]: fetched user config from "gcp"
Feb 13 15:26:00.337693 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:26:00.357494 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:26:00.337723 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 15:26:00.384941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:26:00.343130 ignition[758]: GET result: OK
Feb 13 15:26:00.426780 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:26:00.343239 ignition[758]: parsing config with SHA512: 9d4ab887cabd97aa27b724f84dc65be09104071117e316b5b821f11b6854a69f5b54845ef43b816b46412ee195f075821fa5a2b0d5301484f40c750636530bc2
Feb 13 15:26:00.449210 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:26:00.354592 ignition[758]: fetch: fetch complete
Feb 13 15:26:00.499994 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:26:00.354604 ignition[758]: fetch: fetch passed
Feb 13 15:26:00.511409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:26:00.354699 ignition[758]: Ignition finished successfully
Feb 13 15:26:00.528190 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:26:00.423632 ignition[764]: Ignition 2.20.0
Feb 13 15:26:00.550145 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:26:00.423643 ignition[764]: Stage: kargs
Feb 13 15:26:00.566163 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:26:00.423897 ignition[764]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:00.582137 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:26:00.423918 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:00.606206 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:26:00.425200 ignition[764]: kargs: kargs passed
Feb 13 15:26:00.425278 ignition[764]: Ignition finished successfully
Feb 13 15:26:00.497221 ignition[770]: Ignition 2.20.0
Feb 13 15:26:00.497232 ignition[770]: Stage: disks
Feb 13 15:26:00.497459 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:00.497471 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:00.498647 ignition[770]: disks: disks passed
Feb 13 15:26:00.498743 ignition[770]: Ignition finished successfully
Feb 13 15:26:00.666325 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:26:00.849018 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:26:00.855221 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:26:00.998371 kernel: EXT4-fs (sda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 15:26:00.999472 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:26:01.000566 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:26:01.035349 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:26:01.050061 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:26:01.070517 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:26:01.070635 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:26:01.155302 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (786)
Feb 13 15:26:01.155361 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:01.155378 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:26:01.155393 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:26:01.155408 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:26:01.155424 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:26:01.070697 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:26:01.126357 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:26:01.165098 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:26:01.187234 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:26:01.331471 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:26:01.342115 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:26:01.353085 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:26:01.365096 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:26:01.521811 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:26:01.527067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:26:01.557291 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:26:01.577969 kernel: BTRFS info (device sda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:01.585548 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:26:01.604738 systemd-networkd[749]: eth0: Gained IPv6LL
Feb 13 15:26:01.633390 ignition[898]: INFO     : Ignition 2.20.0
Feb 13 15:26:01.633390 ignition[898]: INFO     : Stage: mount
Feb 13 15:26:01.641290 ignition[898]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:01.641290 ignition[898]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:01.641290 ignition[898]: INFO     : mount: mount passed
Feb 13 15:26:01.641290 ignition[898]: INFO     : Ignition finished successfully
Feb 13 15:26:01.635151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:26:01.655689 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:26:01.682140 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:26:02.007347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:26:02.058937 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (910)
Feb 13 15:26:02.069925 kernel: BTRFS info (device sda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 15:26:02.070049 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:26:02.082848 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:26:02.100921 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:26:02.101029 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:26:02.104678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:26:02.140308 ignition[927]: INFO     : Ignition 2.20.0
Feb 13 15:26:02.140308 ignition[927]: INFO     : Stage: files
Feb 13 15:26:02.155053 ignition[927]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:02.155053 ignition[927]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:02.155053 ignition[927]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:26:02.155053 ignition[927]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:26:02.155053 ignition[927]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:26:02.155053 ignition[927]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:26:02.155053 ignition[927]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:26:02.155053 ignition[927]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:26:02.154950 unknown[927]: wrote ssh authorized keys file for user: core
Feb 13 15:26:02.257079 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:26:02.257079 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 13 15:26:02.257079 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:26:02.257079 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:26:02.324081 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 15:26:02.438280 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:26:02.455055 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:26:02.455055 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 15:26:02.749577 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 13 15:26:02.918632 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:02.934057 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:26:03.169129 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Feb 13 15:26:04.026165 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:26:04.026165 ignition[927]: INFO     : files: op(d): [started]  processing unit "containerd.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(d): op(e): [started]  writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(d): [finished] processing unit "containerd.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(f): [started]  processing unit "prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(f): op(10): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(f): [finished] processing unit "prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:26:04.045309 ignition[927]: INFO     : files: files passed
Feb 13 15:26:04.045309 ignition[927]: INFO     : Ignition finished successfully
Feb 13 15:26:04.032554 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:26:04.070410 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:26:04.118200 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:26:04.123469 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:26:04.343205 initrd-setup-root-after-ignition[955]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:04.343205 initrd-setup-root-after-ignition[955]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:04.123620 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:26:04.400235 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:26:04.172673 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:26:04.200591 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:26:04.228277 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:26:04.341547 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:26:04.341704 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:26:04.354555 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:26:04.390225 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:26:04.410426 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:26:04.417382 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:26:04.470281 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:26:04.488210 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:26:04.562057 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:26:04.579456 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:26:04.601542 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:26:04.611578 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:26:04.611804 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:26:04.646597 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:26:04.665542 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:26:04.693521 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:26:04.702599 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:26:04.731518 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:26:04.760389 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:26:04.777335 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:26:04.777818 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:26:04.798704 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:26:04.825528 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:26:04.843494 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:26:04.843782 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:26:04.877492 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:26:04.888429 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:26:04.909448 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:26:04.909685 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:26:04.931504 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:26:04.931742 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:26:04.963538 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:26:04.963822 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:26:04.983482 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:26:04.983702 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:26:05.010468 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:26:05.022126 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:26:05.105131 ignition[980]: INFO     : Ignition 2.20.0
Feb 13 15:26:05.105131 ignition[980]: INFO     : Stage: umount
Feb 13 15:26:05.105131 ignition[980]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:26:05.105131 ignition[980]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:26:05.105131 ignition[980]: INFO     : umount: umount passed
Feb 13 15:26:05.105131 ignition[980]: INFO     : Ignition finished successfully
Feb 13 15:26:05.022539 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:26:05.058544 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:26:05.083122 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:26:05.083609 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:26:05.095733 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:26:05.096087 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:26:05.175896 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:26:05.177223 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:26:05.177363 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:26:05.191152 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:26:05.191292 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:26:05.213051 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:26:05.213237 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:26:05.233707 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:26:05.233794 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:26:05.251373 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:26:05.251475 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:26:05.271335 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:26:05.271429 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:26:05.289410 systemd[1]: Stopped target network.target - Network.
Feb 13 15:26:05.305317 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:26:05.305442 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:26:05.313540 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:26:05.331418 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:26:05.336120 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:26:05.357163 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:26:05.375291 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:26:05.396393 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:26:05.396479 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:26:05.415345 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:26:05.415434 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:26:05.424461 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:26:05.424562 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:26:05.451416 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:26:05.451521 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:26:05.470400 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:26:05.470507 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:26:05.496661 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:26:05.501013 systemd-networkd[749]: eth0: DHCPv6 lease lost
Feb 13 15:26:05.515446 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:26:05.535781 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:26:05.535974 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:26:05.557554 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:26:05.557851 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:26:05.576454 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:26:05.576519 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:26:05.612222 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:26:05.623090 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:26:05.623260 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:26:05.635189 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:26:05.635317 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:05.635490 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:26:05.635554 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:26:06.070123 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:26:05.662374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:26:05.662476 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:26:05.682542 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:26:05.706707 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:26:05.707034 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:26:05.731940 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:26:05.732099 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:26:05.751957 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:26:05.752045 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:26:05.769388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:26:05.769463 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:26:05.789315 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:26:05.789433 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:26:05.833328 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:26:05.833471 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:26:05.860433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:26:05.860553 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:26:05.895393 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:26:05.909108 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:26:05.909307 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:26:05.921232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:26:05.921353 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:05.933892 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:26:05.934105 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:26:05.955133 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:26:05.980437 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:26:06.031377 systemd[1]: Switching root.
Feb 13 15:26:06.342072 systemd-journald[184]: Journal stopped
Feb 13 15:26:08.972016 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:26:08.972055 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:26:08.972069 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:26:08.972084 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:26:08.972094 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:26:08.972105 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:26:08.972117 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:26:08.972132 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:26:08.972143 kernel: audit: type=1403 audit(1739460366.870:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:26:08.972157 systemd[1]: Successfully loaded SELinux policy in 82.667ms.
Feb 13 15:26:08.972171 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.276ms.
Feb 13 15:26:08.972185 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:26:08.972197 systemd[1]: Detected virtualization google.
Feb 13 15:26:08.972209 systemd[1]: Detected architecture x86-64.
Feb 13 15:26:08.972225 systemd[1]: Detected first boot.
Feb 13 15:26:08.972238 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:26:08.972251 zram_generator::config[1039]: No configuration found.
Feb 13 15:26:08.972265 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:26:08.972277 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:26:08.972293 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 15:26:08.972408 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:26:08.972426 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:26:08.972439 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:26:08.972451 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:26:08.972467 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:26:08.972480 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:26:08.972497 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:26:08.972511 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:26:08.972524 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:26:08.972537 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:26:08.972551 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:26:08.972564 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:26:08.972577 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:26:08.972591 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:26:08.972607 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:26:08.972620 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:26:08.972636 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:26:08.972649 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:26:08.972662 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:26:08.972676 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:26:08.972693 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:26:08.972707 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:26:08.972720 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:26:08.972737 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:26:08.972750 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:26:08.972765 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:26:08.972778 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:26:08.972791 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:26:08.972804 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:26:08.972818 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:26:08.972835 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:26:08.972849 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:26:08.972862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:08.972923 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:26:08.972949 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:26:08.972968 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:26:08.972988 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:26:08.973009 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:26:08.973024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:26:08.973037 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:26:08.973051 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:26:08.973065 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:26:08.973078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:26:08.973095 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:26:08.973109 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:26:08.973123 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:26:08.973137 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 13 15:26:08.973151 kernel: ACPI: bus type drm_connector registered
Feb 13 15:26:08.973164 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Feb 13 15:26:08.973178 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:26:08.973191 kernel: loop: module loaded
Feb 13 15:26:08.973207 kernel: fuse: init (API version 7.39)
Feb 13 15:26:08.973219 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:26:08.973233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:26:08.973282 systemd-journald[1143]: Collecting audit messages is disabled.
Feb 13 15:26:08.973316 systemd-journald[1143]: Journal started
Feb 13 15:26:08.973343 systemd-journald[1143]: Runtime Journal (/run/log/journal/c3e6c3c595634eb9a2f20065290e2946) is 8.0M, max 148.7M, 140.7M free.
Feb 13 15:26:08.999995 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:26:09.029008 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:26:09.055903 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:09.065947 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:26:09.078700 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:26:09.090352 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:26:09.101291 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:26:09.112293 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:26:09.123337 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:26:09.133372 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:26:09.143540 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:26:09.155495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:26:09.167472 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:26:09.167765 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:26:09.179505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:26:09.179790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:26:09.191492 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:26:09.191793 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:26:09.202497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:26:09.202897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:26:09.214478 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:26:09.214775 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:26:09.225483 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:26:09.225832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:26:09.236555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:26:09.247488 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:26:09.259519 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:26:09.271541 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:26:09.295426 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:26:09.311039 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:26:09.330039 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:26:09.340080 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:26:09.354178 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:26:09.372571 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:26:09.384128 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:26:09.390490 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:26:09.400159 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:26:09.401577 systemd-journald[1143]: Time spent on flushing to /var/log/journal/c3e6c3c595634eb9a2f20065290e2946 is 65.223ms for 923 entries.
Feb 13 15:26:09.401577 systemd-journald[1143]: System Journal (/var/log/journal/c3e6c3c595634eb9a2f20065290e2946) is 8.0M, max 584.8M, 576.8M free.
Feb 13 15:26:09.492481 systemd-journald[1143]: Received client request to flush runtime journal.
Feb 13 15:26:09.416447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:26:09.436275 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:26:09.469127 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:26:09.484967 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:26:09.498639 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:26:09.510611 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:26:09.522705 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:26:09.534451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:26:09.547564 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Feb 13 15:26:09.547612 systemd-tmpfiles[1181]: ACLs are not supported, ignoring.
Feb 13 15:26:09.558515 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:26:09.570756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:26:09.585961 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:26:09.604402 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:26:09.660761 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:26:09.681117 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:26:09.727398 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Feb 13 15:26:09.727965 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Feb 13 15:26:09.736962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:26:10.275115 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:26:10.294319 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:26:10.351518 systemd-udevd[1207]: Using default interface naming scheme 'v255'.
Feb 13 15:26:10.405398 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:26:10.431960 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:26:10.469496 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:26:10.567292 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Feb 13 15:26:10.588239 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:26:10.741907 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 15:26:10.756914 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:26:10.767917 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 15:26:10.773029 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 15:26:10.783334 systemd-networkd[1216]: lo: Link UP
Feb 13 15:26:10.783960 systemd-networkd[1216]: lo: Gained carrier
Feb 13 15:26:10.789758 systemd-networkd[1216]: Enumeration completed
Feb 13 15:26:10.791132 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:26:10.794342 systemd-networkd[1216]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:10.794360 systemd-networkd[1216]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:26:10.795784 systemd-networkd[1216]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:10.797052 systemd-networkd[1216]: eth0: Link UP
Feb 13 15:26:10.797067 systemd-networkd[1216]: eth0: Gained carrier
Feb 13 15:26:10.797092 systemd-networkd[1216]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:26:10.798958 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Feb 13 15:26:10.809059 systemd-networkd[1216]: eth0: DHCPv4 address 10.128.0.79/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:26:10.827557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:26:10.859433 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Feb 13 15:26:10.917959 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:26:10.934161 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1222)
Feb 13 15:26:10.942624 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:26:10.942465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:26:11.050661 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:26:11.051548 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:26:11.058585 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:26:11.093572 lvm[1251]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:26:11.114438 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:26:11.137440 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:26:11.150687 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:26:11.169327 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:26:11.197989 lvm[1258]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:26:11.234086 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:26:11.245700 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:26:11.258162 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:26:11.258228 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:26:11.269185 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:26:11.279987 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:26:11.297224 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:26:11.322895 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:26:11.333489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:26:11.341895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:26:11.359973 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:26:11.383840 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:26:11.396735 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:26:11.416020 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:26:11.430842 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:26:11.436374 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:26:11.455416 kernel: loop0: detected capacity change from 0 to 52056
Feb 13 15:26:11.496923 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:26:11.534920 kernel: loop1: detected capacity change from 0 to 138184
Feb 13 15:26:11.598265 kernel: loop2: detected capacity change from 0 to 140992
Feb 13 15:26:11.684927 kernel: loop3: detected capacity change from 0 to 211296
Feb 13 15:26:11.795974 kernel: loop4: detected capacity change from 0 to 52056
Feb 13 15:26:11.826932 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 15:26:11.870959 kernel: loop6: detected capacity change from 0 to 140992
Feb 13 15:26:11.924924 kernel: loop7: detected capacity change from 0 to 211296
Feb 13 15:26:11.961183 (sd-merge)[1279]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Feb 13 15:26:11.962467 (sd-merge)[1279]: Merged extensions into '/usr'.
Feb 13 15:26:11.969788 systemd[1]: Reloading requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:26:11.969822 systemd[1]: Reloading...
Feb 13 15:26:12.097916 zram_generator::config[1303]: No configuration found.
Feb 13 15:26:12.289720 ldconfig[1262]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:26:12.346283 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:26:12.431839 systemd[1]: Reloading finished in 461 ms.
Feb 13 15:26:12.452971 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:26:12.463856 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:26:12.485213 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:26:12.503260 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:26:12.523135 systemd[1]: Reloading requested from client PID 1354 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:26:12.523187 systemd[1]: Reloading...
Feb 13 15:26:12.559585 systemd-tmpfiles[1355]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:26:12.561158 systemd-tmpfiles[1355]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:26:12.563214 systemd-tmpfiles[1355]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:26:12.563959 systemd-tmpfiles[1355]: ACLs are not supported, ignoring.
Feb 13 15:26:12.564249 systemd-tmpfiles[1355]: ACLs are not supported, ignoring.
Feb 13 15:26:12.574743 systemd-tmpfiles[1355]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:26:12.576809 systemd-tmpfiles[1355]: Skipping /boot
Feb 13 15:26:12.603461 systemd-tmpfiles[1355]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:26:12.605503 systemd-tmpfiles[1355]: Skipping /boot
Feb 13 15:26:12.680930 zram_generator::config[1380]: No configuration found.
Feb 13 15:26:12.741073 systemd-networkd[1216]: eth0: Gained IPv6LL
Feb 13 15:26:12.851436 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:26:12.939306 systemd[1]: Reloading finished in 415 ms.
Feb 13 15:26:12.961500 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:26:12.980047 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:26:13.001189 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:26:13.018363 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:26:13.043070 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:26:13.066197 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:26:13.091977 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:26:13.113260 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:13.113707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:26:13.127161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:26:13.157403 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:26:13.160780 augenrules[1458]: No rules
Feb 13 15:26:13.177353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:26:13.186198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:26:13.186526 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:13.190744 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:26:13.193655 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:26:13.205366 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:26:13.218515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:26:13.219177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:26:13.229425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:26:13.229757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:26:13.242146 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:26:13.254117 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:26:13.254448 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:26:13.280063 systemd-resolved[1450]: Positive Trust Anchors:
Feb 13 15:26:13.280678 systemd-resolved[1450]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:26:13.280823 systemd-resolved[1450]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:26:13.283533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:13.284638 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:26:13.295033 systemd-resolved[1450]: Defaulting to hostname 'linux'.
Feb 13 15:26:13.295822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:26:13.315538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:26:13.337525 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:26:13.349307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:26:13.359413 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:26:13.369154 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:26:13.369588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:13.377523 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:26:13.389054 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:26:13.403078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:26:13.403443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:26:13.415978 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:26:13.416371 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:26:13.429085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:26:13.429430 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:26:13.440126 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:26:13.463208 systemd[1]: Reached target network.target - Network.
Feb 13 15:26:13.472381 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:26:13.482414 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:26:13.494375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:13.500501 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:26:13.509448 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:26:13.518479 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:26:13.535473 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:26:13.555095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:26:13.559237 augenrules[1494]: /sbin/augenrules: No change
Feb 13 15:26:13.575210 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:26:13.581383 augenrules[1517]: No rules
Feb 13 15:26:13.593368 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:26:13.603340 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:26:13.604341 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:26:13.614326 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:26:13.614849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:26:13.619371 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:26:13.620192 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:26:13.631069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:26:13.631452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:26:13.644078 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:26:13.644422 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:26:13.655013 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:26:13.655405 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:26:13.668071 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:26:13.668408 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:26:13.691704 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:26:13.701515 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:26:13.724226 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Feb 13 15:26:13.734188 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:26:13.734406 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:26:13.744461 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:26:13.756257 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:26:13.768434 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:26:13.778363 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:26:13.790150 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:26:13.802177 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:26:13.802365 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:26:13.811127 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:26:13.821806 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:26:13.834812 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:26:13.843584 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:26:13.845930 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Feb 13 15:26:13.857551 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:26:13.875035 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:26:13.885112 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:26:13.895135 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:26:13.904514 systemd[1]: System is tainted: cgroupsv1
Feb 13 15:26:13.904635 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:26:13.904678 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:26:13.911106 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:26:13.931032 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:26:13.948924 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:26:13.971062 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:26:14.001372 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:26:14.012036 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:26:14.017831 jq[1558]: false
Feb 13 15:26:14.024184 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:14.047265 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:26:14.073772 coreos-metadata[1555]: Feb 13 15:26:14.067 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Feb 13 15:26:14.073772 coreos-metadata[1555]: Feb 13 15:26:14.069 INFO Fetch successful
Feb 13 15:26:14.073772 coreos-metadata[1555]: Feb 13 15:26:14.069 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Feb 13 15:26:14.073772 coreos-metadata[1555]: Feb 13 15:26:14.073 INFO Fetch successful
Feb 13 15:26:14.073772 coreos-metadata[1555]: Feb 13 15:26:14.073 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Feb 13 15:26:14.075547 coreos-metadata[1555]: Feb 13 15:26:14.074 INFO Fetch successful
Feb 13 15:26:14.075547 coreos-metadata[1555]: Feb 13 15:26:14.074 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Feb 13 15:26:14.079065 coreos-metadata[1555]: Feb 13 15:26:14.075 INFO Fetch successful
Feb 13 15:26:14.075960 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:26:14.106187 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found loop4
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found loop5
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found loop6
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found loop7
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found sda
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found sda1
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found sda2
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found sda3
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found usr
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found sda4
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found sda6
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found sda7
Feb 13 15:26:14.128572 extend-filesystems[1561]: Found sda9
Feb 13 15:26:14.128572 extend-filesystems[1561]: Checking size of /dev/sda9
Feb 13 15:26:14.349154 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Feb 13 15:26:14.349257 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Feb 13 15:26:14.349289 extend-filesystems[1561]: Resized partition /dev/sda9
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:21:05 UTC 2025 (1): Starting
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: ----------------------------------------------------
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: corporation.  Support and training for ntp-4 are
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: available at https://www.nwtime.org/support
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: ----------------------------------------------------
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: proto: precision = 0.079 usec (-24)
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: basedate set to 2025-02-01
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Listen normally on 3 eth0 10.128.0.79:123
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Listen normally on 4 lo [::1]:123
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:4f%2]:123
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: Listening on routing socket on fd #22 for interface updates
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:26:14.360519 ntpd[1567]: 13 Feb 15:26:14 ntpd[1567]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:26:14.131218 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Feb 13 15:26:14.183819 dbus-daemon[1557]: [system] SELinux support is enabled
Feb 13 15:26:14.379364 extend-filesystems[1588]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:26:14.379364 extend-filesystems[1588]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Feb 13 15:26:14.379364 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 2
Feb 13 15:26:14.379364 extend-filesystems[1588]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Feb 13 15:26:14.154923 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:26:14.384728 extend-filesystems[1561]: Resized filesystem in /dev/sda9
Feb 13 15:26:14.198238 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:26:14.441403 init.sh[1577]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Feb 13 15:26:14.441403 init.sh[1577]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Feb 13 15:26:14.441403 init.sh[1577]: + /usr/bin/google_instance_setup
Feb 13 15:26:14.236150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:26:14.265208 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:26:14.301839 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Feb 13 15:26:14.311378 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:26:14.340467 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:26:14.452756 jq[1606]: true
Feb 13 15:26:14.463438 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1609)
Feb 13 15:26:14.378809 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:26:14.446890 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:26:14.209271 dbus-daemon[1557]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1216 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:26:14.462861 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:26:14.467832 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:26:14.472062 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:26:14.492769 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:26:14.493483 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:26:14.505195 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:26:14.524023 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:26:14.530031 update_engine[1602]: I20250213 15:26:14.525691  1602 main.cc:92] Flatcar Update Engine starting
Feb 13 15:26:14.524461 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:26:14.547940 update_engine[1602]: I20250213 15:26:14.545231  1602 update_check_scheduler.cc:74] Next update check in 11m25s
Feb 13 15:26:14.616750 (ntainerd)[1622]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:26:14.639437 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:26:14.684183 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:26:14.688171 jq[1621]: true
Feb 13 15:26:14.713446 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:26:14.727484 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:26:14.728181 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:26:14.728252 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:26:14.733589 tar[1619]: linux-amd64/helm
Feb 13 15:26:14.755368 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:26:14.767198 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:26:14.767254 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:26:14.780462 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:26:14.800276 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:26:14.845001 systemd-logind[1596]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:26:14.845042 systemd-logind[1596]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 15:26:14.845074 systemd-logind[1596]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:26:14.850569 systemd-logind[1596]: New seat seat0.
Feb 13 15:26:14.898466 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:26:15.107928 bash[1662]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:26:15.109652 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:26:15.135443 systemd[1]: Starting sshkeys.service...
Feb 13 15:26:15.205101 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:26:15.227045 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:26:15.326972 sshd_keygen[1607]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:26:15.380778 locksmithd[1644]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:26:15.418153 coreos-metadata[1667]: Feb 13 15:26:15.418 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Feb 13 15:26:15.437914 coreos-metadata[1667]: Feb 13 15:26:15.433 INFO Fetch failed with 404: resource not found
Feb 13 15:26:15.437914 coreos-metadata[1667]: Feb 13 15:26:15.434 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Feb 13 15:26:15.437914 coreos-metadata[1667]: Feb 13 15:26:15.436 INFO Fetch successful
Feb 13 15:26:15.437914 coreos-metadata[1667]: Feb 13 15:26:15.436 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Feb 13 15:26:15.442437 coreos-metadata[1667]: Feb 13 15:26:15.442 INFO Fetch failed with 404: resource not found
Feb 13 15:26:15.442437 coreos-metadata[1667]: Feb 13 15:26:15.442 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Feb 13 15:26:15.449185 coreos-metadata[1667]: Feb 13 15:26:15.449 INFO Fetch failed with 404: resource not found
Feb 13 15:26:15.449402 coreos-metadata[1667]: Feb 13 15:26:15.449 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Feb 13 15:26:15.451917 coreos-metadata[1667]: Feb 13 15:26:15.451 INFO Fetch successful
Feb 13 15:26:15.463854 unknown[1667]: wrote ssh authorized keys file for user: core
Feb 13 15:26:15.476846 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:26:15.478132 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:26:15.479476 dbus-daemon[1557]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1643 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:26:15.486756 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:26:15.545603 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:26:15.573958 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:26:15.618975 update-ssh-keys[1689]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:26:15.616498 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:26:15.617054 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:26:15.630607 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:26:15.658342 systemd[1]: Finished sshkeys.service.
Feb 13 15:26:15.689524 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:26:15.761809 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:26:15.790406 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:26:15.810328 polkitd[1692]: Started polkitd version 121
Feb 13 15:26:15.812965 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:26:15.823745 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:26:15.848254 polkitd[1692]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:26:15.848396 polkitd[1692]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:26:15.852247 polkitd[1692]: Finished loading, compiling and executing 2 rules
Feb 13 15:26:15.855341 dbus-daemon[1557]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:26:15.855656 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:26:15.856479 polkitd[1692]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:26:15.942255 systemd-hostnamed[1643]: Hostname set to <ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal> (transient)
Feb 13 15:26:15.943022 systemd-resolved[1450]: System hostname changed to 'ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal'.
Feb 13 15:26:15.951917 containerd[1622]: time="2025-02-13T15:26:15.947089036Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:26:16.086937 containerd[1622]: time="2025-02-13T15:26:16.086745094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:26:16.091326 containerd[1622]: time="2025-02-13T15:26:16.091249744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:26:16.091555 containerd[1622]: time="2025-02-13T15:26:16.091531013Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:26:16.091689 containerd[1622]: time="2025-02-13T15:26:16.091669478Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:26:16.092112 containerd[1622]: time="2025-02-13T15:26:16.092075672Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:26:16.092264 containerd[1622]: time="2025-02-13T15:26:16.092242800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:26:16.092482 containerd[1622]: time="2025-02-13T15:26:16.092455617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:26:16.092575 containerd[1622]: time="2025-02-13T15:26:16.092558242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:26:16.093264 containerd[1622]: time="2025-02-13T15:26:16.093227888Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.093378342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.093414866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.093432772Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.093590916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.093999495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.094314520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.094349383Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.094474640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:26:16.094592 containerd[1622]: time="2025-02-13T15:26:16.094551710Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:26:16.107506 containerd[1622]: time="2025-02-13T15:26:16.107431007Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:26:16.107824 containerd[1622]: time="2025-02-13T15:26:16.107543114Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:26:16.107824 containerd[1622]: time="2025-02-13T15:26:16.107574701Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:26:16.107824 containerd[1622]: time="2025-02-13T15:26:16.107625358Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:26:16.107824 containerd[1622]: time="2025-02-13T15:26:16.107653350Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:26:16.108109 containerd[1622]: time="2025-02-13T15:26:16.107939566Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:26:16.108575 containerd[1622]: time="2025-02-13T15:26:16.108513551Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.108760473Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.108793107Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.108824489Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.108850257Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.110988747Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111036570Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111071497Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111117151Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111142060Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111165299Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111186815Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111226270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111254485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112088 containerd[1622]: time="2025-02-13T15:26:16.111278666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111304289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111327026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111356416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111385187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111410736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111433243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111473525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111495140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111516543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111539902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111565131Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111610161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111638535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.112742 containerd[1622]: time="2025-02-13T15:26:16.111659654Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:26:16.113381 containerd[1622]: time="2025-02-13T15:26:16.111746457Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:26:16.113381 containerd[1622]: time="2025-02-13T15:26:16.111782484Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:26:16.113381 containerd[1622]: time="2025-02-13T15:26:16.111804976Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:26:16.113381 containerd[1622]: time="2025-02-13T15:26:16.111831534Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:26:16.113381 containerd[1622]: time="2025-02-13T15:26:16.111851577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.113381 containerd[1622]: time="2025-02-13T15:26:16.111900033Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:26:16.113381 containerd[1622]: time="2025-02-13T15:26:16.111924077Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:26:16.113381 containerd[1622]: time="2025-02-13T15:26:16.111940280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:26:16.113747 containerd[1622]: time="2025-02-13T15:26:16.112465563Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:26:16.113747 containerd[1622]: time="2025-02-13T15:26:16.112554195Z" level=info msg="Connect containerd service"
Feb 13 15:26:16.113747 containerd[1622]: time="2025-02-13T15:26:16.112627094Z" level=info msg="using legacy CRI server"
Feb 13 15:26:16.113747 containerd[1622]: time="2025-02-13T15:26:16.112641240Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:26:16.116346 containerd[1622]: time="2025-02-13T15:26:16.112868766Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:26:16.121185 containerd[1622]: time="2025-02-13T15:26:16.121099489Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:26:16.128316 containerd[1622]: time="2025-02-13T15:26:16.128156027Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:26:16.129909 containerd[1622]: time="2025-02-13T15:26:16.128581244Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:26:16.129909 containerd[1622]: time="2025-02-13T15:26:16.128729104Z" level=info msg="Start subscribing containerd event"
Feb 13 15:26:16.129909 containerd[1622]: time="2025-02-13T15:26:16.128832089Z" level=info msg="Start recovering state"
Feb 13 15:26:16.132426 containerd[1622]: time="2025-02-13T15:26:16.132347766Z" level=info msg="Start event monitor"
Feb 13 15:26:16.133561 containerd[1622]: time="2025-02-13T15:26:16.133501512Z" level=info msg="Start snapshots syncer"
Feb 13 15:26:16.133731 containerd[1622]: time="2025-02-13T15:26:16.133712915Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:26:16.133839 containerd[1622]: time="2025-02-13T15:26:16.133818894Z" level=info msg="Start streaming server"
Feb 13 15:26:16.134303 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:26:16.135074 containerd[1622]: time="2025-02-13T15:26:16.134761102Z" level=info msg="containerd successfully booted in 0.196484s"
Feb 13 15:26:16.377726 tar[1619]: linux-amd64/LICENSE
Feb 13 15:26:16.378664 tar[1619]: linux-amd64/README.md
Feb 13 15:26:16.423952 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 15:26:16.434861 instance-setup[1586]: INFO Running google_set_multiqueue.
Feb 13 15:26:16.459227 instance-setup[1586]: INFO Set channels for eth0 to 2.
Feb 13 15:26:16.464868 instance-setup[1586]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Feb 13 15:26:16.468738 instance-setup[1586]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Feb 13 15:26:16.468818 instance-setup[1586]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Feb 13 15:26:16.471245 instance-setup[1586]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Feb 13 15:26:16.471365 instance-setup[1586]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Feb 13 15:26:16.474071 instance-setup[1586]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Feb 13 15:26:16.474161 instance-setup[1586]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Feb 13 15:26:16.476521 instance-setup[1586]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Feb 13 15:26:16.487128 instance-setup[1586]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Feb 13 15:26:16.492139 instance-setup[1586]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Feb 13 15:26:16.494790 instance-setup[1586]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Feb 13 15:26:16.494846 instance-setup[1586]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Feb 13 15:26:16.529315 init.sh[1577]: + /usr/bin/google_metadata_script_runner --script-type startup
Feb 13 15:26:16.725027 startup-script[1754]: INFO Starting startup scripts.
Feb 13 15:26:16.733538 startup-script[1754]: INFO No startup scripts found in metadata.
Feb 13 15:26:16.733632 startup-script[1754]: INFO Finished running startup scripts.
Feb 13 15:26:16.774042 init.sh[1577]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Feb 13 15:26:16.774042 init.sh[1577]: + daemon_pids=()
Feb 13 15:26:16.774042 init.sh[1577]: + for d in accounts clock_skew network
Feb 13 15:26:16.776165 init.sh[1577]: + daemon_pids+=($!)
Feb 13 15:26:16.776165 init.sh[1577]: + for d in accounts clock_skew network
Feb 13 15:26:16.776165 init.sh[1577]: + daemon_pids+=($!)
Feb 13 15:26:16.776446 init.sh[1757]: + /usr/bin/google_accounts_daemon
Feb 13 15:26:16.777014 init.sh[1758]: + /usr/bin/google_clock_skew_daemon
Feb 13 15:26:16.777371 init.sh[1577]: + for d in accounts clock_skew network
Feb 13 15:26:16.777371 init.sh[1577]: + daemon_pids+=($!)
Feb 13 15:26:16.777371 init.sh[1577]: + NOTIFY_SOCKET=/run/systemd/notify
Feb 13 15:26:16.777371 init.sh[1577]: + /usr/bin/systemd-notify --ready
Feb 13 15:26:16.777536 init.sh[1759]: + /usr/bin/google_network_daemon
Feb 13 15:26:16.796161 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Feb 13 15:26:16.811434 init.sh[1577]: + wait -n 1757 1758 1759
Feb 13 15:26:17.264214 google-networking[1759]: INFO Starting Google Networking daemon.
Feb 13 15:26:17.269828 google-clock-skew[1758]: INFO Starting Google Clock Skew daemon.
Feb 13 15:26:17.281526 google-clock-skew[1758]: INFO Clock drift token has changed: 0.
Feb 13 15:26:17.000534 systemd-resolved[1450]: Clock change detected. Flushing caches.
Feb 13 15:26:17.027538 systemd-journald[1143]: Time jumped backwards, rotating.
Feb 13 15:26:17.004321 google-clock-skew[1758]: INFO Synced system time with hardware clock.
Feb 13 15:26:17.047469 groupadd[1767]: group added to /etc/group: name=google-sudoers, GID=1000
Feb 13 15:26:17.054374 groupadd[1767]: group added to /etc/gshadow: name=google-sudoers
Feb 13 15:26:17.070462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:17.082967 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:26:17.087061 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:26:17.096817 systemd[1]: Startup finished in 12.283s (kernel) + 10.599s (userspace) = 22.882s.
Feb 13 15:26:17.126335 groupadd[1767]: new group: name=google-sudoers, GID=1000
Feb 13 15:26:17.172355 google-accounts[1757]: INFO Starting Google Accounts daemon.
Feb 13 15:26:17.188433 google-accounts[1757]: WARNING OS Login not installed.
Feb 13 15:26:17.191367 google-accounts[1757]: INFO Creating a new user account for 0.
Feb 13 15:26:17.197568 init.sh[1791]: useradd: invalid user name '0': use --badname to ignore
Feb 13 15:26:17.197774 google-accounts[1757]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Feb 13 15:26:18.283674 kubelet[1782]: E0213 15:26:18.283531    1782 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:26:18.287556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:26:18.288052 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:26:21.228520 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:26:21.236764 systemd[1]: Started sshd@0-10.128.0.79:22-139.178.68.195:46624.service - OpenSSH per-connection server daemon (139.178.68.195:46624).
Feb 13 15:26:21.559661 sshd[1801]: Accepted publickey for core from 139.178.68.195 port 46624 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:26:21.562636 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:21.579615 systemd-logind[1596]: New session 1 of user core.
Feb 13 15:26:21.582439 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:26:21.589776 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:26:21.615927 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:26:21.627450 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:26:21.655616 (systemd)[1807]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:26:21.797350 systemd[1807]: Queued start job for default target default.target.
Feb 13 15:26:21.798102 systemd[1807]: Created slice app.slice - User Application Slice.
Feb 13 15:26:21.798162 systemd[1807]: Reached target paths.target - Paths.
Feb 13 15:26:21.798189 systemd[1807]: Reached target timers.target - Timers.
Feb 13 15:26:21.803375 systemd[1807]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:26:21.826532 systemd[1807]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:26:21.826675 systemd[1807]: Reached target sockets.target - Sockets.
Feb 13 15:26:21.826707 systemd[1807]: Reached target basic.target - Basic System.
Feb 13 15:26:21.826813 systemd[1807]: Reached target default.target - Main User Target.
Feb 13 15:26:21.826879 systemd[1807]: Startup finished in 160ms.
Feb 13 15:26:21.827316 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:26:21.835609 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:26:22.067798 systemd[1]: Started sshd@1-10.128.0.79:22-139.178.68.195:46636.service - OpenSSH per-connection server daemon (139.178.68.195:46636).
Feb 13 15:26:22.374843 sshd[1819]: Accepted publickey for core from 139.178.68.195 port 46636 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:26:22.377055 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:22.383856 systemd-logind[1596]: New session 2 of user core.
Feb 13 15:26:22.392718 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:26:22.593302 sshd[1822]: Connection closed by 139.178.68.195 port 46636
Feb 13 15:26:22.594500 sshd-session[1819]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:22.602384 systemd[1]: sshd@1-10.128.0.79:22-139.178.68.195:46636.service: Deactivated successfully.
Feb 13 15:26:22.607725 systemd-logind[1596]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:26:22.608167 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:26:22.611466 systemd-logind[1596]: Removed session 2.
Feb 13 15:26:22.646743 systemd[1]: Started sshd@2-10.128.0.79:22-139.178.68.195:46650.service - OpenSSH per-connection server daemon (139.178.68.195:46650).
Feb 13 15:26:22.942670 sshd[1827]: Accepted publickey for core from 139.178.68.195 port 46650 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:26:22.944750 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:22.951978 systemd-logind[1596]: New session 3 of user core.
Feb 13 15:26:22.962779 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:26:23.153472 sshd[1830]: Connection closed by 139.178.68.195 port 46650
Feb 13 15:26:23.154581 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:23.162410 systemd[1]: sshd@2-10.128.0.79:22-139.178.68.195:46650.service: Deactivated successfully.
Feb 13 15:26:23.168093 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:26:23.169134 systemd-logind[1596]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:26:23.170790 systemd-logind[1596]: Removed session 3.
Feb 13 15:26:23.202686 systemd[1]: Started sshd@3-10.128.0.79:22-139.178.68.195:46662.service - OpenSSH per-connection server daemon (139.178.68.195:46662).
Feb 13 15:26:23.510388 sshd[1835]: Accepted publickey for core from 139.178.68.195 port 46662 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:26:23.512217 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:23.518420 systemd-logind[1596]: New session 4 of user core.
Feb 13 15:26:23.529605 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:26:23.726868 sshd[1838]: Connection closed by 139.178.68.195 port 46662
Feb 13 15:26:23.727783 sshd-session[1835]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:23.733866 systemd[1]: sshd@3-10.128.0.79:22-139.178.68.195:46662.service: Deactivated successfully.
Feb 13 15:26:23.738304 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:26:23.739488 systemd-logind[1596]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:26:23.740971 systemd-logind[1596]: Removed session 4.
Feb 13 15:26:23.784927 systemd[1]: Started sshd@4-10.128.0.79:22-139.178.68.195:46672.service - OpenSSH per-connection server daemon (139.178.68.195:46672).
Feb 13 15:26:24.114865 sshd[1843]: Accepted publickey for core from 139.178.68.195 port 46672 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:26:24.116655 sshd-session[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:24.123083 systemd-logind[1596]: New session 5 of user core.
Feb 13 15:26:24.129648 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:26:24.331744 sudo[1847]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:26:24.332289 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:26:24.350725 sudo[1847]: pam_unix(sudo:session): session closed for user root
Feb 13 15:26:24.398251 sshd[1846]: Connection closed by 139.178.68.195 port 46672
Feb 13 15:26:24.399725 sshd-session[1843]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:24.404757 systemd[1]: sshd@4-10.128.0.79:22-139.178.68.195:46672.service: Deactivated successfully.
Feb 13 15:26:24.410059 systemd-logind[1596]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:26:24.411598 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:26:24.414881 systemd-logind[1596]: Removed session 5.
Feb 13 15:26:24.442906 systemd[1]: Started sshd@5-10.128.0.79:22-139.178.68.195:46684.service - OpenSSH per-connection server daemon (139.178.68.195:46684).
Feb 13 15:26:24.747984 sshd[1852]: Accepted publickey for core from 139.178.68.195 port 46684 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:26:24.749852 sshd-session[1852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:24.756462 systemd-logind[1596]: New session 6 of user core.
Feb 13 15:26:24.763617 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:26:24.928771 sudo[1857]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:26:24.929336 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:26:24.934499 sudo[1857]: pam_unix(sudo:session): session closed for user root
Feb 13 15:26:24.948093 sudo[1856]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:26:24.948615 sudo[1856]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:26:24.965934 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:26:25.007009 augenrules[1879]: No rules
Feb 13 15:26:25.008417 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:26:25.008880 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:26:25.013611 sudo[1856]: pam_unix(sudo:session): session closed for user root
Feb 13 15:26:25.057242 sshd[1855]: Connection closed by 139.178.68.195 port 46684
Feb 13 15:26:25.058099 sshd-session[1852]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:25.063839 systemd[1]: sshd@5-10.128.0.79:22-139.178.68.195:46684.service: Deactivated successfully.
Feb 13 15:26:25.068951 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:26:25.069908 systemd-logind[1596]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:26:25.071320 systemd-logind[1596]: Removed session 6.
Feb 13 15:26:25.105956 systemd[1]: Started sshd@6-10.128.0.79:22-139.178.68.195:46686.service - OpenSSH per-connection server daemon (139.178.68.195:46686).
Feb 13 15:26:25.407982 sshd[1888]: Accepted publickey for core from 139.178.68.195 port 46686 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:26:25.410494 sshd-session[1888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:26:25.418288 systemd-logind[1596]: New session 7 of user core.
Feb 13 15:26:25.425808 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:26:25.590264 sudo[1892]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:26:25.590817 sudo[1892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:26:26.097007 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 15:26:26.097128 (dockerd)[1910]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 15:26:26.567330 dockerd[1910]: time="2025-02-13T15:26:26.565525732Z" level=info msg="Starting up"
Feb 13 15:26:27.177494 systemd[1]: var-lib-docker-metacopy\x2dcheck1248329678-merged.mount: Deactivated successfully.
Feb 13 15:26:27.199548 dockerd[1910]: time="2025-02-13T15:26:27.199457080Z" level=info msg="Loading containers: start."
Feb 13 15:26:27.444411 kernel: Initializing XFRM netlink socket
Feb 13 15:26:27.568854 systemd-networkd[1216]: docker0: Link UP
Feb 13 15:26:27.600352 dockerd[1910]: time="2025-02-13T15:26:27.600282410Z" level=info msg="Loading containers: done."
Feb 13 15:26:27.626511 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck486927021-merged.mount: Deactivated successfully.
Feb 13 15:26:27.628443 dockerd[1910]: time="2025-02-13T15:26:27.628306433Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 15:26:27.628599 dockerd[1910]: time="2025-02-13T15:26:27.628498305Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Feb 13 15:26:27.628744 dockerd[1910]: time="2025-02-13T15:26:27.628696685Z" level=info msg="Daemon has completed initialization"
Feb 13 15:26:27.683564 dockerd[1910]: time="2025-02-13T15:26:27.683472767Z" level=info msg="API listen on /run/docker.sock"
Feb 13 15:26:27.683952 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 15:26:28.327786 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:26:28.341076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:28.652636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:28.666024 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:26:28.877850 kubelet[2111]: E0213 15:26:28.877767    2111 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:26:28.886711 containerd[1622]: time="2025-02-13T15:26:28.886423858Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\""
Feb 13 15:26:28.890525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:26:28.890943 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:26:29.389930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992545511.mount: Deactivated successfully.
Feb 13 15:26:31.279716 containerd[1622]: time="2025-02-13T15:26:31.279625206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:31.281475 containerd[1622]: time="2025-02-13T15:26:31.281376620Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.14: active requests=0, bytes read=35148911"
Feb 13 15:26:31.283410 containerd[1622]: time="2025-02-13T15:26:31.283354721Z" level=info msg="ImageCreate event name:\"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:31.290751 containerd[1622]: time="2025-02-13T15:26:31.290637249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:31.292979 containerd[1622]: time="2025-02-13T15:26:31.292597134Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.14\" with image id \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.14\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1432b456b21015c99783d2b3a2010873fb67bf946c89d45e6d356449e083dcfb\", size \"35139083\" in 2.40609041s"
Feb 13 15:26:31.292979 containerd[1622]: time="2025-02-13T15:26:31.292719021Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.14\" returns image reference \"sha256:41955df92b2799aec2c2840b2fc079945d248b6c88ab18062545d8065a0cd2ce\""
Feb 13 15:26:31.329598 containerd[1622]: time="2025-02-13T15:26:31.329532344Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\""
Feb 13 15:26:33.113408 containerd[1622]: time="2025-02-13T15:26:33.113307477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:33.115356 containerd[1622]: time="2025-02-13T15:26:33.115251280Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.14: active requests=0, bytes read=32215098"
Feb 13 15:26:33.117415 containerd[1622]: time="2025-02-13T15:26:33.116818476Z" level=info msg="ImageCreate event name:\"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:33.122557 containerd[1622]: time="2025-02-13T15:26:33.122435597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:33.124200 containerd[1622]: time="2025-02-13T15:26:33.124107074Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.14\" with image id \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.14\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23ccdb5e7e2c317f5727652ef7e64ef91ead34a3c73dfa9c3ab23b3a5028e280\", size \"33659710\" in 1.794514444s"
Feb 13 15:26:33.124351 containerd[1622]: time="2025-02-13T15:26:33.124219172Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.14\" returns image reference \"sha256:2c6e411a187e5df0e7d583a21e7ace20746e47cec95bf4cd597e0617e47f328b\""
Feb 13 15:26:33.162735 containerd[1622]: time="2025-02-13T15:26:33.162077600Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\""
Feb 13 15:26:34.423049 containerd[1622]: time="2025-02-13T15:26:34.422955039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:34.424860 containerd[1622]: time="2025-02-13T15:26:34.424778828Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.14: active requests=0, bytes read=17335972"
Feb 13 15:26:34.426575 containerd[1622]: time="2025-02-13T15:26:34.426502828Z" level=info msg="ImageCreate event name:\"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:34.430841 containerd[1622]: time="2025-02-13T15:26:34.430761592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:34.432488 containerd[1622]: time="2025-02-13T15:26:34.432305116Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.14\" with image id \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.14\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf0046be3eb6c4831b6b2a1b3e24f18e27778663890144478f11a82622b48c48\", size \"18780620\" in 1.270167493s"
Feb 13 15:26:34.432488 containerd[1622]: time="2025-02-13T15:26:34.432356982Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.14\" returns image reference \"sha256:94dd66cb984e2a4209d2cb2cad88e199b7efb440fc198324ab2e12642de735fc\""
Feb 13 15:26:34.464470 containerd[1622]: time="2025-02-13T15:26:34.464421475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:26:35.591050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962909069.mount: Deactivated successfully.
Feb 13 15:26:36.174920 containerd[1622]: time="2025-02-13T15:26:36.174826202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:36.176449 containerd[1622]: time="2025-02-13T15:26:36.176359611Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28622487"
Feb 13 15:26:36.178302 containerd[1622]: time="2025-02-13T15:26:36.178192247Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:36.181647 containerd[1622]: time="2025-02-13T15:26:36.181542754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:36.182812 containerd[1622]: time="2025-02-13T15:26:36.182591894Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.718059449s"
Feb 13 15:26:36.182812 containerd[1622]: time="2025-02-13T15:26:36.182653351Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\""
Feb 13 15:26:36.218199 containerd[1622]: time="2025-02-13T15:26:36.218120299Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:26:36.676314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930855808.mount: Deactivated successfully.
Feb 13 15:26:37.802656 containerd[1622]: time="2025-02-13T15:26:37.802564355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:37.804603 containerd[1622]: time="2025-02-13T15:26:37.804502606Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Feb 13 15:26:37.806174 containerd[1622]: time="2025-02-13T15:26:37.805940883Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:37.812514 containerd[1622]: time="2025-02-13T15:26:37.812383436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:37.814460 containerd[1622]: time="2025-02-13T15:26:37.814127366Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.59592134s"
Feb 13 15:26:37.814460 containerd[1622]: time="2025-02-13T15:26:37.814236085Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 15:26:37.849974 containerd[1622]: time="2025-02-13T15:26:37.849913257Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:26:38.226060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1111777764.mount: Deactivated successfully.
Feb 13 15:26:38.232604 containerd[1622]: time="2025-02-13T15:26:38.232438971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:38.234285 containerd[1622]: time="2025-02-13T15:26:38.234188470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188"
Feb 13 15:26:38.236182 containerd[1622]: time="2025-02-13T15:26:38.235683597Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:38.240357 containerd[1622]: time="2025-02-13T15:26:38.240281947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:38.242133 containerd[1622]: time="2025-02-13T15:26:38.242061528Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 392.08118ms"
Feb 13 15:26:38.242133 containerd[1622]: time="2025-02-13T15:26:38.242126660Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 15:26:38.283427 containerd[1622]: time="2025-02-13T15:26:38.283357846Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Feb 13 15:26:38.690977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1492470388.mount: Deactivated successfully.
Feb 13 15:26:39.143619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:26:39.152824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:39.472498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:39.481672 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:26:39.558076 kubelet[2311]: E0213 15:26:39.557972    2311 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:26:39.562290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:26:39.562777 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:26:41.151340 containerd[1622]: time="2025-02-13T15:26:41.151246985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:41.153577 containerd[1622]: time="2025-02-13T15:26:41.153486358Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56659115"
Feb 13 15:26:41.155254 containerd[1622]: time="2025-02-13T15:26:41.155188008Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:41.161246 containerd[1622]: time="2025-02-13T15:26:41.161124031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:26:41.163740 containerd[1622]: time="2025-02-13T15:26:41.163062682Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.879637851s"
Feb 13 15:26:41.163740 containerd[1622]: time="2025-02-13T15:26:41.163163905Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Feb 13 15:26:45.222840 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:45.230620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:45.289411 systemd[1]: Reloading requested from client PID 2403 ('systemctl') (unit session-7.scope)...
Feb 13 15:26:45.289447 systemd[1]: Reloading...
Feb 13 15:26:45.473181 zram_generator::config[2444]: No configuration found.
Feb 13 15:26:45.659179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:26:45.762409 systemd[1]: Reloading finished in 471 ms.
Feb 13 15:26:45.793917 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:26:45.842268 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 15:26:45.842789 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 15:26:45.843438 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:45.856932 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:46.111490 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:46.132051 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:26:46.201701 kubelet[2511]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:26:46.201701 kubelet[2511]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:26:46.201701 kubelet[2511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:26:46.202433 kubelet[2511]: I0213 15:26:46.201845    2511 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:26:46.887581 kubelet[2511]: I0213 15:26:46.887514    2511 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:26:46.887581 kubelet[2511]: I0213 15:26:46.887560    2511 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:26:46.887994 kubelet[2511]: I0213 15:26:46.887957    2511 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:26:46.919200 kubelet[2511]: E0213 15:26:46.919104    2511 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.79:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:46.920359 kubelet[2511]: I0213 15:26:46.920115    2511 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:26:46.938924 kubelet[2511]: I0213 15:26:46.938867    2511 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:26:46.939682 kubelet[2511]: I0213 15:26:46.939642    2511 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:26:46.939994 kubelet[2511]: I0213 15:26:46.939958    2511 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:26:46.941295 kubelet[2511]: I0213 15:26:46.941258    2511 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:26:46.941405 kubelet[2511]: I0213 15:26:46.941301    2511 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:26:46.941526 kubelet[2511]: I0213 15:26:46.941498    2511 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:26:46.941690 kubelet[2511]: I0213 15:26:46.941676    2511 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:26:46.941769 kubelet[2511]: I0213 15:26:46.941702    2511 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:26:46.941769 kubelet[2511]: I0213 15:26:46.941749    2511 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:26:46.941841 kubelet[2511]: I0213 15:26:46.941776    2511 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:26:46.944813 kubelet[2511]: W0213 15:26:46.944746    2511 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:46.945638 kubelet[2511]: E0213 15:26:46.944983    2511 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:46.945638 kubelet[2511]: I0213 15:26:46.945129    2511 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:26:46.952179 kubelet[2511]: I0213 15:26:46.950635    2511 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:26:46.952179 kubelet[2511]: W0213 15:26:46.950758    2511 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:26:46.952393 kubelet[2511]: W0213 15:26:46.952200    2511 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:46.952393 kubelet[2511]: E0213 15:26:46.952284    2511 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:46.952762 kubelet[2511]: I0213 15:26:46.952731    2511 server.go:1256] "Started kubelet"
Feb 13 15:26:46.955120 kubelet[2511]: I0213 15:26:46.955094    2511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:26:46.964479 kubelet[2511]: I0213 15:26:46.964432    2511 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:26:46.965962 kubelet[2511]: I0213 15:26:46.965900    2511 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:26:46.967579 kubelet[2511]: I0213 15:26:46.967553    2511 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:26:46.968056 kubelet[2511]: I0213 15:26:46.968035    2511 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:26:46.970309 kubelet[2511]: I0213 15:26:46.970284    2511 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:26:46.973891 kubelet[2511]: E0213 15:26:46.972814    2511 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.79:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.79:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal.1823ce0629454819  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,UID:ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 15:26:46.952675353 +0000 UTC m=+0.813053668,LastTimestamp:2025-02-13 15:26:46.952675353 +0000 UTC m=+0.813053668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,}"
Feb 13 15:26:46.974786 kubelet[2511]: E0213 15:26:46.974760    2511 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.79:6443: connect: connection refused" interval="200ms"
Feb 13 15:26:46.976177 kubelet[2511]: I0213 15:26:46.975607    2511 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:26:46.976957 kubelet[2511]: I0213 15:26:46.976930    2511 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:26:46.976957 kubelet[2511]: I0213 15:26:46.976957    2511 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:26:46.977097 kubelet[2511]: I0213 15:26:46.977049    2511 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:26:46.980903 kubelet[2511]: I0213 15:26:46.980872    2511 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:26:46.998305 kubelet[2511]: W0213 15:26:46.996818    2511 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:47.005406 kubelet[2511]: E0213 15:26:47.004485    2511 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:47.005406 kubelet[2511]: E0213 15:26:46.998480    2511 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:26:47.016244 kubelet[2511]: I0213 15:26:47.016199    2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:26:47.019293 kubelet[2511]: I0213 15:26:47.019255    2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:26:47.019718 kubelet[2511]: I0213 15:26:47.019537    2511 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:26:47.019886 kubelet[2511]: I0213 15:26:47.019870    2511 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:26:47.020203 kubelet[2511]: E0213 15:26:47.020043    2511 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:26:47.022934 kubelet[2511]: W0213 15:26:47.022884    2511 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:47.025525 kubelet[2511]: E0213 15:26:47.025085    2511 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:47.027210 kubelet[2511]: I0213 15:26:47.026887    2511 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:26:47.027210 kubelet[2511]: I0213 15:26:47.026914    2511 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:26:47.027210 kubelet[2511]: I0213 15:26:47.026951    2511 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:26:47.030229 kubelet[2511]: I0213 15:26:47.030200    2511 policy_none.go:49] "None policy: Start"
Feb 13 15:26:47.031366 kubelet[2511]: I0213 15:26:47.031329    2511 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:26:47.031366 kubelet[2511]: I0213 15:26:47.031367    2511 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:26:47.043392 kubelet[2511]: I0213 15:26:47.042201    2511 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:26:47.043392 kubelet[2511]: I0213 15:26:47.042828    2511 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:26:47.048814 kubelet[2511]: E0213 15:26:47.048783    2511 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" not found"
Feb 13 15:26:47.077328 kubelet[2511]: I0213 15:26:47.077262    2511 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.077953 kubelet[2511]: E0213 15:26:47.077897    2511 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.79:6443/api/v1/nodes\": dial tcp 10.128.0.79:6443: connect: connection refused" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.120408 kubelet[2511]: I0213 15:26:47.120340    2511 topology_manager.go:215] "Topology Admit Handler" podUID="a37b3110cb724969c5690d811efd4eaa" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.128781 kubelet[2511]: I0213 15:26:47.128718    2511 topology_manager.go:215] "Topology Admit Handler" podUID="70af7aae056e2964b4a43f53e4123d3d" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.136625 kubelet[2511]: I0213 15:26:47.136097    2511 topology_manager.go:215] "Topology Admit Handler" podUID="4f7e2e4344628b6cc536721195964d08" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.176013 kubelet[2511]: E0213 15:26:47.175837    2511 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.79:6443: connect: connection refused" interval="400ms"
Feb 13 15:26:47.182378 kubelet[2511]: I0213 15:26:47.182322    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.182610 kubelet[2511]: I0213 15:26:47.182518    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.182610 kubelet[2511]: I0213 15:26:47.182583    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.182725 kubelet[2511]: I0213 15:26:47.182627    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.182725 kubelet[2511]: I0213 15:26:47.182667    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70af7aae056e2964b4a43f53e4123d3d-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"70af7aae056e2964b4a43f53e4123d3d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.182725 kubelet[2511]: I0213 15:26:47.182708    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70af7aae056e2964b4a43f53e4123d3d-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"70af7aae056e2964b4a43f53e4123d3d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.182871 kubelet[2511]: I0213 15:26:47.182749    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70af7aae056e2964b4a43f53e4123d3d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"70af7aae056e2964b4a43f53e4123d3d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.184064 kubelet[2511]: I0213 15:26:47.184028    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.184240 kubelet[2511]: I0213 15:26:47.184096    2511 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a37b3110cb724969c5690d811efd4eaa-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"a37b3110cb724969c5690d811efd4eaa\") " pod="kube-system/kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.285433 kubelet[2511]: I0213 15:26:47.285381    2511 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.286354 kubelet[2511]: E0213 15:26:47.286006    2511 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.79:6443/api/v1/nodes\": dial tcp 10.128.0.79:6443: connect: connection refused" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.445588 containerd[1622]: time="2025-02-13T15:26:47.445396380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,Uid:a37b3110cb724969c5690d811efd4eaa,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:47.456740 containerd[1622]: time="2025-02-13T15:26:47.456665208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,Uid:70af7aae056e2964b4a43f53e4123d3d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:47.461217 containerd[1622]: time="2025-02-13T15:26:47.461161946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,Uid:4f7e2e4344628b6cc536721195964d08,Namespace:kube-system,Attempt:0,}"
Feb 13 15:26:47.577583 kubelet[2511]: E0213 15:26:47.577519    2511 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.79:6443: connect: connection refused" interval="800ms"
Feb 13 15:26:47.697260 kubelet[2511]: I0213 15:26:47.697077    2511 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.697723 kubelet[2511]: E0213 15:26:47.697677    2511 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.79:6443/api/v1/nodes\": dial tcp 10.128.0.79:6443: connect: connection refused" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:47.846266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1673638644.mount: Deactivated successfully.
Feb 13 15:26:47.855441 containerd[1622]: time="2025-02-13T15:26:47.855355245Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:26:47.858275 containerd[1622]: time="2025-02-13T15:26:47.858190390Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:26:47.861275 containerd[1622]: time="2025-02-13T15:26:47.861193816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954"
Feb 13 15:26:47.862371 containerd[1622]: time="2025-02-13T15:26:47.862301060Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:26:47.865442 containerd[1622]: time="2025-02-13T15:26:47.865372481Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:26:47.867379 containerd[1622]: time="2025-02-13T15:26:47.867279495Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:26:47.868513 containerd[1622]: time="2025-02-13T15:26:47.868412199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:26:47.873182 containerd[1622]: time="2025-02-13T15:26:47.873041681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:26:47.875002 containerd[1622]: time="2025-02-13T15:26:47.874580324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 413.273611ms"
Feb 13 15:26:47.876107 containerd[1622]: time="2025-02-13T15:26:47.876049238Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 429.726751ms"
Feb 13 15:26:47.878042 containerd[1622]: time="2025-02-13T15:26:47.877976995Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 421.168798ms"
Feb 13 15:26:47.900696 kubelet[2511]: W0213 15:26:47.894008    2511 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:47.900696 kubelet[2511]: E0213 15:26:47.894135    2511 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.79:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:48.021011 kubelet[2511]: W0213 15:26:48.020817    2511 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:48.021011 kubelet[2511]: E0213 15:26:48.020874    2511 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.79:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:48.079084 containerd[1622]: time="2025-02-13T15:26:48.078260995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:48.079084 containerd[1622]: time="2025-02-13T15:26:48.078355227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:48.079084 containerd[1622]: time="2025-02-13T15:26:48.078382472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:48.081580 containerd[1622]: time="2025-02-13T15:26:48.081433667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:48.089976 containerd[1622]: time="2025-02-13T15:26:48.089632506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:48.089976 containerd[1622]: time="2025-02-13T15:26:48.089715694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:48.089976 containerd[1622]: time="2025-02-13T15:26:48.089740764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:48.089976 containerd[1622]: time="2025-02-13T15:26:48.089898727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:48.091465 containerd[1622]: time="2025-02-13T15:26:48.091332640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:26:48.094896 containerd[1622]: time="2025-02-13T15:26:48.091445451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:26:48.094896 containerd[1622]: time="2025-02-13T15:26:48.094594340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:48.094896 containerd[1622]: time="2025-02-13T15:26:48.094766367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:26:48.164533 kubelet[2511]: W0213 15:26:48.164299    2511 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:48.164747 kubelet[2511]: E0213 15:26:48.164557    2511 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.79:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:48.255392 containerd[1622]: time="2025-02-13T15:26:48.255340731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,Uid:4f7e2e4344628b6cc536721195964d08,Namespace:kube-system,Attempt:0,} returns sandbox id \"e969ddd134b45f67df8f138017c4d1d55dc5872b4e679d054f385f406fdd896c\""
Feb 13 15:26:48.260222 containerd[1622]: time="2025-02-13T15:26:48.260056398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,Uid:70af7aae056e2964b4a43f53e4123d3d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffc1ab5ac1c80a6c5adcebfa9533a9c94e95842f51f861a11b04f10726a667ca\""
Feb 13 15:26:48.260408 kubelet[2511]: E0213 15:26:48.260043    2511 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flat"
Feb 13 15:26:48.263992 kubelet[2511]: E0213 15:26:48.263571    2511 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-21291"
Feb 13 15:26:48.268065 containerd[1622]: time="2025-02-13T15:26:48.268019388Z" level=info msg="CreateContainer within sandbox \"e969ddd134b45f67df8f138017c4d1d55dc5872b4e679d054f385f406fdd896c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 13 15:26:48.268797 containerd[1622]: time="2025-02-13T15:26:48.268759520Z" level=info msg="CreateContainer within sandbox \"ffc1ab5ac1c80a6c5adcebfa9533a9c94e95842f51f861a11b04f10726a667ca\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 13 15:26:48.269578 containerd[1622]: time="2025-02-13T15:26:48.269542937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,Uid:a37b3110cb724969c5690d811efd4eaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"93322fe05c5fbf9a1510d24f0d5e77adede0b17349fc268272d78b828aba8095\""
Feb 13 15:26:48.272252 kubelet[2511]: E0213 15:26:48.272160    2511 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-21291"
Feb 13 15:26:48.275005 containerd[1622]: time="2025-02-13T15:26:48.274831550Z" level=info msg="CreateContainer within sandbox \"93322fe05c5fbf9a1510d24f0d5e77adede0b17349fc268272d78b828aba8095\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 13 15:26:48.297783 containerd[1622]: time="2025-02-13T15:26:48.297715654Z" level=info msg="CreateContainer within sandbox \"e969ddd134b45f67df8f138017c4d1d55dc5872b4e679d054f385f406fdd896c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"16271df2bf4a4840d9ca0b9ced0e25c93c730693052be49f6ba6b079aa77d229\""
Feb 13 15:26:48.298703 containerd[1622]: time="2025-02-13T15:26:48.298648908Z" level=info msg="StartContainer for \"16271df2bf4a4840d9ca0b9ced0e25c93c730693052be49f6ba6b079aa77d229\""
Feb 13 15:26:48.300072 containerd[1622]: time="2025-02-13T15:26:48.299886740Z" level=info msg="CreateContainer within sandbox \"ffc1ab5ac1c80a6c5adcebfa9533a9c94e95842f51f861a11b04f10726a667ca\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"819e104390ca7d82de61979ee8e47aee87dcb80e3f9e7cf7bd526f98a93c506c\""
Feb 13 15:26:48.301015 containerd[1622]: time="2025-02-13T15:26:48.300978674Z" level=info msg="StartContainer for \"819e104390ca7d82de61979ee8e47aee87dcb80e3f9e7cf7bd526f98a93c506c\""
Feb 13 15:26:48.313499 containerd[1622]: time="2025-02-13T15:26:48.313440798Z" level=info msg="CreateContainer within sandbox \"93322fe05c5fbf9a1510d24f0d5e77adede0b17349fc268272d78b828aba8095\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eb85d15d5e7741dd6001f8835414e5d0041fe295ab5ee69dd74ab67f571864cd\""
Feb 13 15:26:48.314962 containerd[1622]: time="2025-02-13T15:26:48.314828949Z" level=info msg="StartContainer for \"eb85d15d5e7741dd6001f8835414e5d0041fe295ab5ee69dd74ab67f571864cd\""
Feb 13 15:26:48.379178 kubelet[2511]: E0213 15:26:48.378597    2511 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.79:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.79:6443: connect: connection refused" interval="1.6s"
Feb 13 15:26:48.493521 containerd[1622]: time="2025-02-13T15:26:48.492195736Z" level=info msg="StartContainer for \"819e104390ca7d82de61979ee8e47aee87dcb80e3f9e7cf7bd526f98a93c506c\" returns successfully"
Feb 13 15:26:48.513180 kubelet[2511]: I0213 15:26:48.508187    2511 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:48.513180 kubelet[2511]: E0213 15:26:48.508684    2511 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.79:6443/api/v1/nodes\": dial tcp 10.128.0.79:6443: connect: connection refused" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:48.529454 containerd[1622]: time="2025-02-13T15:26:48.529211352Z" level=info msg="StartContainer for \"16271df2bf4a4840d9ca0b9ced0e25c93c730693052be49f6ba6b079aa77d229\" returns successfully"
Feb 13 15:26:48.543180 containerd[1622]: time="2025-02-13T15:26:48.543089380Z" level=info msg="StartContainer for \"eb85d15d5e7741dd6001f8835414e5d0041fe295ab5ee69dd74ab67f571864cd\" returns successfully"
Feb 13 15:26:48.603180 kubelet[2511]: W0213 15:26:48.603074    2511 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:48.604297 kubelet[2511]: E0213 15:26:48.604238    2511 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.79:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.79:6443: connect: connection refused
Feb 13 15:26:50.118339 kubelet[2511]: I0213 15:26:50.116137    2511 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:52.115572 kubelet[2511]: I0213 15:26:52.115409    2511 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:52.233382 kubelet[2511]: E0213 15:26:52.233276    2511 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s"
Feb 13 15:26:52.247184 kubelet[2511]: E0213 15:26:52.245717    2511 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal.1823ce0629454819  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,UID:ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 15:26:46.952675353 +0000 UTC m=+0.813053668,LastTimestamp:2025-02-13 15:26:46.952675353 +0000 UTC m=+0.813053668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal,}"
Feb 13 15:26:52.273632 kubelet[2511]: E0213 15:26:52.272988    2511 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:52.956562 kubelet[2511]: I0213 15:26:52.956446    2511 apiserver.go:52] "Watching apiserver"
Feb 13 15:26:52.977419 kubelet[2511]: I0213 15:26:52.977220    2511 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:26:54.240892 kubelet[2511]: W0213 15:26:54.240721    2511 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 15:26:54.822997 systemd[1]: Reloading requested from client PID 2789 ('systemctl') (unit session-7.scope)...
Feb 13 15:26:54.823026 systemd[1]: Reloading...
Feb 13 15:26:54.986360 zram_generator::config[2836]: No configuration found.
Feb 13 15:26:55.041994 kubelet[2511]: W0213 15:26:55.041947    2511 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 15:26:55.153621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:26:55.267928 systemd[1]: Reloading finished in 444 ms.
Feb 13 15:26:55.316234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:55.317178 kubelet[2511]: I0213 15:26:55.316422    2511 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:26:55.337008 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:26:55.337719 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:55.350696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:26:55.636486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:26:55.658509 (kubelet)[2887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:26:55.760450 kubelet[2887]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:26:55.760450 kubelet[2887]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:26:55.760450 kubelet[2887]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:26:55.760450 kubelet[2887]: I0213 15:26:55.760026    2887 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:26:55.770758 kubelet[2887]: I0213 15:26:55.770661    2887 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Feb 13 15:26:55.770758 kubelet[2887]: I0213 15:26:55.770716    2887 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:26:55.772724 kubelet[2887]: I0213 15:26:55.771309    2887 server.go:919] "Client rotation is on, will bootstrap in background"
Feb 13 15:26:55.774629 kubelet[2887]: I0213 15:26:55.774590    2887 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 15:26:55.778358 kubelet[2887]: I0213 15:26:55.778130    2887 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:26:55.803728 sudo[2902]:     root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 15:26:55.805174 sudo[2902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 15:26:55.811183 kubelet[2887]: I0213 15:26:55.810550    2887 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:26:55.811448 kubelet[2887]: I0213 15:26:55.811424    2887 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:26:55.811777 kubelet[2887]: I0213 15:26:55.811747    2887 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:26:55.811963 kubelet[2887]: I0213 15:26:55.811805    2887 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:26:55.811963 kubelet[2887]: I0213 15:26:55.811824    2887 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:26:55.811963 kubelet[2887]: I0213 15:26:55.811888    2887 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:26:55.812108 kubelet[2887]: I0213 15:26:55.812041    2887 kubelet.go:396] "Attempting to sync node with API server"
Feb 13 15:26:55.812108 kubelet[2887]: I0213 15:26:55.812067    2887 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:26:55.812108 kubelet[2887]: I0213 15:26:55.812110    2887 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:26:55.812361 kubelet[2887]: I0213 15:26:55.812138    2887 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:26:55.824270 kubelet[2887]: I0213 15:26:55.822203    2887 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:26:55.824270 kubelet[2887]: I0213 15:26:55.822526    2887 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:26:55.824270 kubelet[2887]: I0213 15:26:55.823842    2887 server.go:1256] "Started kubelet"
Feb 13 15:26:55.849909 kubelet[2887]: I0213 15:26:55.849818    2887 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:26:55.858269 kubelet[2887]: I0213 15:26:55.854425    2887 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:26:55.858269 kubelet[2887]: I0213 15:26:55.854926    2887 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:26:55.858269 kubelet[2887]: I0213 15:26:55.856253    2887 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:26:55.866728 kubelet[2887]: E0213 15:26:55.866686    2887 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:26:55.870201 kubelet[2887]: I0213 15:26:55.869647    2887 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:26:55.875133 kubelet[2887]: I0213 15:26:55.875083    2887 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:26:55.879570 kubelet[2887]: I0213 15:26:55.875351    2887 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:26:55.880584 kubelet[2887]: I0213 15:26:55.880535    2887 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:26:55.886177 kubelet[2887]: I0213 15:26:55.883880    2887 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:26:55.886177 kubelet[2887]: I0213 15:26:55.884055    2887 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:26:55.894258 kubelet[2887]: I0213 15:26:55.893936    2887 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:26:55.942189 kubelet[2887]: I0213 15:26:55.941848    2887 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:26:55.947209 kubelet[2887]: I0213 15:26:55.946247    2887 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:26:55.947209 kubelet[2887]: I0213 15:26:55.946341    2887 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:26:55.947209 kubelet[2887]: I0213 15:26:55.946383    2887 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:26:55.947209 kubelet[2887]: E0213 15:26:55.946472    2887 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:26:55.999360 kubelet[2887]: I0213 15:26:55.998252    2887 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.022256 kubelet[2887]: I0213 15:26:56.020467    2887 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.022256 kubelet[2887]: I0213 15:26:56.021676    2887 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.046589 kubelet[2887]: E0213 15:26:56.046525    2887 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Feb 13 15:26:56.127423 kubelet[2887]: I0213 15:26:56.127376    2887 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:26:56.127423 kubelet[2887]: I0213 15:26:56.127441    2887 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:26:56.128572 kubelet[2887]: I0213 15:26:56.127471    2887 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:26:56.128572 kubelet[2887]: I0213 15:26:56.127737    2887 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 15:26:56.128572 kubelet[2887]: I0213 15:26:56.127773    2887 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 15:26:56.128572 kubelet[2887]: I0213 15:26:56.127786    2887 policy_none.go:49] "None policy: Start"
Feb 13 15:26:56.130739 kubelet[2887]: I0213 15:26:56.129523    2887 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:26:56.130739 kubelet[2887]: I0213 15:26:56.129604    2887 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:26:56.130739 kubelet[2887]: I0213 15:26:56.129854    2887 state_mem.go:75] "Updated machine memory state"
Feb 13 15:26:56.133331 kubelet[2887]: I0213 15:26:56.133282    2887 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:26:56.135003 kubelet[2887]: I0213 15:26:56.134979    2887 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:26:56.247278 kubelet[2887]: I0213 15:26:56.246933    2887 topology_manager.go:215] "Topology Admit Handler" podUID="70af7aae056e2964b4a43f53e4123d3d" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.247278 kubelet[2887]: I0213 15:26:56.247094    2887 topology_manager.go:215] "Topology Admit Handler" podUID="4f7e2e4344628b6cc536721195964d08" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.252456 kubelet[2887]: I0213 15:26:56.250210    2887 topology_manager.go:215] "Topology Admit Handler" podUID="a37b3110cb724969c5690d811efd4eaa" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.262033 kubelet[2887]: W0213 15:26:56.261991    2887 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 15:26:56.262441 kubelet[2887]: E0213 15:26:56.262404    2887 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.264517 kubelet[2887]: W0213 15:26:56.264481    2887 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 15:26:56.267120 kubelet[2887]: W0213 15:26:56.267090    2887 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 15:26:56.267282 kubelet[2887]: E0213 15:26:56.267225    2887 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.282831 kubelet[2887]: I0213 15:26:56.282774    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/70af7aae056e2964b4a43f53e4123d3d-ca-certs\") pod \"kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"70af7aae056e2964b4a43f53e4123d3d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.282831 kubelet[2887]: I0213 15:26:56.282854    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/70af7aae056e2964b4a43f53e4123d3d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"70af7aae056e2964b4a43f53e4123d3d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.283130 kubelet[2887]: I0213 15:26:56.282893    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-ca-certs\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.283130 kubelet[2887]: I0213 15:26:56.282929    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a37b3110cb724969c5690d811efd4eaa-kubeconfig\") pod \"kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"a37b3110cb724969c5690d811efd4eaa\") " pod="kube-system/kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.283130 kubelet[2887]: I0213 15:26:56.282963    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/70af7aae056e2964b4a43f53e4123d3d-k8s-certs\") pod \"kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"70af7aae056e2964b4a43f53e4123d3d\") " pod="kube-system/kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.283130 kubelet[2887]: I0213 15:26:56.283028    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.283367 kubelet[2887]: I0213 15:26:56.283066    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.283367 kubelet[2887]: I0213 15:26:56.283105    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.283367 kubelet[2887]: I0213 15:26:56.283166    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f7e2e4344628b6cc536721195964d08-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" (UID: \"4f7e2e4344628b6cc536721195964d08\") " pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal"
Feb 13 15:26:56.745012 sudo[2902]: pam_unix(sudo:session): session closed for user root
Feb 13 15:26:56.823187 kubelet[2887]: I0213 15:26:56.820922    2887 apiserver.go:52] "Watching apiserver"
Feb 13 15:26:56.881623 kubelet[2887]: I0213 15:26:56.881489    2887 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:26:57.132188 kubelet[2887]: I0213 15:26:57.130953    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal" podStartSLOduration=2.130850169 podStartE2EDuration="2.130850169s" podCreationTimestamp="2025-02-13 15:26:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:57.116789075 +0000 UTC m=+1.448049505" watchObservedRunningTime="2025-02-13 15:26:57.130850169 +0000 UTC m=+1.462110601"
Feb 13 15:26:57.150604 kubelet[2887]: I0213 15:26:57.148703    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal" podStartSLOduration=1.148625957 podStartE2EDuration="1.148625957s" podCreationTimestamp="2025-02-13 15:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:57.13289 +0000 UTC m=+1.464150429" watchObservedRunningTime="2025-02-13 15:26:57.148625957 +0000 UTC m=+1.479886394"
Feb 13 15:26:59.218227 update_engine[1602]: I20250213 15:26:59.217305  1602 update_attempter.cc:509] Updating boot flags...
Feb 13 15:26:59.325907 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2953)
Feb 13 15:26:59.501205 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2952)
Feb 13 15:26:59.573941 sudo[1892]: pam_unix(sudo:session): session closed for user root
Feb 13 15:26:59.622239 sshd[1891]: Connection closed by 139.178.68.195 port 46686
Feb 13 15:26:59.620958 sshd-session[1888]: pam_unix(sshd:session): session closed for user core
Feb 13 15:26:59.629593 systemd[1]: sshd@6-10.128.0.79:22-139.178.68.195:46686.service: Deactivated successfully.
Feb 13 15:26:59.639857 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:26:59.643303 systemd-logind[1596]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:26:59.646315 systemd-logind[1596]: Removed session 7.
Feb 13 15:26:59.680317 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2952)
Feb 13 15:27:00.228597 kubelet[2887]: I0213 15:27:00.228536    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal" podStartSLOduration=6.228449968 podStartE2EDuration="6.228449968s" podCreationTimestamp="2025-02-13 15:26:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:57.152274415 +0000 UTC m=+1.483534851" watchObservedRunningTime="2025-02-13 15:27:00.228449968 +0000 UTC m=+4.559710389"
Feb 13 15:27:07.877239 systemd[1]: Started sshd@7-10.128.0.79:22-92.255.85.188:48360.service - OpenSSH per-connection server daemon (92.255.85.188:48360).
Feb 13 15:27:08.761922 kubelet[2887]: I0213 15:27:08.761721    2887 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 15:27:08.762836 containerd[1622]: time="2025-02-13T15:27:08.762534730Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:27:08.763624 kubelet[2887]: I0213 15:27:08.762912    2887 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 15:27:08.799349 sshd[2982]: Connection closed by authenticating user root 92.255.85.188 port 48360 [preauth]
Feb 13 15:27:08.807196 systemd[1]: sshd@7-10.128.0.79:22-92.255.85.188:48360.service: Deactivated successfully.
Feb 13 15:27:09.092178 kubelet[2887]: I0213 15:27:09.089037    2887 topology_manager.go:215] "Topology Admit Handler" podUID="87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66" podNamespace="kube-system" podName="kube-proxy-7265m"
Feb 13 15:27:09.132767 kubelet[2887]: I0213 15:27:09.130300    2887 topology_manager.go:215] "Topology Admit Handler" podUID="e66f400f-d3c2-4cb9-8d84-3b710327b783" podNamespace="kube-system" podName="cilium-rmxzv"
Feb 13 15:27:09.274727 kubelet[2887]: I0213 15:27:09.274653    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66-xtables-lock\") pod \"kube-proxy-7265m\" (UID: \"87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66\") " pod="kube-system/kube-proxy-7265m"
Feb 13 15:27:09.274975 kubelet[2887]: I0213 15:27:09.274750    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-etc-cni-netd\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.274975 kubelet[2887]: I0213 15:27:09.274785    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66-kube-proxy\") pod \"kube-proxy-7265m\" (UID: \"87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66\") " pod="kube-system/kube-proxy-7265m"
Feb 13 15:27:09.274975 kubelet[2887]: I0213 15:27:09.274820    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-bpf-maps\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.274975 kubelet[2887]: I0213 15:27:09.274863    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-hostproc\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.274975 kubelet[2887]: I0213 15:27:09.274904    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-host-proc-sys-net\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.274975 kubelet[2887]: I0213 15:27:09.274940    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggwwh\" (UniqueName: \"kubernetes.io/projected/87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66-kube-api-access-ggwwh\") pod \"kube-proxy-7265m\" (UID: \"87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66\") " pod="kube-system/kube-proxy-7265m"
Feb 13 15:27:09.275337 kubelet[2887]: I0213 15:27:09.274971    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-cgroup\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275337 kubelet[2887]: I0213 15:27:09.275008    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e66f400f-d3c2-4cb9-8d84-3b710327b783-clustermesh-secrets\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275337 kubelet[2887]: I0213 15:27:09.275045    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-config-path\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275337 kubelet[2887]: I0213 15:27:09.275087    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs96k\" (UniqueName: \"kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-kube-api-access-vs96k\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275337 kubelet[2887]: I0213 15:27:09.275138    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cni-path\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275337 kubelet[2887]: I0213 15:27:09.275200    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-run\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275738 kubelet[2887]: I0213 15:27:09.275239    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-xtables-lock\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275738 kubelet[2887]: I0213 15:27:09.275276    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-host-proc-sys-kernel\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275738 kubelet[2887]: I0213 15:27:09.275314    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66-lib-modules\") pod \"kube-proxy-7265m\" (UID: \"87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66\") " pod="kube-system/kube-proxy-7265m"
Feb 13 15:27:09.275738 kubelet[2887]: I0213 15:27:09.275351    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-lib-modules\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.275738 kubelet[2887]: I0213 15:27:09.275402    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-hubble-tls\") pod \"cilium-rmxzv\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") " pod="kube-system/cilium-rmxzv"
Feb 13 15:27:09.414317 kubelet[2887]: E0213 15:27:09.412733    2887 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 15:27:09.414317 kubelet[2887]: E0213 15:27:09.412779    2887 projected.go:200] Error preparing data for projected volume kube-api-access-vs96k for pod kube-system/cilium-rmxzv: configmap "kube-root-ca.crt" not found
Feb 13 15:27:09.414317 kubelet[2887]: E0213 15:27:09.412883    2887 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-kube-api-access-vs96k podName:e66f400f-d3c2-4cb9-8d84-3b710327b783 nodeName:}" failed. No retries permitted until 2025-02-13 15:27:09.9128375 +0000 UTC m=+14.244097923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vs96k" (UniqueName: "kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-kube-api-access-vs96k") pod "cilium-rmxzv" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783") : configmap "kube-root-ca.crt" not found
Feb 13 15:27:09.414668 kubelet[2887]: E0213 15:27:09.414444    2887 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Feb 13 15:27:09.414668 kubelet[2887]: E0213 15:27:09.414475    2887 projected.go:200] Error preparing data for projected volume kube-api-access-ggwwh for pod kube-system/kube-proxy-7265m: configmap "kube-root-ca.crt" not found
Feb 13 15:27:09.414668 kubelet[2887]: E0213 15:27:09.414547    2887 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66-kube-api-access-ggwwh podName:87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66 nodeName:}" failed. No retries permitted until 2025-02-13 15:27:09.914518512 +0000 UTC m=+14.245778942 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ggwwh" (UniqueName: "kubernetes.io/projected/87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66-kube-api-access-ggwwh") pod "kube-proxy-7265m" (UID: "87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66") : configmap "kube-root-ca.crt" not found
Feb 13 15:27:09.483993 kubelet[2887]: I0213 15:27:09.482749    2887 topology_manager.go:215] "Topology Admit Handler" podUID="33e23bef-aac9-40f6-9eb2-43bef12a4a18" podNamespace="kube-system" podName="cilium-operator-5cc964979-68fg2"
Feb 13 15:27:09.578407 kubelet[2887]: I0213 15:27:09.578341    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33e23bef-aac9-40f6-9eb2-43bef12a4a18-cilium-config-path\") pod \"cilium-operator-5cc964979-68fg2\" (UID: \"33e23bef-aac9-40f6-9eb2-43bef12a4a18\") " pod="kube-system/cilium-operator-5cc964979-68fg2"
Feb 13 15:27:09.578655 kubelet[2887]: I0213 15:27:09.578432    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtwh6\" (UniqueName: \"kubernetes.io/projected/33e23bef-aac9-40f6-9eb2-43bef12a4a18-kube-api-access-qtwh6\") pod \"cilium-operator-5cc964979-68fg2\" (UID: \"33e23bef-aac9-40f6-9eb2-43bef12a4a18\") " pod="kube-system/cilium-operator-5cc964979-68fg2"
Feb 13 15:27:09.797665 containerd[1622]: time="2025-02-13T15:27:09.797456502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-68fg2,Uid:33e23bef-aac9-40f6-9eb2-43bef12a4a18,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:10.011531 containerd[1622]: time="2025-02-13T15:27:10.011415942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7265m,Uid:87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:10.061783 containerd[1622]: time="2025-02-13T15:27:10.061708289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmxzv,Uid:e66f400f-d3c2-4cb9-8d84-3b710327b783,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:10.610564 containerd[1622]: time="2025-02-13T15:27:10.610204202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:10.610564 containerd[1622]: time="2025-02-13T15:27:10.610280379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:10.610564 containerd[1622]: time="2025-02-13T15:27:10.610301858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:10.610564 containerd[1622]: time="2025-02-13T15:27:10.610433067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:10.611135 containerd[1622]: time="2025-02-13T15:27:10.609306262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:10.611135 containerd[1622]: time="2025-02-13T15:27:10.609423530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:10.611135 containerd[1622]: time="2025-02-13T15:27:10.609453411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:10.611135 containerd[1622]: time="2025-02-13T15:27:10.609646651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:10.652847 containerd[1622]: time="2025-02-13T15:27:10.651099433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:10.652847 containerd[1622]: time="2025-02-13T15:27:10.651230752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:10.652847 containerd[1622]: time="2025-02-13T15:27:10.651261923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:10.652847 containerd[1622]: time="2025-02-13T15:27:10.651416480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:10.790108 containerd[1622]: time="2025-02-13T15:27:10.789876859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rmxzv,Uid:e66f400f-d3c2-4cb9-8d84-3b710327b783,Namespace:kube-system,Attempt:0,} returns sandbox id \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\""
Feb 13 15:27:10.800618 containerd[1622]: time="2025-02-13T15:27:10.800338846Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:27:10.803981 containerd[1622]: time="2025-02-13T15:27:10.803754564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-68fg2,Uid:33e23bef-aac9-40f6-9eb2-43bef12a4a18,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\""
Feb 13 15:27:10.811931 containerd[1622]: time="2025-02-13T15:27:10.811774993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7265m,Uid:87559cc3-b76a-4aa8-a2e2-7dbf0dc07d66,Namespace:kube-system,Attempt:0,} returns sandbox id \"df2e6e3ff9ad6246d6e0b13394714bc228bc1ee4d6ffeeb75bb26bcf2d6cfb9b\""
Feb 13 15:27:10.818590 containerd[1622]: time="2025-02-13T15:27:10.818526559Z" level=info msg="CreateContainer within sandbox \"df2e6e3ff9ad6246d6e0b13394714bc228bc1ee4d6ffeeb75bb26bcf2d6cfb9b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:27:10.844451 containerd[1622]: time="2025-02-13T15:27:10.844369808Z" level=info msg="CreateContainer within sandbox \"df2e6e3ff9ad6246d6e0b13394714bc228bc1ee4d6ffeeb75bb26bcf2d6cfb9b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07477d77f28bb0717fcdad100c3d06c8d516e2af4ed9cefc056c769521e80e75\""
Feb 13 15:27:10.845745 containerd[1622]: time="2025-02-13T15:27:10.845590591Z" level=info msg="StartContainer for \"07477d77f28bb0717fcdad100c3d06c8d516e2af4ed9cefc056c769521e80e75\""
Feb 13 15:27:10.947107 containerd[1622]: time="2025-02-13T15:27:10.946766758Z" level=info msg="StartContainer for \"07477d77f28bb0717fcdad100c3d06c8d516e2af4ed9cefc056c769521e80e75\" returns successfully"
Feb 13 15:27:15.970434 kubelet[2887]: I0213 15:27:15.970363    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-7265m" podStartSLOduration=6.97028741 podStartE2EDuration="6.97028741s" podCreationTimestamp="2025-02-13 15:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:11.104048481 +0000 UTC m=+15.435308918" watchObservedRunningTime="2025-02-13 15:27:15.97028741 +0000 UTC m=+20.301547845"
Feb 13 15:27:15.983064 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628496646.mount: Deactivated successfully.
Feb 13 15:27:18.924165 containerd[1622]: time="2025-02-13T15:27:18.924038519Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:27:18.926768 containerd[1622]: time="2025-02-13T15:27:18.926610971Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Feb 13 15:27:18.928177 containerd[1622]: time="2025-02-13T15:27:18.928047027Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:27:18.933091 containerd[1622]: time="2025-02-13T15:27:18.931382262Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.130973459s"
Feb 13 15:27:18.933091 containerd[1622]: time="2025-02-13T15:27:18.931445686Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 15:27:18.934291 containerd[1622]: time="2025-02-13T15:27:18.934251962Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:27:18.936567 containerd[1622]: time="2025-02-13T15:27:18.936499007Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:27:18.960762 containerd[1622]: time="2025-02-13T15:27:18.960690051Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23\""
Feb 13 15:27:18.963199 containerd[1622]: time="2025-02-13T15:27:18.961855715Z" level=info msg="StartContainer for \"f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23\""
Feb 13 15:27:19.068957 containerd[1622]: time="2025-02-13T15:27:19.068874660Z" level=info msg="StartContainer for \"f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23\" returns successfully"
Feb 13 15:27:19.137988 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23-rootfs.mount: Deactivated successfully.
Feb 13 15:27:21.227046 containerd[1622]: time="2025-02-13T15:27:21.226954859Z" level=info msg="shim disconnected" id=f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23 namespace=k8s.io
Feb 13 15:27:21.228124 containerd[1622]: time="2025-02-13T15:27:21.227080605Z" level=warning msg="cleaning up after shim disconnected" id=f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23 namespace=k8s.io
Feb 13 15:27:21.228124 containerd[1622]: time="2025-02-13T15:27:21.227101079Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:21.824915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3290054527.mount: Deactivated successfully.
Feb 13 15:27:22.146925 containerd[1622]: time="2025-02-13T15:27:22.146790306Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:27:22.184852 containerd[1622]: time="2025-02-13T15:27:22.184796438Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d\""
Feb 13 15:27:22.188329 containerd[1622]: time="2025-02-13T15:27:22.188272432Z" level=info msg="StartContainer for \"8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d\""
Feb 13 15:27:22.321366 containerd[1622]: time="2025-02-13T15:27:22.320055294Z" level=info msg="StartContainer for \"8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d\" returns successfully"
Feb 13 15:27:22.332255 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:27:22.332816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:27:22.332936 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:27:22.345630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:27:22.391477 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:27:22.499935 containerd[1622]: time="2025-02-13T15:27:22.499496381Z" level=info msg="shim disconnected" id=8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d namespace=k8s.io
Feb 13 15:27:22.499935 containerd[1622]: time="2025-02-13T15:27:22.499600265Z" level=warning msg="cleaning up after shim disconnected" id=8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d namespace=k8s.io
Feb 13 15:27:22.499935 containerd[1622]: time="2025-02-13T15:27:22.499615907Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:22.756554 containerd[1622]: time="2025-02-13T15:27:22.756198940Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:27:22.757783 containerd[1622]: time="2025-02-13T15:27:22.757687896Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Feb 13 15:27:22.759182 containerd[1622]: time="2025-02-13T15:27:22.759051484Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:27:22.761909 containerd[1622]: time="2025-02-13T15:27:22.761670637Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.827151245s"
Feb 13 15:27:22.761909 containerd[1622]: time="2025-02-13T15:27:22.761746478Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 15:27:22.765740 containerd[1622]: time="2025-02-13T15:27:22.765550673Z" level=info msg="CreateContainer within sandbox \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:27:22.778770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d-rootfs.mount: Deactivated successfully.
Feb 13 15:27:22.800531 containerd[1622]: time="2025-02-13T15:27:22.800425926Z" level=info msg="CreateContainer within sandbox \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\""
Feb 13 15:27:22.803682 containerd[1622]: time="2025-02-13T15:27:22.802707501Z" level=info msg="StartContainer for \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\""
Feb 13 15:27:22.914192 containerd[1622]: time="2025-02-13T15:27:22.913400297Z" level=info msg="StartContainer for \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\" returns successfully"
Feb 13 15:27:23.158029 containerd[1622]: time="2025-02-13T15:27:23.157577693Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:27:23.193878 containerd[1622]: time="2025-02-13T15:27:23.193656958Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2\""
Feb 13 15:27:23.197187 containerd[1622]: time="2025-02-13T15:27:23.195075561Z" level=info msg="StartContainer for \"6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2\""
Feb 13 15:27:23.394238 containerd[1622]: time="2025-02-13T15:27:23.394170354Z" level=info msg="StartContainer for \"6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2\" returns successfully"
Feb 13 15:27:23.573234 containerd[1622]: time="2025-02-13T15:27:23.572704016Z" level=info msg="shim disconnected" id=6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2 namespace=k8s.io
Feb 13 15:27:23.573234 containerd[1622]: time="2025-02-13T15:27:23.572790708Z" level=warning msg="cleaning up after shim disconnected" id=6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2 namespace=k8s.io
Feb 13 15:27:23.573234 containerd[1622]: time="2025-02-13T15:27:23.572806209Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:24.194351 containerd[1622]: time="2025-02-13T15:27:24.192587733Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:27:24.244700 containerd[1622]: time="2025-02-13T15:27:24.244480480Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db\""
Feb 13 15:27:24.248921 containerd[1622]: time="2025-02-13T15:27:24.248855598Z" level=info msg="StartContainer for \"bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db\""
Feb 13 15:27:24.397548 kubelet[2887]: I0213 15:27:24.395581    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-68fg2" podStartSLOduration=3.440480185 podStartE2EDuration="15.395497791s" podCreationTimestamp="2025-02-13 15:27:09 +0000 UTC" firstStartedPulling="2025-02-13 15:27:10.807391933 +0000 UTC m=+15.138652359" lastFinishedPulling="2025-02-13 15:27:22.762409538 +0000 UTC m=+27.093669965" observedRunningTime="2025-02-13 15:27:23.407290164 +0000 UTC m=+27.738550599" watchObservedRunningTime="2025-02-13 15:27:24.395497791 +0000 UTC m=+28.726758232"
Feb 13 15:27:24.531308 containerd[1622]: time="2025-02-13T15:27:24.530573478Z" level=info msg="StartContainer for \"bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db\" returns successfully"
Feb 13 15:27:24.577030 containerd[1622]: time="2025-02-13T15:27:24.576911629Z" level=info msg="shim disconnected" id=bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db namespace=k8s.io
Feb 13 15:27:24.577030 containerd[1622]: time="2025-02-13T15:27:24.577021390Z" level=warning msg="cleaning up after shim disconnected" id=bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db namespace=k8s.io
Feb 13 15:27:24.577030 containerd[1622]: time="2025-02-13T15:27:24.577037403Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:24.778138 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db-rootfs.mount: Deactivated successfully.
Feb 13 15:27:25.192023 containerd[1622]: time="2025-02-13T15:27:25.191951212Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:27:25.242899 containerd[1622]: time="2025-02-13T15:27:25.242821343Z" level=info msg="CreateContainer within sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\""
Feb 13 15:27:25.246208 containerd[1622]: time="2025-02-13T15:27:25.245460379Z" level=info msg="StartContainer for \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\""
Feb 13 15:27:25.475782 containerd[1622]: time="2025-02-13T15:27:25.475413114Z" level=info msg="StartContainer for \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\" returns successfully"
Feb 13 15:27:25.666855 kubelet[2887]: I0213 15:27:25.666803    2887 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:27:25.717697 kubelet[2887]: I0213 15:27:25.717616    2887 topology_manager.go:215] "Topology Admit Handler" podUID="920ea170-0b70-47ae-b4b5-9b57d9f2c0ba" podNamespace="kube-system" podName="coredns-76f75df574-9bk8q"
Feb 13 15:27:25.723852 kubelet[2887]: I0213 15:27:25.723702    2887 topology_manager.go:215] "Topology Admit Handler" podUID="0b3fb87f-341f-4db1-86b9-ecc85b7bdc2d" podNamespace="kube-system" podName="coredns-76f75df574-j4fzn"
Feb 13 15:27:25.779647 systemd[1]: run-containerd-runc-k8s.io-c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90-runc.W4GpWL.mount: Deactivated successfully.
Feb 13 15:27:25.907230 kubelet[2887]: I0213 15:27:25.906672    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b3fb87f-341f-4db1-86b9-ecc85b7bdc2d-config-volume\") pod \"coredns-76f75df574-j4fzn\" (UID: \"0b3fb87f-341f-4db1-86b9-ecc85b7bdc2d\") " pod="kube-system/coredns-76f75df574-j4fzn"
Feb 13 15:27:25.908282 kubelet[2887]: I0213 15:27:25.908136    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7cgn\" (UniqueName: \"kubernetes.io/projected/920ea170-0b70-47ae-b4b5-9b57d9f2c0ba-kube-api-access-d7cgn\") pod \"coredns-76f75df574-9bk8q\" (UID: \"920ea170-0b70-47ae-b4b5-9b57d9f2c0ba\") " pod="kube-system/coredns-76f75df574-9bk8q"
Feb 13 15:27:25.908844 kubelet[2887]: I0213 15:27:25.908524    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920ea170-0b70-47ae-b4b5-9b57d9f2c0ba-config-volume\") pod \"coredns-76f75df574-9bk8q\" (UID: \"920ea170-0b70-47ae-b4b5-9b57d9f2c0ba\") " pod="kube-system/coredns-76f75df574-9bk8q"
Feb 13 15:27:25.909198 kubelet[2887]: I0213 15:27:25.909017    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9lxn\" (UniqueName: \"kubernetes.io/projected/0b3fb87f-341f-4db1-86b9-ecc85b7bdc2d-kube-api-access-z9lxn\") pod \"coredns-76f75df574-j4fzn\" (UID: \"0b3fb87f-341f-4db1-86b9-ecc85b7bdc2d\") " pod="kube-system/coredns-76f75df574-j4fzn"
Feb 13 15:27:26.054022 containerd[1622]: time="2025-02-13T15:27:26.053947844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9bk8q,Uid:920ea170-0b70-47ae-b4b5-9b57d9f2c0ba,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:26.060958 containerd[1622]: time="2025-02-13T15:27:26.059392566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j4fzn,Uid:0b3fb87f-341f-4db1-86b9-ecc85b7bdc2d,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:28.019030 systemd-networkd[1216]: cilium_host: Link UP
Feb 13 15:27:28.020678 systemd-networkd[1216]: cilium_net: Link UP
Feb 13 15:27:28.025352 systemd-networkd[1216]: cilium_net: Gained carrier
Feb 13 15:27:28.025749 systemd-networkd[1216]: cilium_host: Gained carrier
Feb 13 15:27:28.189349 systemd-networkd[1216]: cilium_vxlan: Link UP
Feb 13 15:27:28.189363 systemd-networkd[1216]: cilium_vxlan: Gained carrier
Feb 13 15:27:28.497180 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:27:28.606332 systemd-networkd[1216]: cilium_host: Gained IPv6LL
Feb 13 15:27:28.798881 systemd-networkd[1216]: cilium_net: Gained IPv6LL
Feb 13 15:27:29.505373 systemd-networkd[1216]: lxc_health: Link UP
Feb 13 15:27:29.520125 systemd-networkd[1216]: lxc_health: Gained carrier
Feb 13 15:27:29.568313 systemd-networkd[1216]: cilium_vxlan: Gained IPv6LL
Feb 13 15:27:30.099515 kubelet[2887]: I0213 15:27:30.096797    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rmxzv" podStartSLOduration=12.962166384 podStartE2EDuration="21.096719643s" podCreationTimestamp="2025-02-13 15:27:09 +0000 UTC" firstStartedPulling="2025-02-13 15:27:10.797476023 +0000 UTC m=+15.128736450" lastFinishedPulling="2025-02-13 15:27:18.932029289 +0000 UTC m=+23.263289709" observedRunningTime="2025-02-13 15:27:26.264066207 +0000 UTC m=+30.595326641" watchObservedRunningTime="2025-02-13 15:27:30.096719643 +0000 UTC m=+34.427980074"
Feb 13 15:27:30.183944 systemd-networkd[1216]: lxc15c98bf13de2: Link UP
Feb 13 15:27:30.195185 kernel: eth0: renamed from tmp57f74
Feb 13 15:27:30.203971 systemd-networkd[1216]: lxc15c98bf13de2: Gained carrier
Feb 13 15:27:30.275511 systemd-networkd[1216]: lxc71817ece0e1f: Link UP
Feb 13 15:27:30.295699 kernel: eth0: renamed from tmpf96a9
Feb 13 15:27:30.313616 systemd-networkd[1216]: lxc71817ece0e1f: Gained carrier
Feb 13 15:27:31.360180 systemd-networkd[1216]: lxc_health: Gained IPv6LL
Feb 13 15:27:31.807545 systemd-networkd[1216]: lxc71817ece0e1f: Gained IPv6LL
Feb 13 15:27:31.935271 systemd-networkd[1216]: lxc15c98bf13de2: Gained IPv6LL
Feb 13 15:27:34.902712 ntpd[1567]: Listen normally on 6 cilium_host 192.168.0.193:123
Feb 13 15:27:34.902866 ntpd[1567]: Listen normally on 7 cilium_net [fe80::2811:4cff:fe24:ff9a%4]:123
Feb 13 15:27:34.902967 ntpd[1567]: Listen normally on 8 cilium_host [fe80::78d9:64ff:fede:e8d0%5]:123
Feb 13 15:27:34.903037 ntpd[1567]: Listen normally on 9 cilium_vxlan [fe80::c0a7:20ff:fe2f:1f05%6]:123
Feb 13 15:27:34.903114 ntpd[1567]: Listen normally on 10 lxc_health [fe80::3880:aeff:fe99:a5cc%8]:123
Feb 13 15:27:34.903201 ntpd[1567]: Listen normally on 11 lxc15c98bf13de2 [fe80::b0ca:d3ff:fe5b:8c2%10]:123
Feb 13 15:27:34.903265 ntpd[1567]: Listen normally on 12 lxc71817ece0e1f [fe80::1c65:b5ff:fe7d:6bb3%12]:123
Feb 13 15:27:36.045956 containerd[1622]: time="2025-02-13T15:27:36.042446366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:36.045956 containerd[1622]: time="2025-02-13T15:27:36.042572255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:36.045956 containerd[1622]: time="2025-02-13T15:27:36.042604149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:36.045956 containerd[1622]: time="2025-02-13T15:27:36.044322518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:36.171287 containerd[1622]: time="2025-02-13T15:27:36.170525972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:36.171287 containerd[1622]: time="2025-02-13T15:27:36.170907462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:36.171287 containerd[1622]: time="2025-02-13T15:27:36.170940845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:36.171287 containerd[1622]: time="2025-02-13T15:27:36.171111326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:36.291783 containerd[1622]: time="2025-02-13T15:27:36.291705631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-9bk8q,Uid:920ea170-0b70-47ae-b4b5-9b57d9f2c0ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"f96a9be081ebe3a4d2550144efdc150a99c8a27e37f5967c9c7ec62b5d0de9b2\""
Feb 13 15:27:36.307904 containerd[1622]: time="2025-02-13T15:27:36.307828719Z" level=info msg="CreateContainer within sandbox \"f96a9be081ebe3a4d2550144efdc150a99c8a27e37f5967c9c7ec62b5d0de9b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:27:36.348561 containerd[1622]: time="2025-02-13T15:27:36.348473031Z" level=info msg="CreateContainer within sandbox \"f96a9be081ebe3a4d2550144efdc150a99c8a27e37f5967c9c7ec62b5d0de9b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"41d6f3e048ca5ad93e7a45ebd7e9e64ddea6bd7edafb4bc1c87340b35934b570\""
Feb 13 15:27:36.353464 containerd[1622]: time="2025-02-13T15:27:36.353391929Z" level=info msg="StartContainer for \"41d6f3e048ca5ad93e7a45ebd7e9e64ddea6bd7edafb4bc1c87340b35934b570\""
Feb 13 15:27:36.425883 containerd[1622]: time="2025-02-13T15:27:36.425713277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j4fzn,Uid:0b3fb87f-341f-4db1-86b9-ecc85b7bdc2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"57f746d18fde95c22aca3f1ce80eba0cf535b9194474bb8ab87e54793104cf18\""
Feb 13 15:27:36.440004 containerd[1622]: time="2025-02-13T15:27:36.439755965Z" level=info msg="CreateContainer within sandbox \"57f746d18fde95c22aca3f1ce80eba0cf535b9194474bb8ab87e54793104cf18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 15:27:36.475494 containerd[1622]: time="2025-02-13T15:27:36.475079713Z" level=info msg="CreateContainer within sandbox \"57f746d18fde95c22aca3f1ce80eba0cf535b9194474bb8ab87e54793104cf18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c81a2f15e17a38a2252f60ad0cba7f1122cd4b759372b90580c8d0f896d84269\""
Feb 13 15:27:36.479277 containerd[1622]: time="2025-02-13T15:27:36.479200723Z" level=info msg="StartContainer for \"c81a2f15e17a38a2252f60ad0cba7f1122cd4b759372b90580c8d0f896d84269\""
Feb 13 15:27:36.541957 containerd[1622]: time="2025-02-13T15:27:36.541281889Z" level=info msg="StartContainer for \"41d6f3e048ca5ad93e7a45ebd7e9e64ddea6bd7edafb4bc1c87340b35934b570\" returns successfully"
Feb 13 15:27:36.636569 containerd[1622]: time="2025-02-13T15:27:36.636300225Z" level=info msg="StartContainer for \"c81a2f15e17a38a2252f60ad0cba7f1122cd4b759372b90580c8d0f896d84269\" returns successfully"
Feb 13 15:27:37.345164 kubelet[2887]: I0213 15:27:37.344969    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-9bk8q" podStartSLOduration=28.344689738 podStartE2EDuration="28.344689738s" podCreationTimestamp="2025-02-13 15:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:37.313501554 +0000 UTC m=+41.644761986" watchObservedRunningTime="2025-02-13 15:27:37.344689738 +0000 UTC m=+41.675950175"
Feb 13 15:27:38.027419 kubelet[2887]: I0213 15:27:38.027124    2887 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:27:38.053270 kubelet[2887]: I0213 15:27:38.053215    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-j4fzn" podStartSLOduration=29.052093373 podStartE2EDuration="29.052093373s" podCreationTimestamp="2025-02-13 15:27:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:37.370686974 +0000 UTC m=+41.701947413" watchObservedRunningTime="2025-02-13 15:27:38.052093373 +0000 UTC m=+42.383353805"
Feb 13 15:27:55.899969 systemd[1]: Started sshd@8-10.128.0.79:22-139.178.68.195:41158.service - OpenSSH per-connection server daemon (139.178.68.195:41158).
Feb 13 15:27:56.208111 sshd[4282]: Accepted publickey for core from 139.178.68.195 port 41158 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:27:56.210625 sshd-session[4282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:27:56.217455 systemd-logind[1596]: New session 8 of user core.
Feb 13 15:27:56.225030 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:27:56.549998 sshd[4287]: Connection closed by 139.178.68.195 port 41158
Feb 13 15:27:56.551037 sshd-session[4282]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:56.559075 systemd-logind[1596]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:27:56.560883 systemd[1]: sshd@8-10.128.0.79:22-139.178.68.195:41158.service: Deactivated successfully.
Feb 13 15:27:56.570628 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:27:56.574426 systemd-logind[1596]: Removed session 8.
Feb 13 15:28:01.603584 systemd[1]: Started sshd@9-10.128.0.79:22-139.178.68.195:56118.service - OpenSSH per-connection server daemon (139.178.68.195:56118).
Feb 13 15:28:01.906122 sshd[4299]: Accepted publickey for core from 139.178.68.195 port 56118 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:01.908440 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:01.914946 systemd-logind[1596]: New session 9 of user core.
Feb 13 15:28:01.927730 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:28:02.213636 sshd[4302]: Connection closed by 139.178.68.195 port 56118
Feb 13 15:28:02.214934 sshd-session[4299]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:02.219467 systemd[1]: sshd@9-10.128.0.79:22-139.178.68.195:56118.service: Deactivated successfully.
Feb 13 15:28:02.225675 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:28:02.227753 systemd-logind[1596]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:28:02.230630 systemd-logind[1596]: Removed session 9.
Feb 13 15:28:07.262733 systemd[1]: Started sshd@10-10.128.0.79:22-139.178.68.195:35982.service - OpenSSH per-connection server daemon (139.178.68.195:35982).
Feb 13 15:28:07.567954 sshd[4313]: Accepted publickey for core from 139.178.68.195 port 35982 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:07.570052 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:07.576755 systemd-logind[1596]: New session 10 of user core.
Feb 13 15:28:07.582064 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 15:28:07.874968 sshd[4316]: Connection closed by 139.178.68.195 port 35982
Feb 13 15:28:07.876506 sshd-session[4313]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:07.884958 systemd[1]: sshd@10-10.128.0.79:22-139.178.68.195:35982.service: Deactivated successfully.
Feb 13 15:28:07.890964 systemd-logind[1596]: Session 10 logged out. Waiting for processes to exit.
Feb 13 15:28:07.891912 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 15:28:07.894796 systemd-logind[1596]: Removed session 10.
Feb 13 15:28:12.928704 systemd[1]: Started sshd@11-10.128.0.79:22-139.178.68.195:35986.service - OpenSSH per-connection server daemon (139.178.68.195:35986).
Feb 13 15:28:13.242288 sshd[4329]: Accepted publickey for core from 139.178.68.195 port 35986 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:13.244463 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:13.251243 systemd-logind[1596]: New session 11 of user core.
Feb 13 15:28:13.257749 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 15:28:13.556857 sshd[4332]: Connection closed by 139.178.68.195 port 35986
Feb 13 15:28:13.557728 sshd-session[4329]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:13.566009 systemd[1]: sshd@11-10.128.0.79:22-139.178.68.195:35986.service: Deactivated successfully.
Feb 13 15:28:13.572500 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 15:28:13.573877 systemd-logind[1596]: Session 11 logged out. Waiting for processes to exit.
Feb 13 15:28:13.575698 systemd-logind[1596]: Removed session 11.
Feb 13 15:28:13.605296 systemd[1]: Started sshd@12-10.128.0.79:22-139.178.68.195:36002.service - OpenSSH per-connection server daemon (139.178.68.195:36002).
Feb 13 15:28:13.908773 sshd[4344]: Accepted publickey for core from 139.178.68.195 port 36002 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:13.910708 sshd-session[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:13.917451 systemd-logind[1596]: New session 12 of user core.
Feb 13 15:28:13.923641 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 15:28:14.245088 sshd[4347]: Connection closed by 139.178.68.195 port 36002
Feb 13 15:28:14.246597 sshd-session[4344]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:14.251419 systemd[1]: sshd@12-10.128.0.79:22-139.178.68.195:36002.service: Deactivated successfully.
Feb 13 15:28:14.259242 systemd-logind[1596]: Session 12 logged out. Waiting for processes to exit.
Feb 13 15:28:14.260877 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 15:28:14.262431 systemd-logind[1596]: Removed session 12.
Feb 13 15:28:14.296603 systemd[1]: Started sshd@13-10.128.0.79:22-139.178.68.195:36006.service - OpenSSH per-connection server daemon (139.178.68.195:36006).
Feb 13 15:28:14.596605 sshd[4355]: Accepted publickey for core from 139.178.68.195 port 36006 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:14.598535 sshd-session[4355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:14.604619 systemd-logind[1596]: New session 13 of user core.
Feb 13 15:28:14.610567 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 15:28:14.927140 sshd[4358]: Connection closed by 139.178.68.195 port 36006
Feb 13 15:28:14.927663 sshd-session[4355]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:14.936580 systemd[1]: sshd@13-10.128.0.79:22-139.178.68.195:36006.service: Deactivated successfully.
Feb 13 15:28:14.942959 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 15:28:14.944100 systemd-logind[1596]: Session 13 logged out. Waiting for processes to exit.
Feb 13 15:28:14.945787 systemd-logind[1596]: Removed session 13.
Feb 13 15:28:19.979059 systemd[1]: Started sshd@14-10.128.0.79:22-139.178.68.195:59926.service - OpenSSH per-connection server daemon (139.178.68.195:59926).
Feb 13 15:28:20.287992 sshd[4370]: Accepted publickey for core from 139.178.68.195 port 59926 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:20.290403 sshd-session[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:20.297356 systemd-logind[1596]: New session 14 of user core.
Feb 13 15:28:20.303731 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 15:28:20.596112 sshd[4373]: Connection closed by 139.178.68.195 port 59926
Feb 13 15:28:20.596744 sshd-session[4370]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:20.602118 systemd[1]: sshd@14-10.128.0.79:22-139.178.68.195:59926.service: Deactivated successfully.
Feb 13 15:28:20.610781 systemd-logind[1596]: Session 14 logged out. Waiting for processes to exit.
Feb 13 15:28:20.612005 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 15:28:20.613849 systemd-logind[1596]: Removed session 14.
Feb 13 15:28:25.643572 systemd[1]: Started sshd@15-10.128.0.79:22-139.178.68.195:59942.service - OpenSSH per-connection server daemon (139.178.68.195:59942).
Feb 13 15:28:25.943193 sshd[4384]: Accepted publickey for core from 139.178.68.195 port 59942 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:25.945700 sshd-session[4384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:25.954130 systemd-logind[1596]: New session 15 of user core.
Feb 13 15:28:25.960542 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 15:28:26.238942 sshd[4387]: Connection closed by 139.178.68.195 port 59942
Feb 13 15:28:26.240381 sshd-session[4384]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:26.245009 systemd[1]: sshd@15-10.128.0.79:22-139.178.68.195:59942.service: Deactivated successfully.
Feb 13 15:28:26.252950 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 15:28:26.254353 systemd-logind[1596]: Session 15 logged out. Waiting for processes to exit.
Feb 13 15:28:26.256256 systemd-logind[1596]: Removed session 15.
Feb 13 15:28:26.293546 systemd[1]: Started sshd@16-10.128.0.79:22-139.178.68.195:59948.service - OpenSSH per-connection server daemon (139.178.68.195:59948).
Feb 13 15:28:26.596823 sshd[4398]: Accepted publickey for core from 139.178.68.195 port 59948 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:26.598882 sshd-session[4398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:26.605798 systemd-logind[1596]: New session 16 of user core.
Feb 13 15:28:26.612652 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 15:28:26.960191 sshd[4401]: Connection closed by 139.178.68.195 port 59948
Feb 13 15:28:26.961244 sshd-session[4398]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:26.967488 systemd[1]: sshd@16-10.128.0.79:22-139.178.68.195:59948.service: Deactivated successfully.
Feb 13 15:28:26.973044 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 15:28:26.974181 systemd-logind[1596]: Session 16 logged out. Waiting for processes to exit.
Feb 13 15:28:26.975662 systemd-logind[1596]: Removed session 16.
Feb 13 15:28:27.009130 systemd[1]: Started sshd@17-10.128.0.79:22-139.178.68.195:52276.service - OpenSSH per-connection server daemon (139.178.68.195:52276).
Feb 13 15:28:27.315607 sshd[4410]: Accepted publickey for core from 139.178.68.195 port 52276 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:27.317810 sshd-session[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:27.324481 systemd-logind[1596]: New session 17 of user core.
Feb 13 15:28:27.333968 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 15:28:29.277064 sshd[4413]: Connection closed by 139.178.68.195 port 52276
Feb 13 15:28:29.278765 sshd-session[4410]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:29.284527 systemd[1]: sshd@17-10.128.0.79:22-139.178.68.195:52276.service: Deactivated successfully.
Feb 13 15:28:29.294368 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 15:28:29.295124 systemd-logind[1596]: Session 17 logged out. Waiting for processes to exit.
Feb 13 15:28:29.298641 systemd-logind[1596]: Removed session 17.
Feb 13 15:28:29.330652 systemd[1]: Started sshd@18-10.128.0.79:22-139.178.68.195:52284.service - OpenSSH per-connection server daemon (139.178.68.195:52284).
Feb 13 15:28:29.629195 sshd[4430]: Accepted publickey for core from 139.178.68.195 port 52284 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:29.631330 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:29.641884 systemd-logind[1596]: New session 18 of user core.
Feb 13 15:28:29.649682 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 15:28:30.077111 sshd[4433]: Connection closed by 139.178.68.195 port 52284
Feb 13 15:28:30.078426 sshd-session[4430]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:30.088120 systemd[1]: sshd@18-10.128.0.79:22-139.178.68.195:52284.service: Deactivated successfully.
Feb 13 15:28:30.094539 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 15:28:30.096093 systemd-logind[1596]: Session 18 logged out. Waiting for processes to exit.
Feb 13 15:28:30.098028 systemd-logind[1596]: Removed session 18.
Feb 13 15:28:30.128722 systemd[1]: Started sshd@19-10.128.0.79:22-139.178.68.195:52296.service - OpenSSH per-connection server daemon (139.178.68.195:52296).
Feb 13 15:28:30.436106 sshd[4442]: Accepted publickey for core from 139.178.68.195 port 52296 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:30.438862 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:30.445225 systemd-logind[1596]: New session 19 of user core.
Feb 13 15:28:30.453022 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 15:28:30.731247 sshd[4445]: Connection closed by 139.178.68.195 port 52296
Feb 13 15:28:30.730445 sshd-session[4442]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:30.735939 systemd[1]: sshd@19-10.128.0.79:22-139.178.68.195:52296.service: Deactivated successfully.
Feb 13 15:28:30.743558 systemd-logind[1596]: Session 19 logged out. Waiting for processes to exit.
Feb 13 15:28:30.744276 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 15:28:30.745893 systemd-logind[1596]: Removed session 19.
Feb 13 15:28:35.782026 systemd[1]: Started sshd@20-10.128.0.79:22-139.178.68.195:52298.service - OpenSSH per-connection server daemon (139.178.68.195:52298).
Feb 13 15:28:36.098437 sshd[4456]: Accepted publickey for core from 139.178.68.195 port 52298 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:36.100697 sshd-session[4456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:36.107692 systemd-logind[1596]: New session 20 of user core.
Feb 13 15:28:36.115787 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 15:28:36.409093 sshd[4462]: Connection closed by 139.178.68.195 port 52298
Feb 13 15:28:36.410692 sshd-session[4456]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:36.418523 systemd[1]: sshd@20-10.128.0.79:22-139.178.68.195:52298.service: Deactivated successfully.
Feb 13 15:28:36.423631 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 15:28:36.424078 systemd-logind[1596]: Session 20 logged out. Waiting for processes to exit.
Feb 13 15:28:36.426556 systemd-logind[1596]: Removed session 20.
Feb 13 15:28:41.460602 systemd[1]: Started sshd@21-10.128.0.79:22-139.178.68.195:51572.service - OpenSSH per-connection server daemon (139.178.68.195:51572).
Feb 13 15:28:41.759697 sshd[4476]: Accepted publickey for core from 139.178.68.195 port 51572 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:41.761895 sshd-session[4476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:41.768528 systemd-logind[1596]: New session 21 of user core.
Feb 13 15:28:41.773575 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 15:28:42.064215 sshd[4479]: Connection closed by 139.178.68.195 port 51572
Feb 13 15:28:42.065438 sshd-session[4476]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:42.073079 systemd[1]: sshd@21-10.128.0.79:22-139.178.68.195:51572.service: Deactivated successfully.
Feb 13 15:28:42.078755 systemd-logind[1596]: Session 21 logged out. Waiting for processes to exit.
Feb 13 15:28:42.080076 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 15:28:42.082269 systemd-logind[1596]: Removed session 21.
Feb 13 15:28:47.117135 systemd[1]: Started sshd@22-10.128.0.79:22-139.178.68.195:60060.service - OpenSSH per-connection server daemon (139.178.68.195:60060).
Feb 13 15:28:47.427591 sshd[4490]: Accepted publickey for core from 139.178.68.195 port 60060 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:47.429609 sshd-session[4490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:47.437730 systemd-logind[1596]: New session 22 of user core.
Feb 13 15:28:47.444829 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 15:28:47.735639 sshd[4493]: Connection closed by 139.178.68.195 port 60060
Feb 13 15:28:47.737007 sshd-session[4490]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:47.742446 systemd[1]: sshd@22-10.128.0.79:22-139.178.68.195:60060.service: Deactivated successfully.
Feb 13 15:28:47.751982 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 15:28:47.753633 systemd-logind[1596]: Session 22 logged out. Waiting for processes to exit.
Feb 13 15:28:47.755566 systemd-logind[1596]: Removed session 22.
Feb 13 15:28:47.784606 systemd[1]: Started sshd@23-10.128.0.79:22-139.178.68.195:60064.service - OpenSSH per-connection server daemon (139.178.68.195:60064).
Feb 13 15:28:48.092685 sshd[4503]: Accepted publickey for core from 139.178.68.195 port 60064 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:48.095034 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:48.102426 systemd-logind[1596]: New session 23 of user core.
Feb 13 15:28:48.111503 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 15:28:50.210824 containerd[1622]: time="2025-02-13T15:28:50.210566425Z" level=info msg="StopContainer for \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\" with timeout 30 (s)"
Feb 13 15:28:50.212061 containerd[1622]: time="2025-02-13T15:28:50.211974139Z" level=info msg="Stop container \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\" with signal terminated"
Feb 13 15:28:50.278187 containerd[1622]: time="2025-02-13T15:28:50.276665832Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:28:50.293255 containerd[1622]: time="2025-02-13T15:28:50.292133906Z" level=info msg="StopContainer for \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\" with timeout 2 (s)"
Feb 13 15:28:50.293255 containerd[1622]: time="2025-02-13T15:28:50.293237435Z" level=info msg="Stop container \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\" with signal terminated"
Feb 13 15:28:50.302250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586-rootfs.mount: Deactivated successfully.
Feb 13 15:28:50.308689 systemd-networkd[1216]: lxc_health: Link DOWN
Feb 13 15:28:50.308707 systemd-networkd[1216]: lxc_health: Lost carrier
Feb 13 15:28:50.331936 containerd[1622]: time="2025-02-13T15:28:50.331565303Z" level=info msg="shim disconnected" id=1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586 namespace=k8s.io
Feb 13 15:28:50.331936 containerd[1622]: time="2025-02-13T15:28:50.331664763Z" level=warning msg="cleaning up after shim disconnected" id=1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586 namespace=k8s.io
Feb 13 15:28:50.331936 containerd[1622]: time="2025-02-13T15:28:50.331680583Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:50.371398 containerd[1622]: time="2025-02-13T15:28:50.371206742Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:28:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:28:50.377267 containerd[1622]: time="2025-02-13T15:28:50.376849418Z" level=info msg="StopContainer for \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\" returns successfully"
Feb 13 15:28:50.378935 containerd[1622]: time="2025-02-13T15:28:50.378834245Z" level=info msg="StopPodSandbox for \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\""
Feb 13 15:28:50.379131 containerd[1622]: time="2025-02-13T15:28:50.378923157Z" level=info msg="Container to stop \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:28:50.389267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736-shm.mount: Deactivated successfully.
Feb 13 15:28:50.401783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90-rootfs.mount: Deactivated successfully.
Feb 13 15:28:50.405992 containerd[1622]: time="2025-02-13T15:28:50.405473704Z" level=info msg="shim disconnected" id=c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90 namespace=k8s.io
Feb 13 15:28:50.405992 containerd[1622]: time="2025-02-13T15:28:50.405609181Z" level=warning msg="cleaning up after shim disconnected" id=c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90 namespace=k8s.io
Feb 13 15:28:50.405992 containerd[1622]: time="2025-02-13T15:28:50.405626271Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:50.444403 containerd[1622]: time="2025-02-13T15:28:50.444311000Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:28:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:28:50.457710 containerd[1622]: time="2025-02-13T15:28:50.455936388Z" level=info msg="StopContainer for \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\" returns successfully"
Feb 13 15:28:50.459719 containerd[1622]: time="2025-02-13T15:28:50.459628409Z" level=info msg="StopPodSandbox for \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\""
Feb 13 15:28:50.459719 containerd[1622]: time="2025-02-13T15:28:50.459697275Z" level=info msg="Container to stop \"6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:28:50.459975 containerd[1622]: time="2025-02-13T15:28:50.459737326Z" level=info msg="Container to stop \"bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:28:50.459975 containerd[1622]: time="2025-02-13T15:28:50.459753198Z" level=info msg="Container to stop \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:28:50.459975 containerd[1622]: time="2025-02-13T15:28:50.459767233Z" level=info msg="Container to stop \"8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:28:50.459975 containerd[1622]: time="2025-02-13T15:28:50.459782694Z" level=info msg="Container to stop \"f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:28:50.464883 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef-shm.mount: Deactivated successfully.
Feb 13 15:28:50.482418 containerd[1622]: time="2025-02-13T15:28:50.482018643Z" level=info msg="shim disconnected" id=a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736 namespace=k8s.io
Feb 13 15:28:50.482418 containerd[1622]: time="2025-02-13T15:28:50.482111166Z" level=warning msg="cleaning up after shim disconnected" id=a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736 namespace=k8s.io
Feb 13 15:28:50.482418 containerd[1622]: time="2025-02-13T15:28:50.482126933Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:50.539293 containerd[1622]: time="2025-02-13T15:28:50.539185554Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:28:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:28:50.542419 containerd[1622]: time="2025-02-13T15:28:50.542300701Z" level=info msg="TearDown network for sandbox \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\" successfully"
Feb 13 15:28:50.542419 containerd[1622]: time="2025-02-13T15:28:50.542355182Z" level=info msg="StopPodSandbox for \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\" returns successfully"
Feb 13 15:28:50.565947 containerd[1622]: time="2025-02-13T15:28:50.565733902Z" level=info msg="shim disconnected" id=94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef namespace=k8s.io
Feb 13 15:28:50.565947 containerd[1622]: time="2025-02-13T15:28:50.565821276Z" level=warning msg="cleaning up after shim disconnected" id=94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef namespace=k8s.io
Feb 13 15:28:50.565947 containerd[1622]: time="2025-02-13T15:28:50.565836926Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:50.594559 containerd[1622]: time="2025-02-13T15:28:50.594448812Z" level=info msg="TearDown network for sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" successfully"
Feb 13 15:28:50.594559 containerd[1622]: time="2025-02-13T15:28:50.594504467Z" level=info msg="StopPodSandbox for \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" returns successfully"
Feb 13 15:28:50.715399 kubelet[2887]: I0213 15:28:50.715237    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs96k\" (UniqueName: \"kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-kube-api-access-vs96k\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718114 kubelet[2887]: I0213 15:28:50.716998    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33e23bef-aac9-40f6-9eb2-43bef12a4a18-cilium-config-path\") pod \"33e23bef-aac9-40f6-9eb2-43bef12a4a18\" (UID: \"33e23bef-aac9-40f6-9eb2-43bef12a4a18\") "
Feb 13 15:28:50.718114 kubelet[2887]: I0213 15:28:50.717078    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-hubble-tls\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718114 kubelet[2887]: I0213 15:28:50.717119    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-etc-cni-netd\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718114 kubelet[2887]: I0213 15:28:50.717168    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-cgroup\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718114 kubelet[2887]: I0213 15:28:50.717201    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-bpf-maps\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718114 kubelet[2887]: I0213 15:28:50.717268    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cni-path\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718563 kubelet[2887]: I0213 15:28:50.717310    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-hostproc\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718563 kubelet[2887]: I0213 15:28:50.717354    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e66f400f-d3c2-4cb9-8d84-3b710327b783-clustermesh-secrets\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718563 kubelet[2887]: I0213 15:28:50.717392    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-config-path\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718563 kubelet[2887]: I0213 15:28:50.717429    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-run\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718563 kubelet[2887]: I0213 15:28:50.717465    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-lib-modules\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718563 kubelet[2887]: I0213 15:28:50.717475    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.718879 kubelet[2887]: I0213 15:28:50.717503    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtwh6\" (UniqueName: \"kubernetes.io/projected/33e23bef-aac9-40f6-9eb2-43bef12a4a18-kube-api-access-qtwh6\") pod \"33e23bef-aac9-40f6-9eb2-43bef12a4a18\" (UID: \"33e23bef-aac9-40f6-9eb2-43bef12a4a18\") "
Feb 13 15:28:50.718879 kubelet[2887]: I0213 15:28:50.717542    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-xtables-lock\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718879 kubelet[2887]: I0213 15:28:50.717572    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-host-proc-sys-kernel\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718879 kubelet[2887]: I0213 15:28:50.717604    2887 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-host-proc-sys-net\") pod \"e66f400f-d3c2-4cb9-8d84-3b710327b783\" (UID: \"e66f400f-d3c2-4cb9-8d84-3b710327b783\") "
Feb 13 15:28:50.718879 kubelet[2887]: I0213 15:28:50.717675    2887 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-bpf-maps\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.718879 kubelet[2887]: I0213 15:28:50.717739    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.719221 kubelet[2887]: I0213 15:28:50.717777    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cni-path" (OuterVolumeSpecName: "cni-path") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.719221 kubelet[2887]: I0213 15:28:50.717807    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-hostproc" (OuterVolumeSpecName: "hostproc") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.721183 kubelet[2887]: I0213 15:28:50.720438    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.721183 kubelet[2887]: I0213 15:28:50.720508    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.722854 kubelet[2887]: I0213 15:28:50.721381    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.722985 kubelet[2887]: I0213 15:28:50.722903    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.722985 kubelet[2887]: I0213 15:28:50.722954    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.723550 kubelet[2887]: I0213 15:28:50.723506    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:28:50.729707 kubelet[2887]: I0213 15:28:50.729453    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-kube-api-access-vs96k" (OuterVolumeSpecName: "kube-api-access-vs96k") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "kube-api-access-vs96k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:28:50.730552 kubelet[2887]: I0213 15:28:50.730515    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e66f400f-d3c2-4cb9-8d84-3b710327b783-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:28:50.730852 kubelet[2887]: I0213 15:28:50.730824    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:28:50.731609 kubelet[2887]: I0213 15:28:50.731544    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33e23bef-aac9-40f6-9eb2-43bef12a4a18-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "33e23bef-aac9-40f6-9eb2-43bef12a4a18" (UID: "33e23bef-aac9-40f6-9eb2-43bef12a4a18"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:28:50.732933 kubelet[2887]: I0213 15:28:50.732884    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e66f400f-d3c2-4cb9-8d84-3b710327b783" (UID: "e66f400f-d3c2-4cb9-8d84-3b710327b783"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:28:50.734013 kubelet[2887]: I0213 15:28:50.733980    2887 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e23bef-aac9-40f6-9eb2-43bef12a4a18-kube-api-access-qtwh6" (OuterVolumeSpecName: "kube-api-access-qtwh6") pod "33e23bef-aac9-40f6-9eb2-43bef12a4a18" (UID: "33e23bef-aac9-40f6-9eb2-43bef12a4a18"). InnerVolumeSpecName "kube-api-access-qtwh6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:28:50.818211 kubelet[2887]: I0213 15:28:50.818114    2887 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-host-proc-sys-kernel\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818211 kubelet[2887]: I0213 15:28:50.818217    2887 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-lib-modules\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818539 kubelet[2887]: I0213 15:28:50.818248    2887 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qtwh6\" (UniqueName: \"kubernetes.io/projected/33e23bef-aac9-40f6-9eb2-43bef12a4a18-kube-api-access-qtwh6\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818539 kubelet[2887]: I0213 15:28:50.818273    2887 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-xtables-lock\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818539 kubelet[2887]: I0213 15:28:50.818293    2887 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-host-proc-sys-net\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818539 kubelet[2887]: I0213 15:28:50.818311    2887 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vs96k\" (UniqueName: \"kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-kube-api-access-vs96k\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818539 kubelet[2887]: I0213 15:28:50.818331    2887 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/33e23bef-aac9-40f6-9eb2-43bef12a4a18-cilium-config-path\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818539 kubelet[2887]: I0213 15:28:50.818349    2887 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-cgroup\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818539 kubelet[2887]: I0213 15:28:50.818367    2887 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e66f400f-d3c2-4cb9-8d84-3b710327b783-hubble-tls\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818764 kubelet[2887]: I0213 15:28:50.818385    2887 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-etc-cni-netd\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818764 kubelet[2887]: I0213 15:28:50.818404    2887 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-config-path\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818764 kubelet[2887]: I0213 15:28:50.818426    2887 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cni-path\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818764 kubelet[2887]: I0213 15:28:50.818445    2887 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-hostproc\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818764 kubelet[2887]: I0213 15:28:50.818464    2887 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e66f400f-d3c2-4cb9-8d84-3b710327b783-clustermesh-secrets\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:50.818764 kubelet[2887]: I0213 15:28:50.818488    2887 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e66f400f-d3c2-4cb9-8d84-3b710327b783-cilium-run\") on node \"ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 15:28:51.189391 kubelet[2887]: E0213 15:28:51.189344    2887 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:28:51.248369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef-rootfs.mount: Deactivated successfully.
Feb 13 15:28:51.248631 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736-rootfs.mount: Deactivated successfully.
Feb 13 15:28:51.248823 systemd[1]: var-lib-kubelet-pods-e66f400f\x2dd3c2\x2d4cb9\x2d8d84\x2d3b710327b783-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvs96k.mount: Deactivated successfully.
Feb 13 15:28:51.249012 systemd[1]: var-lib-kubelet-pods-33e23bef\x2daac9\x2d40f6\x2d9eb2\x2d43bef12a4a18-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqtwh6.mount: Deactivated successfully.
Feb 13 15:28:51.249246 systemd[1]: var-lib-kubelet-pods-e66f400f\x2dd3c2\x2d4cb9\x2d8d84\x2d3b710327b783-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:28:51.249442 systemd[1]: var-lib-kubelet-pods-e66f400f\x2dd3c2\x2d4cb9\x2d8d84\x2d3b710327b783-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:28:51.536916 kubelet[2887]: I0213 15:28:51.536092    2887 scope.go:117] "RemoveContainer" containerID="c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90"
Feb 13 15:28:51.544223 containerd[1622]: time="2025-02-13T15:28:51.543429576Z" level=info msg="RemoveContainer for \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\""
Feb 13 15:28:51.553116 containerd[1622]: time="2025-02-13T15:28:51.552022932Z" level=info msg="RemoveContainer for \"c03444ac2f6bc712d65ec07f1ae5381193af5763ed5dba906c1c494e1147fd90\" returns successfully"
Feb 13 15:28:51.553351 kubelet[2887]: I0213 15:28:51.552608    2887 scope.go:117] "RemoveContainer" containerID="bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db"
Feb 13 15:28:51.555975 containerd[1622]: time="2025-02-13T15:28:51.555414855Z" level=info msg="RemoveContainer for \"bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db\""
Feb 13 15:28:51.561657 containerd[1622]: time="2025-02-13T15:28:51.561589522Z" level=info msg="RemoveContainer for \"bd613e2e89e80f3a29089ce50f843e75954871758e47afabd0245aa19bafd7db\" returns successfully"
Feb 13 15:28:51.562267 kubelet[2887]: I0213 15:28:51.562015    2887 scope.go:117] "RemoveContainer" containerID="6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2"
Feb 13 15:28:51.564032 containerd[1622]: time="2025-02-13T15:28:51.563924729Z" level=info msg="RemoveContainer for \"6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2\""
Feb 13 15:28:51.569126 containerd[1622]: time="2025-02-13T15:28:51.569061335Z" level=info msg="RemoveContainer for \"6554a4e787f2adfc970d44b92839447dcbe1952b9ff37db8f1ef62a6209264b2\" returns successfully"
Feb 13 15:28:51.569845 kubelet[2887]: I0213 15:28:51.569646    2887 scope.go:117] "RemoveContainer" containerID="8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d"
Feb 13 15:28:51.571622 containerd[1622]: time="2025-02-13T15:28:51.571582442Z" level=info msg="RemoveContainer for \"8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d\""
Feb 13 15:28:51.578285 containerd[1622]: time="2025-02-13T15:28:51.578222802Z" level=info msg="RemoveContainer for \"8db6b22f1dc88244b38c9e46841187d657cb5067ae20d740699e3b1a0c00f70d\" returns successfully"
Feb 13 15:28:51.579188 kubelet[2887]: I0213 15:28:51.578706    2887 scope.go:117] "RemoveContainer" containerID="f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23"
Feb 13 15:28:51.582865 containerd[1622]: time="2025-02-13T15:28:51.582053757Z" level=info msg="RemoveContainer for \"f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23\""
Feb 13 15:28:51.587812 containerd[1622]: time="2025-02-13T15:28:51.587587101Z" level=info msg="RemoveContainer for \"f10ffaad03b228605987870b96c85ade216054ba50fed5c9a6001e00a451cc23\" returns successfully"
Feb 13 15:28:51.588114 kubelet[2887]: I0213 15:28:51.588046    2887 scope.go:117] "RemoveContainer" containerID="1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586"
Feb 13 15:28:51.590552 containerd[1622]: time="2025-02-13T15:28:51.590504900Z" level=info msg="RemoveContainer for \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\""
Feb 13 15:28:51.598756 containerd[1622]: time="2025-02-13T15:28:51.598687625Z" level=info msg="RemoveContainer for \"1e1858739fda42d770644197be27cb11801f4f297ae7d7b66fa8d699d460f586\" returns successfully"
Feb 13 15:28:51.951185 kubelet[2887]: I0213 15:28:51.951087    2887 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="33e23bef-aac9-40f6-9eb2-43bef12a4a18" path="/var/lib/kubelet/pods/33e23bef-aac9-40f6-9eb2-43bef12a4a18/volumes"
Feb 13 15:28:51.951962 kubelet[2887]: I0213 15:28:51.951931    2887 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e66f400f-d3c2-4cb9-8d84-3b710327b783" path="/var/lib/kubelet/pods/e66f400f-d3c2-4cb9-8d84-3b710327b783/volumes"
Feb 13 15:28:52.181866 sshd[4506]: Connection closed by 139.178.68.195 port 60064
Feb 13 15:28:52.183042 sshd-session[4503]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:52.188639 systemd[1]: sshd@23-10.128.0.79:22-139.178.68.195:60064.service: Deactivated successfully.
Feb 13 15:28:52.197384 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 15:28:52.200194 systemd-logind[1596]: Session 23 logged out. Waiting for processes to exit.
Feb 13 15:28:52.202253 systemd-logind[1596]: Removed session 23.
Feb 13 15:28:52.232740 systemd[1]: Started sshd@24-10.128.0.79:22-139.178.68.195:60074.service - OpenSSH per-connection server daemon (139.178.68.195:60074).
Feb 13 15:28:52.547351 sshd[4674]: Accepted publickey for core from 139.178.68.195 port 60074 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:52.552219 sshd-session[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:52.559040 systemd-logind[1596]: New session 24 of user core.
Feb 13 15:28:52.566559 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 15:28:52.902432 ntpd[1567]: Deleting interface #10 lxc_health, fe80::3880:aeff:fe99:a5cc%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs
Feb 13 15:28:52.903107 ntpd[1567]: 13 Feb 15:28:52 ntpd[1567]: Deleting interface #10 lxc_health, fe80::3880:aeff:fe99:a5cc%8#123, interface stats: received=0, sent=0, dropped=0, active_time=78 secs
Feb 13 15:28:53.472740 kubelet[2887]: I0213 15:28:53.470133    2887 topology_manager.go:215] "Topology Admit Handler" podUID="0d052350-07bd-4556-819c-99fe28ac87eb" podNamespace="kube-system" podName="cilium-flpnl"
Feb 13 15:28:53.472740 kubelet[2887]: E0213 15:28:53.470261    2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e66f400f-d3c2-4cb9-8d84-3b710327b783" containerName="apply-sysctl-overwrites"
Feb 13 15:28:53.472740 kubelet[2887]: E0213 15:28:53.470281    2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e66f400f-d3c2-4cb9-8d84-3b710327b783" containerName="cilium-agent"
Feb 13 15:28:53.472740 kubelet[2887]: E0213 15:28:53.470294    2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e66f400f-d3c2-4cb9-8d84-3b710327b783" containerName="mount-cgroup"
Feb 13 15:28:53.472740 kubelet[2887]: E0213 15:28:53.470310    2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="33e23bef-aac9-40f6-9eb2-43bef12a4a18" containerName="cilium-operator"
Feb 13 15:28:53.472740 kubelet[2887]: E0213 15:28:53.470322    2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e66f400f-d3c2-4cb9-8d84-3b710327b783" containerName="mount-bpf-fs"
Feb 13 15:28:53.472740 kubelet[2887]: E0213 15:28:53.470337    2887 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e66f400f-d3c2-4cb9-8d84-3b710327b783" containerName="clean-cilium-state"
Feb 13 15:28:53.472740 kubelet[2887]: I0213 15:28:53.470380    2887 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e23bef-aac9-40f6-9eb2-43bef12a4a18" containerName="cilium-operator"
Feb 13 15:28:53.472740 kubelet[2887]: I0213 15:28:53.470393    2887 memory_manager.go:354] "RemoveStaleState removing state" podUID="e66f400f-d3c2-4cb9-8d84-3b710327b783" containerName="cilium-agent"
Feb 13 15:28:53.479182 sshd[4677]: Connection closed by 139.178.68.195 port 60074
Feb 13 15:28:53.481557 sshd-session[4674]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:53.503969 systemd[1]: sshd@24-10.128.0.79:22-139.178.68.195:60074.service: Deactivated successfully.
Feb 13 15:28:53.505571 systemd-logind[1596]: Session 24 logged out. Waiting for processes to exit.
Feb 13 15:28:53.523655 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 15:28:53.538835 systemd-logind[1596]: Removed session 24.
Feb 13 15:28:53.548538 systemd[1]: Started sshd@25-10.128.0.79:22-139.178.68.195:60086.service - OpenSSH per-connection server daemon (139.178.68.195:60086).
Feb 13 15:28:53.635874 kubelet[2887]: I0213 15:28:53.635783    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0d052350-07bd-4556-819c-99fe28ac87eb-clustermesh-secrets\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636135 kubelet[2887]: I0213 15:28:53.635896    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0d052350-07bd-4556-819c-99fe28ac87eb-cilium-config-path\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636135 kubelet[2887]: I0213 15:28:53.635935    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74vmv\" (UniqueName: \"kubernetes.io/projected/0d052350-07bd-4556-819c-99fe28ac87eb-kube-api-access-74vmv\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636135 kubelet[2887]: I0213 15:28:53.635970    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-etc-cni-netd\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636135 kubelet[2887]: I0213 15:28:53.636003    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-cilium-run\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636135 kubelet[2887]: I0213 15:28:53.636033    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-lib-modules\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636135 kubelet[2887]: I0213 15:28:53.636081    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-hostproc\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636600 kubelet[2887]: I0213 15:28:53.636114    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-host-proc-sys-net\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636600 kubelet[2887]: I0213 15:28:53.636188    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-xtables-lock\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636600 kubelet[2887]: I0213 15:28:53.636231    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-bpf-maps\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636600 kubelet[2887]: I0213 15:28:53.636269    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-host-proc-sys-kernel\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636600 kubelet[2887]: I0213 15:28:53.636309    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0d052350-07bd-4556-819c-99fe28ac87eb-hubble-tls\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636600 kubelet[2887]: I0213 15:28:53.636352    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0d052350-07bd-4556-819c-99fe28ac87eb-cilium-ipsec-secrets\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636976 kubelet[2887]: I0213 15:28:53.636394    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-cilium-cgroup\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.636976 kubelet[2887]: I0213 15:28:53.636434    2887 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0d052350-07bd-4556-819c-99fe28ac87eb-cni-path\") pod \"cilium-flpnl\" (UID: \"0d052350-07bd-4556-819c-99fe28ac87eb\") " pod="kube-system/cilium-flpnl"
Feb 13 15:28:53.801399 containerd[1622]: time="2025-02-13T15:28:53.801234886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-flpnl,Uid:0d052350-07bd-4556-819c-99fe28ac87eb,Namespace:kube-system,Attempt:0,}"
Feb 13 15:28:53.851297 containerd[1622]: time="2025-02-13T15:28:53.851049992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:28:53.851297 containerd[1622]: time="2025-02-13T15:28:53.851125231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:28:53.851297 containerd[1622]: time="2025-02-13T15:28:53.851173470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:28:53.851661 containerd[1622]: time="2025-02-13T15:28:53.851331712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:28:53.910902 sshd[4687]: Accepted publickey for core from 139.178.68.195 port 60086 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:53.913365 sshd-session[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:53.926104 containerd[1622]: time="2025-02-13T15:28:53.925995757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-flpnl,Uid:0d052350-07bd-4556-819c-99fe28ac87eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\""
Feb 13 15:28:53.926536 systemd-logind[1596]: New session 25 of user core.
Feb 13 15:28:53.933627 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 15:28:53.941982 containerd[1622]: time="2025-02-13T15:28:53.941861325Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:28:53.966570 containerd[1622]: time="2025-02-13T15:28:53.966493257Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e049576ef7273110c95033e290b1a02f292c0a517fb2ad81432bfe180a2ee256\""
Feb 13 15:28:53.967621 containerd[1622]: time="2025-02-13T15:28:53.967572821Z" level=info msg="StartContainer for \"e049576ef7273110c95033e290b1a02f292c0a517fb2ad81432bfe180a2ee256\""
Feb 13 15:28:54.055566 containerd[1622]: time="2025-02-13T15:28:54.054900945Z" level=info msg="StartContainer for \"e049576ef7273110c95033e290b1a02f292c0a517fb2ad81432bfe180a2ee256\" returns successfully"
Feb 13 15:28:54.113649 containerd[1622]: time="2025-02-13T15:28:54.113524480Z" level=info msg="shim disconnected" id=e049576ef7273110c95033e290b1a02f292c0a517fb2ad81432bfe180a2ee256 namespace=k8s.io
Feb 13 15:28:54.113649 containerd[1622]: time="2025-02-13T15:28:54.113622474Z" level=warning msg="cleaning up after shim disconnected" id=e049576ef7273110c95033e290b1a02f292c0a517fb2ad81432bfe180a2ee256 namespace=k8s.io
Feb 13 15:28:54.113649 containerd[1622]: time="2025-02-13T15:28:54.113638861Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:54.132961 sshd[4739]: Connection closed by 139.178.68.195 port 60086
Feb 13 15:28:54.134775 sshd-session[4687]: pam_unix(sshd:session): session closed for user core
Feb 13 15:28:54.144815 systemd[1]: sshd@25-10.128.0.79:22-139.178.68.195:60086.service: Deactivated successfully.
Feb 13 15:28:54.146190 systemd-logind[1596]: Session 25 logged out. Waiting for processes to exit.
Feb 13 15:28:54.155680 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 15:28:54.159073 systemd-logind[1596]: Removed session 25.
Feb 13 15:28:54.182132 systemd[1]: Started sshd@26-10.128.0.79:22-139.178.68.195:60094.service - OpenSSH per-connection server daemon (139.178.68.195:60094).
Feb 13 15:28:54.491200 sshd[4804]: Accepted publickey for core from 139.178.68.195 port 60094 ssh2: RSA SHA256:nliKGUuHmIEF0YlcCyeDlTLj9V4wT+5POUaa07fHb80
Feb 13 15:28:54.493618 sshd-session[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:28:54.500668 systemd-logind[1596]: New session 26 of user core.
Feb 13 15:28:54.508336 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 15:28:54.581171 containerd[1622]: time="2025-02-13T15:28:54.577779286Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:28:54.598348 containerd[1622]: time="2025-02-13T15:28:54.598052284Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a6f3e668afde59f35e9ed648752684a65ac74f5a5c1d741621aff24ee78d1b74\""
Feb 13 15:28:54.599402 containerd[1622]: time="2025-02-13T15:28:54.599351167Z" level=info msg="StartContainer for \"a6f3e668afde59f35e9ed648752684a65ac74f5a5c1d741621aff24ee78d1b74\""
Feb 13 15:28:54.716046 containerd[1622]: time="2025-02-13T15:28:54.715075803Z" level=info msg="StartContainer for \"a6f3e668afde59f35e9ed648752684a65ac74f5a5c1d741621aff24ee78d1b74\" returns successfully"
Feb 13 15:28:54.790167 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6f3e668afde59f35e9ed648752684a65ac74f5a5c1d741621aff24ee78d1b74-rootfs.mount: Deactivated successfully.
Feb 13 15:28:54.793755 containerd[1622]: time="2025-02-13T15:28:54.792935702Z" level=info msg="shim disconnected" id=a6f3e668afde59f35e9ed648752684a65ac74f5a5c1d741621aff24ee78d1b74 namespace=k8s.io
Feb 13 15:28:54.793755 containerd[1622]: time="2025-02-13T15:28:54.793068285Z" level=warning msg="cleaning up after shim disconnected" id=a6f3e668afde59f35e9ed648752684a65ac74f5a5c1d741621aff24ee78d1b74 namespace=k8s.io
Feb 13 15:28:54.793755 containerd[1622]: time="2025-02-13T15:28:54.793087590Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:55.585262 containerd[1622]: time="2025-02-13T15:28:55.584920193Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:28:55.629967 containerd[1622]: time="2025-02-13T15:28:55.629888430Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9319edeab0a6c6009198826be8f9593b9fa03587ecbdbe544bd6d65da73085ed\""
Feb 13 15:28:55.632226 containerd[1622]: time="2025-02-13T15:28:55.630986760Z" level=info msg="StartContainer for \"9319edeab0a6c6009198826be8f9593b9fa03587ecbdbe544bd6d65da73085ed\""
Feb 13 15:28:55.743559 containerd[1622]: time="2025-02-13T15:28:55.742494185Z" level=info msg="StartContainer for \"9319edeab0a6c6009198826be8f9593b9fa03587ecbdbe544bd6d65da73085ed\" returns successfully"
Feb 13 15:28:55.809084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9319edeab0a6c6009198826be8f9593b9fa03587ecbdbe544bd6d65da73085ed-rootfs.mount: Deactivated successfully.
Feb 13 15:28:55.810723 containerd[1622]: time="2025-02-13T15:28:55.810363539Z" level=info msg="shim disconnected" id=9319edeab0a6c6009198826be8f9593b9fa03587ecbdbe544bd6d65da73085ed namespace=k8s.io
Feb 13 15:28:55.810723 containerd[1622]: time="2025-02-13T15:28:55.810455323Z" level=warning msg="cleaning up after shim disconnected" id=9319edeab0a6c6009198826be8f9593b9fa03587ecbdbe544bd6d65da73085ed namespace=k8s.io
Feb 13 15:28:55.810723 containerd[1622]: time="2025-02-13T15:28:55.810470968Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:55.873999 containerd[1622]: time="2025-02-13T15:28:55.873816649Z" level=info msg="StopPodSandbox for \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\""
Feb 13 15:28:55.873999 containerd[1622]: time="2025-02-13T15:28:55.873973050Z" level=info msg="TearDown network for sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" successfully"
Feb 13 15:28:55.873999 containerd[1622]: time="2025-02-13T15:28:55.873994559Z" level=info msg="StopPodSandbox for \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" returns successfully"
Feb 13 15:28:55.877898 containerd[1622]: time="2025-02-13T15:28:55.876019987Z" level=info msg="RemovePodSandbox for \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\""
Feb 13 15:28:55.877898 containerd[1622]: time="2025-02-13T15:28:55.876105812Z" level=info msg="Forcibly stopping sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\""
Feb 13 15:28:55.877898 containerd[1622]: time="2025-02-13T15:28:55.876261202Z" level=info msg="TearDown network for sandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" successfully"
Feb 13 15:28:55.882487 containerd[1622]: time="2025-02-13T15:28:55.882093608Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:55.882687 containerd[1622]: time="2025-02-13T15:28:55.882540198Z" level=info msg="RemovePodSandbox \"94502aef6bf1f17df14c5db1119eea058051b17f11095b37072befbc528153ef\" returns successfully"
Feb 13 15:28:55.886127 containerd[1622]: time="2025-02-13T15:28:55.884804070Z" level=info msg="StopPodSandbox for \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\""
Feb 13 15:28:55.886127 containerd[1622]: time="2025-02-13T15:28:55.885065812Z" level=info msg="TearDown network for sandbox \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\" successfully"
Feb 13 15:28:55.886127 containerd[1622]: time="2025-02-13T15:28:55.885112349Z" level=info msg="StopPodSandbox for \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\" returns successfully"
Feb 13 15:28:55.886432 containerd[1622]: time="2025-02-13T15:28:55.886304192Z" level=info msg="RemovePodSandbox for \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\""
Feb 13 15:28:55.886432 containerd[1622]: time="2025-02-13T15:28:55.886392151Z" level=info msg="Forcibly stopping sandbox \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\""
Feb 13 15:28:55.887426 containerd[1622]: time="2025-02-13T15:28:55.886552967Z" level=info msg="TearDown network for sandbox \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\" successfully"
Feb 13 15:28:55.892476 containerd[1622]: time="2025-02-13T15:28:55.892390409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:55.892707 containerd[1622]: time="2025-02-13T15:28:55.892488839Z" level=info msg="RemovePodSandbox \"a4bcf1688ebc9428f5134cad721cae5bb917a2f457cbd2fd4583c89329b92736\" returns successfully"
Feb 13 15:28:56.191250 kubelet[2887]: E0213 15:28:56.191067    2887 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:28:56.591769 containerd[1622]: time="2025-02-13T15:28:56.591250236Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:28:56.623718 containerd[1622]: time="2025-02-13T15:28:56.623018718Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6bd7513f5d1dece42cfdab2fb96e4b01889452d1a2b754c9ececb91878833c65\""
Feb 13 15:28:56.624704 containerd[1622]: time="2025-02-13T15:28:56.624569339Z" level=info msg="StartContainer for \"6bd7513f5d1dece42cfdab2fb96e4b01889452d1a2b754c9ececb91878833c65\""
Feb 13 15:28:56.719585 containerd[1622]: time="2025-02-13T15:28:56.719394052Z" level=info msg="StartContainer for \"6bd7513f5d1dece42cfdab2fb96e4b01889452d1a2b754c9ececb91878833c65\" returns successfully"
Feb 13 15:28:56.756829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bd7513f5d1dece42cfdab2fb96e4b01889452d1a2b754c9ececb91878833c65-rootfs.mount: Deactivated successfully.
Feb 13 15:28:56.760050 containerd[1622]: time="2025-02-13T15:28:56.759933877Z" level=info msg="shim disconnected" id=6bd7513f5d1dece42cfdab2fb96e4b01889452d1a2b754c9ececb91878833c65 namespace=k8s.io
Feb 13 15:28:56.760050 containerd[1622]: time="2025-02-13T15:28:56.760029825Z" level=warning msg="cleaning up after shim disconnected" id=6bd7513f5d1dece42cfdab2fb96e4b01889452d1a2b754c9ececb91878833c65 namespace=k8s.io
Feb 13 15:28:56.760050 containerd[1622]: time="2025-02-13T15:28:56.760047155Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:56.796104 containerd[1622]: time="2025-02-13T15:28:56.796035085Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:28:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 15:28:57.598035 containerd[1622]: time="2025-02-13T15:28:57.597724117Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:28:57.629704 containerd[1622]: time="2025-02-13T15:28:57.628659152Z" level=info msg="CreateContainer within sandbox \"1f5ae544b1314e1853607bac4c9103a18cfadef1f8b279e65a77c87dede541ec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"941bb688760394bdf7ad9e90ba30d6ab0bea101191c694327719c5c93d90ed6c\""
Feb 13 15:28:57.630397 containerd[1622]: time="2025-02-13T15:28:57.630294615Z" level=info msg="StartContainer for \"941bb688760394bdf7ad9e90ba30d6ab0bea101191c694327719c5c93d90ed6c\""
Feb 13 15:28:57.728096 containerd[1622]: time="2025-02-13T15:28:57.728027188Z" level=info msg="StartContainer for \"941bb688760394bdf7ad9e90ba30d6ab0bea101191c694327719c5c93d90ed6c\" returns successfully"
Feb 13 15:28:58.233252 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 15:28:58.620509 kubelet[2887]: I0213 15:28:58.620437    2887 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-flpnl" podStartSLOduration=5.620352336 podStartE2EDuration="5.620352336s" podCreationTimestamp="2025-02-13 15:28:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:28:58.618832292 +0000 UTC m=+122.950092729" watchObservedRunningTime="2025-02-13 15:28:58.620352336 +0000 UTC m=+122.951612774"
Feb 13 15:28:58.965458 kubelet[2887]: I0213 15:28:58.965307    2887 setters.go:568] "Node became not ready" node="ci-4152-2-1-3109ca0bb39f90f2236e.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:28:58Z","lastTransitionTime":"2025-02-13T15:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:28:59.949521 kubelet[2887]: E0213 15:28:59.948384    2887 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-9bk8q" podUID="920ea170-0b70-47ae-b4b5-9b57d9f2c0ba"
Feb 13 15:29:01.298736 kubelet[2887]: E0213 15:29:01.298467    2887 upgradeaware.go:439] Error proxying data from backend to client: read tcp 127.0.0.1:52554->127.0.0.1:38907: read: connection reset by peer
Feb 13 15:29:01.931881 systemd-networkd[1216]: lxc_health: Link UP
Feb 13 15:29:01.934938 systemd-networkd[1216]: lxc_health: Gained carrier
Feb 13 15:29:03.500836 systemd[1]: run-containerd-runc-k8s.io-941bb688760394bdf7ad9e90ba30d6ab0bea101191c694327719c5c93d90ed6c-runc.9OrtcZ.mount: Deactivated successfully.
Feb 13 15:29:03.966482 systemd-networkd[1216]: lxc_health: Gained IPv6LL
Feb 13 15:29:05.844204 systemd[1]: run-containerd-runc-k8s.io-941bb688760394bdf7ad9e90ba30d6ab0bea101191c694327719c5c93d90ed6c-runc.U6rrXB.mount: Deactivated successfully.
Feb 13 15:29:06.902579 ntpd[1567]: Listen normally on 13 lxc_health [fe80::c09d:21ff:fea0:2065%14]:123
Feb 13 15:29:06.903587 ntpd[1567]: 13 Feb 15:29:06 ntpd[1567]: Listen normally on 13 lxc_health [fe80::c09d:21ff:fea0:2065%14]:123
Feb 13 15:29:08.298575 sshd[4807]: Connection closed by 139.178.68.195 port 60094
Feb 13 15:29:08.301536 sshd-session[4804]: pam_unix(sshd:session): session closed for user core
Feb 13 15:29:08.310423 systemd-logind[1596]: Session 26 logged out. Waiting for processes to exit.
Feb 13 15:29:08.313007 systemd[1]: sshd@26-10.128.0.79:22-139.178.68.195:60094.service: Deactivated successfully.
Feb 13 15:29:08.330887 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 15:29:08.334112 systemd-logind[1596]: Removed session 26.