Feb 13 19:42:37.152536 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:42:37.152593 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:42:37.152613 kernel: BIOS-provided physical RAM map:
Feb 13 19:42:37.152631 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 19:42:37.152650 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 19:42:37.152833 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 19:42:37.152860 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 19:42:37.152884 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 19:42:37.153064 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd328fff] usable
Feb 13 19:42:37.153089 kernel: BIOS-e820: [mem 0x00000000bd329000-0x00000000bd330fff] ACPI data
Feb 13 19:42:37.153111 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable
Feb 13 19:42:37.153134 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 13 19:42:37.153157 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 19:42:37.153181 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 19:42:37.153215 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 19:42:37.153420 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 19:42:37.153446 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 19:42:37.153471 kernel: NX (Execute Disable) protection: active
Feb 13 19:42:37.153496 kernel: APIC: Static calls initialized
Feb 13 19:42:37.153520 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:42:37.153546 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd329018
Feb 13 19:42:37.153571 kernel: random: crng init done
Feb 13 19:42:37.153596 kernel: secureboot: Secure boot disabled
Feb 13 19:42:37.153621 kernel: SMBIOS 2.4 present.
Feb 13 19:42:37.153650 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 19:42:37.153675 kernel: Hypervisor detected: KVM
Feb 13 19:42:37.153699 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:42:37.153726 kernel: kvm-clock: using sched offset of 13364046224 cycles
Feb 13 19:42:37.153753 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:42:37.153780 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 19:42:37.153806 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:42:37.153832 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:42:37.153858 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 19:42:37.153883 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 19:42:37.153914 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:42:37.153940 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 19:42:37.153966 kernel: Using GB pages for direct mapping
Feb 13 19:42:37.153993 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:42:37.154018 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 19:42:37.154056 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 19:42:37.154094 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 19:42:37.154127 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 19:42:37.154155 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 19:42:37.154185 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 19:42:37.154215 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 19:42:37.154245 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 19:42:37.154274 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 19:42:37.154297 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 19:42:37.154359 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 19:42:37.154388 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 19:42:37.154418 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 19:42:37.154447 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 19:42:37.154477 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 19:42:37.154506 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 19:42:37.154534 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 19:42:37.154562 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 19:42:37.154595 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 19:42:37.154623 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 19:42:37.154657 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:42:37.154687 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:42:37.154716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 19:42:37.154746 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 19:42:37.154776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 19:42:37.154805 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 19:42:37.154835 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 19:42:37.154864 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Feb 13 19:42:37.154899 kernel: Zone ranges:
Feb 13 19:42:37.154928 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:42:37.154957 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 19:42:37.154986 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 19:42:37.155015 kernel: Movable zone start for each node
Feb 13 19:42:37.155050 kernel: Early memory node ranges
Feb 13 19:42:37.155079 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 19:42:37.155108 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 19:42:37.155137 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff]
Feb 13 19:42:37.155171 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff]
Feb 13 19:42:37.155200 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 19:42:37.155229 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 19:42:37.155258 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 19:42:37.155285 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:42:37.155329 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 19:42:37.155359 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 19:42:37.155388 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Feb 13 19:42:37.155417 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:42:37.155451 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 19:42:37.155481 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:42:37.155510 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:42:37.155539 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:42:37.155568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:42:37.155598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:42:37.155627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:42:37.155656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:42:37.155685 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:42:37.155718 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:42:37.155748 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 19:42:37.155777 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:42:37.155806 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:42:37.155835 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:42:37.155864 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:42:37.155893 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:42:37.155922 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:42:37.155950 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:42:37.155984 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:42:37.156016 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:42:37.156053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:42:37.156082 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 19:42:37.156111 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:42:37.156140 kernel: Fallback order for Node 0: 0
Feb 13 19:42:37.156169 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Feb 13 19:42:37.156198 kernel: Policy zone: Normal
Feb 13 19:42:37.156231 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:42:37.156260 kernel: software IO TLB: area num 2.
Feb 13 19:42:37.156287 kernel: Memory: 7511328K/7860552K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 348968K reserved, 0K cma-reserved)
Feb 13 19:42:37.156639 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:42:37.156671 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:42:37.156701 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:42:37.156730 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:42:37.156760 kernel: Dynamic Preempt: voluntary
Feb 13 19:42:37.156814 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:42:37.156847 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:42:37.156878 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:42:37.156909 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:42:37.156944 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:42:37.156975 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:42:37.157006 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
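As a cross-check on the BIOS-e820 map above: summing the `usable` ranges (endpoints are inclusive) reproduces the total RAM figure in the kernel's "Memory: 7511328K/7860552K available" line. A minimal sketch, with the ranges transcribed from this log:

```python
# Usable BIOS-e820 ranges from the log above: (start, end), end inclusive.
usable = [
    (0x0000000000001000, 0x0000000000054fff),
    (0x0000000000060000, 0x0000000000097fff),
    (0x0000000000100000, 0x00000000bd328fff),
    (0x00000000bd331000, 0x00000000bf8ecfff),
    (0x00000000bfbff000, 0x00000000bffdffff),
    (0x0000000100000000, 0x000000021fffffff),
]

total_bytes = sum(end - start + 1 for start, end in usable)
total_kib = total_bytes // 1024

# Matches the denominator of "Memory: 7511328K/7860552K available".
print(total_kib)  # → 7860552
```

The two small usable windows below 1 MiB (0x1000–0x54fff and 0x60000–0x97fff) contribute under 1 MiB combined; essentially all of the ~7.5 GiB lives in the two large ranges on either side of the 4 GiB PCI hole.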
Feb 13 19:42:37.157051 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:42:37.157083 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:42:37.157119 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:42:37.157151 kernel: Console: colour dummy device 80x25
Feb 13 19:42:37.157182 kernel: printk: console [ttyS0] enabled
Feb 13 19:42:37.157213 kernel: ACPI: Core revision 20230628
Feb 13 19:42:37.157244 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:42:37.157275 kernel: x2apic enabled
Feb 13 19:42:37.157300 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:42:37.157358 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 19:42:37.157390 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 19:42:37.157427 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 19:42:37.157458 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 19:42:37.157489 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 19:42:37.157520 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:42:37.157551 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 19:42:37.157582 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 19:42:37.157614 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 19:42:37.157645 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:42:37.157676 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:42:37.157712 kernel: RETBleed: Mitigation: IBRS
Feb 13 19:42:37.157743 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:42:37.157774 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 19:42:37.157805 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:42:37.157836 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 19:42:37.157868 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:42:37.157899 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:42:37.157930 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:42:37.157961 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:42:37.157996 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:42:37.158033 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 19:42:37.158065 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:42:37.158095 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:42:37.158127 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:42:37.158158 kernel: landlock: Up and running.
Feb 13 19:42:37.158188 kernel: SELinux: Initializing.
Feb 13 19:42:37.158219 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:42:37.158251 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:42:37.158283 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 19:42:37.158326 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:42:37.158357 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:42:37.158389 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:42:37.158420 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
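The "4599.99 BogoMIPS (lpj=2299998)" figure above is derived, not measured: calibration is skipped under kvm-clock and the value is computed from the preset loops-per-jiffy. A sketch of the integer arithmetic behind the printout, assuming CONFIG_HZ=1000 (the HZ value is an assumption; it is not printed in this log):

```python
HZ = 1000      # assumed CONFIG_HZ; not stated in the log
lpj = 2299998  # loops-per-jiffy, from "Calibrating delay loop (skipped) ... (lpj=2299998)"

# BogoMIPS is printed as lpj/(500000/HZ) "." (lpj/(5000/HZ)) % 100 using integer division,
# which is why the fraction is truncated rather than rounded.
per_cpu = f"{lpj // (500000 // HZ)}.{(lpj // (5000 // HZ)) % 100:02d}"

# The SMP total sums lpj across CPUs first, matching
# "Total of 2 processors activated (9199.99 BogoMIPS)".
total_lpj = 2 * lpj
total = f"{total_lpj // (500000 // HZ)}.{(total_lpj // (5000 // HZ)) % 100:02d}"

print(per_cpu, total)  # → 4599.99 9199.99
```

Note how summing before truncation gives 9199.99 rather than 2 × 4599.99 = 9199.98.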
Feb 13 19:42:37.158451 kernel: signal: max sigframe size: 1776
Feb 13 19:42:37.158480 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:42:37.158499 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:42:37.158530 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:42:37.158555 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:42:37.158580 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:42:37.158603 kernel: .... node #0, CPUs: #1
Feb 13 19:42:37.158631 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:42:37.158655 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:42:37.158678 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:42:37.158706 kernel: smpboot: Max logical packages: 1
Feb 13 19:42:37.158735 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 19:42:37.158770 kernel: devtmpfs: initialized
Feb 13 19:42:37.158799 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:42:37.158828 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 19:42:37.158858 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:42:37.158887 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:42:37.158916 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:42:37.158945 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:42:37.158975 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:42:37.159004 kernel: audit: type=2000 audit(1739475755.380:1): state=initialized audit_enabled=0 res=1
Feb 13 19:42:37.159048 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:42:37.160736 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:42:37.160771 kernel: cpuidle: using governor menu
Feb 13 19:42:37.160795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:42:37.160813 kernel: dca service started, version 1.12.1
Feb 13 19:42:37.160837 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:42:37.160861 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:42:37.160883 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:42:37.160905 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:42:37.160934 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:42:37.160953 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:42:37.160972 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:42:37.160991 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:42:37.161010 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:42:37.161037 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:42:37.161056 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:42:37.161075 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:42:37.161094 kernel: ACPI: Interpreter enabled
Feb 13 19:42:37.161117 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:42:37.161137 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:42:37.161156 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:42:37.161175 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 19:42:37.161194 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:42:37.161213 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:42:37.161553 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:42:37.161807 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:42:37.162090 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:42:37.162125 kernel: PCI host bridge to bus 0000:00
Feb 13 19:42:37.164539 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:42:37.164865 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:42:37.165069 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:42:37.165256 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 19:42:37.165487 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:42:37.165715 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:42:37.165933 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 19:42:37.166160 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 19:42:37.167721 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:42:37.167963 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 19:42:37.168186 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 19:42:37.169004 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 19:42:37.170503 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:42:37.170777 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 19:42:37.171048 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 19:42:37.171660 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:42:37.171896 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 19:42:37.172129 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 19:42:37.172154 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:42:37.172173 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:42:37.172193 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:42:37.172212 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:42:37.172241 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:42:37.172260 kernel: iommu: Default domain type: Translated
Feb 13 19:42:37.172279 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:42:37.172298 kernel: efivars: Registered efivars operations
Feb 13 19:42:37.173398 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:42:37.173429 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:42:37.173456 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 19:42:37.173479 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 19:42:37.173507 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff]
Feb 13 19:42:37.173532 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 19:42:37.173555 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 19:42:37.173582 kernel: vgaarb: loaded
Feb 13 19:42:37.173607 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:42:37.173644 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:42:37.173671 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:42:37.173697 kernel: pnp: PnP ACPI init
Feb 13 19:42:37.173725 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 19:42:37.173748 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:42:37.173777 kernel: NET: Registered PF_INET protocol family
Feb 13 19:42:37.173801 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:42:37.173826 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 19:42:37.173854 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:42:37.173888 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:42:37.173915 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 19:42:37.173940 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 19:42:37.173969 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 19:42:37.173996 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 19:42:37.174021 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:42:37.174061 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:42:37.174298 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:42:37.175624 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:42:37.175845 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:42:37.176042 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 19:42:37.176253 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:42:37.176277 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:42:37.176296 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 19:42:37.176361 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 19:42:37.176391 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:42:37.176417 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 19:42:37.176435 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:42:37.176455 kernel: Initialise system trusted keyrings
Feb 13 19:42:37.176473 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 19:42:37.176493 kernel: Key type asymmetric registered
Feb 13 19:42:37.176511 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:42:37.176528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:42:37.176547 kernel: io scheduler mq-deadline registered
Feb 13 19:42:37.176565 kernel: io scheduler kyber registered
Feb 13 19:42:37.176588 kernel: io scheduler bfq registered
Feb 13 19:42:37.176606 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:42:37.176625 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 19:42:37.176843 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 19:42:37.176868 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 19:42:37.177107 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 19:42:37.177135 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 19:42:37.180277 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 19:42:37.180347 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:42:37.180375 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:42:37.180395 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 19:42:37.180415 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 19:42:37.180434 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 19:42:37.180658 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 19:42:37.180685 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:42:37.180704 kernel: i8042: Warning: Keylock active
Feb 13 19:42:37.180723 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:42:37.180748 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:42:37.180950 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:42:37.181148 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:42:37.181448 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:42:36 UTC (1739475756)
Feb 13 19:42:37.181685 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:42:37.181715 kernel: intel_pstate: CPU model not supported
Feb 13 19:42:37.181741 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:42:37.181770 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:42:37.181793 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:42:37.181816 kernel: Segment Routing with IPv6
Feb 13 19:42:37.181841 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:42:37.181864 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:42:37.181888 kernel: Key type dns_resolver registered
Feb 13 19:42:37.181911 kernel: IPI shorthand broadcast: enabled
Feb 13 19:42:37.181941 kernel: sched_clock: Marking stable (974005486, 180781549)->(1244213029, -89425994)
Feb 13 19:42:37.181964 kernel: registered taskstats version 1
Feb 13 19:42:37.181989 kernel: Loading compiled-in X.509 certificates
Feb 13 19:42:37.182022 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:42:37.182053 kernel: Key type .fscrypt registered
Feb 13 19:42:37.182073 kernel: Key type fscrypt-provisioning registered
Feb 13 19:42:37.182097 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:42:37.182119 kernel: ima: No architecture policies found
Feb 13 19:42:37.182145 kernel: clk: Disabling unused clocks
Feb 13 19:42:37.182166 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 19:42:37.182190 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:42:37.182220 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:42:37.182243 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 19:42:37.182263 kernel: Run /init as init process
Feb 13 19:42:37.182287 kernel: with arguments:
Feb 13 19:42:37.183353 kernel: /init
Feb 13 19:42:37.183378 kernel: with environment:
Feb 13 19:42:37.183399 kernel: HOME=/
Feb 13 19:42:37.183418 kernel: TERM=linux
Feb 13 19:42:37.183438 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:42:37.183469 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:42:37.183493 systemd[1]: Detected virtualization google.
Feb 13 19:42:37.183514 systemd[1]: Detected architecture x86-64.
Feb 13 19:42:37.183534 systemd[1]: Running in initrd.
Feb 13 19:42:37.183554 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:42:37.183574 systemd[1]: Hostname set to .
Feb 13 19:42:37.183596 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:42:37.183621 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:42:37.183641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:42:37.183661 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:42:37.183685 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:42:37.183705 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:42:37.183726 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:42:37.183747 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:42:37.183776 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:42:37.183816 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:42:37.183842 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
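Several allocation lines in this log report both a byte count and a page "order" (for example "Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)"). The order is simply log2 of the allocation size in 4 KiB pages, so the pairs can be cross-checked. A small sketch using values taken from the lines above:

```python
PAGE_SIZE = 4096  # assumed page size; standard on x86-64

def alloc_order(size_bytes: int) -> int:
    """Page order of an allocation: log2(size in 4 KiB pages)."""
    pages = size_bytes // PAGE_SIZE
    return pages.bit_length() - 1

# (bytes, order) pairs transcribed from the log lines above.
tables = {
    "Dentry cache":    (8388608, 11),
    "Inode-cache":     (4194304, 10),
    "TCP established": (524288, 7),
    "TCP bind":        (2097152, 9),
}
for name, (size, order) in tables.items():
    assert alloc_order(size) == order, name
print("all orders consistent")
```

Dividing bytes by entries also recovers the per-entry size: the established table uses 8 bytes per bucket (524288 / 65536), the bind table 32 (2097152 / 65536).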
Feb 13 19:42:37.183863 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:42:37.183884 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:42:37.183908 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:42:37.183934 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:42:37.183956 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:42:37.183976 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:42:37.183997 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:42:37.184018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:42:37.184048 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:42:37.184069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:42:37.184090 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:42:37.184116 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:42:37.184137 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:42:37.184158 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:42:37.184179 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:42:37.184201 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:42:37.184221 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:42:37.184242 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:42:37.184263 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:42:37.184285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:42:37.184343 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:42:37.184417 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 19:42:37.184480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:42:37.184506 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:42:37.184541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:42:37.184568 systemd-journald[184]: Journal started
Feb 13 19:42:37.184625 systemd-journald[184]: Runtime Journal (/run/log/journal/d16e16dbe64b4ba6952f021896bb8cb0) is 8.0M, max 148.6M, 140.6M free.
Feb 13 19:42:37.160722 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 19:42:37.188456 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:42:37.211366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:42:37.216111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:42:37.228331 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:42:37.229071 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:42:37.233540 kernel: Bridge firewalling registered
Feb 13 19:42:37.232747 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 19:42:37.242923 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:42:37.243783 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:42:37.262708 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:42:37.266680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:42:37.277591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:42:37.293716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:42:37.294571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:42:37.305718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:42:37.321375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:42:37.333682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:42:37.363722 dracut-cmdline[220]: dracut-dracut-053
Feb 13 19:42:37.366950 systemd-resolved[210]: Positive Trust Anchors:
Feb 13 19:42:37.366965 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:42:37.376455 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:42:37.367044 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:42:37.374973 systemd-resolved[210]: Defaulting to hostname 'linux'.
Feb 13 19:42:37.378997 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:42:37.390574 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:42:37.483363 kernel: SCSI subsystem initialized
Feb 13 19:42:37.495371 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:42:37.511350 kernel: iscsi: registered transport (tcp)
Feb 13 19:42:37.544345 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:42:37.544439 kernel: QLogic iSCSI HBA Driver
Feb 13 19:42:37.605281 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:42:37.612577 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:42:37.693703 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:42:37.693809 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:42:37.693846 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:42:37.753356 kernel: raid6: avx2x4 gen() 17578 MB/s
Feb 13 19:42:37.774357 kernel: raid6: avx2x2 gen() 17648 MB/s
Feb 13 19:42:37.800398 kernel: raid6: avx2x1 gen() 14056 MB/s
Feb 13 19:42:37.800472 kernel: raid6: using algorithm avx2x2 gen() 17648 MB/s
Feb 13 19:42:37.827467 kernel: raid6: .... xor() 18150 MB/s, rmw enabled
Feb 13 19:42:37.827557 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:42:37.858355 kernel: xor: automatically using best checksumming function   avx
Feb 13 19:42:38.043357 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:42:38.058721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:42:38.064703 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:42:38.113508 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Feb 13 19:42:38.121285 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:42:38.152620 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:42:38.197130 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation
Feb 13 19:42:38.239301 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:42:38.258595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:42:38.369167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:42:38.408682 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:42:38.463815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:42:38.484341 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:42:38.509275 kernel: scsi host0: Virtio SCSI HBA
Feb 13 19:42:38.526355 kernel: scsi 0:0:1:0: Direct-Access     Google   PersistentDisk   1    PQ: 0 ANSI: 6
Feb 13 19:42:38.531497 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:42:38.571482 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:42:38.547515 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:42:38.566341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:42:38.653936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:42:38.667090 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:42:38.667145 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:42:38.654265 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:42:38.705842 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 19:42:38.754717 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 19:42:38.755012 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 19:42:38.755346 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 19:42:38.755643 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 19:42:38.755914 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:42:38.755954 kernel: GPT:17805311 != 25165823
Feb 13 19:42:38.755988 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:42:38.756024 kernel: GPT:17805311 != 25165823
Feb 13 19:42:38.756060 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:42:38.756097 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:42:38.756134 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 19:42:38.747377 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:42:38.763429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:42:38.763782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:42:38.775586 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:42:38.844712 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (464)
Feb 13 19:42:38.844770 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (459)
Feb 13 19:42:38.802746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:42:38.855280 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:42:38.886301 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 19:42:38.905104 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 19:42:38.924688 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 19:42:38.951515 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 19:42:38.962954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:42:38.993027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 19:42:39.001605 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:42:39.021730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:42:39.058001 disk-uuid[543]: Primary Header is updated.
Feb 13 19:42:39.058001 disk-uuid[543]: Secondary Entries is updated.
Feb 13 19:42:39.058001 disk-uuid[543]: Secondary Header is updated.
Feb 13 19:42:39.076361 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:42:39.104345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:42:39.117712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:42:40.119536 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:42:40.119632 disk-uuid[544]: The operation has completed successfully.
Feb 13 19:42:40.211066 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:42:40.211252 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:42:40.237563 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:42:40.271161 sh[566]: Success
Feb 13 19:42:40.298345 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:42:40.377187 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:42:40.404487 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:42:40.410589 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:42:40.460357 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9
Feb 13 19:42:40.460479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:42:40.477676 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:42:40.477779 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:42:40.490403 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:42:40.517426 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:42:40.525657 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:42:40.526803 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:42:40.532628 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:42:40.546537 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:42:40.617168 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:42:40.617271 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:42:40.617323 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:42:40.639018 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:42:40.639120 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:42:40.667274 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:42:40.666618 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:42:40.681848 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:42:40.701583 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:42:40.723584 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:42:40.754588 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:42:40.849693 systemd-networkd[749]: lo: Link UP
Feb 13 19:42:40.849710 systemd-networkd[749]: lo: Gained carrier
Feb 13 19:42:40.854128 systemd-networkd[749]: Enumeration completed
Feb 13 19:42:40.854933 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:42:40.854942 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:42:40.892327 ignition[728]: Ignition 2.20.0
Feb 13 19:42:40.856656 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:42:40.892342 ignition[728]: Stage: fetch-offline
Feb 13 19:42:40.860037 systemd-networkd[749]: eth0: Link UP
Feb 13 19:42:40.892404 ignition[728]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:40.860055 systemd-networkd[749]: eth0: Gained carrier
Feb 13 19:42:40.892417 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:40.860074 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:42:40.892553 ignition[728]: parsed url from cmdline: ""
Feb 13 19:42:40.876829 systemd[1]: Reached target network.target - Network.
Feb 13 19:42:40.892561 ignition[728]: no config URL provided
Feb 13 19:42:40.880469 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.110/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 19:42:40.892573 ignition[728]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:42:40.903018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:42:40.892588 ignition[728]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:42:40.937579 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:42:40.892600 ignition[728]: failed to fetch config: resource requires networking
Feb 13 19:42:40.994034 unknown[757]: fetched base config from "system"
Feb 13 19:42:40.892935 ignition[728]: Ignition finished successfully
Feb 13 19:42:40.994048 unknown[757]: fetched base config from "system"
Feb 13 19:42:40.981030 ignition[757]: Ignition 2.20.0
Feb 13 19:42:40.994061 unknown[757]: fetched user config from "gcp"
Feb 13 19:42:40.981041 ignition[757]: Stage: fetch
Feb 13 19:42:40.997828 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:42:40.981275 ignition[757]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:41.015619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:42:40.981288 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:41.065668 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:42:40.981461 ignition[757]: parsed url from cmdline: ""
Feb 13 19:42:41.069779 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:42:40.981472 ignition[757]: no config URL provided
Feb 13 19:42:41.132160 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:42:40.981480 ignition[757]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:42:41.150118 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:42:40.981493 ignition[757]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:42:41.166548 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:42:40.981523 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 19:42:41.183595 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:42:40.986224 ignition[757]: GET result: OK
Feb 13 19:42:41.183762 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:42:40.986347 ignition[757]: parsing config with SHA512: ecf236381e18dd582eecf68dc7d3b9923c62314b6db6feb50b12b7d150253db4bda5cade698809e2b1d783afd5c832aeb9ee34f339ecfad7fa10309fef3d5662
Feb 13 19:42:41.207559 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:42:40.994949 ignition[757]: fetch: fetch complete
Feb 13 19:42:41.228606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:42:40.994962 ignition[757]: fetch: fetch passed
Feb 13 19:42:40.995040 ignition[757]: Ignition finished successfully
Feb 13 19:42:41.062823 ignition[763]: Ignition 2.20.0
Feb 13 19:42:41.062833 ignition[763]: Stage: kargs
Feb 13 19:42:41.063047 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:41.063059 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:41.064231 ignition[763]: kargs: kargs passed
Feb 13 19:42:41.064293 ignition[763]: Ignition finished successfully
Feb 13 19:42:41.110902 ignition[769]: Ignition 2.20.0
Feb 13 19:42:41.110912 ignition[769]: Stage: disks
Feb 13 19:42:41.111124 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:41.111137 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:41.112163 ignition[769]: disks: disks passed
Feb 13 19:42:41.112223 ignition[769]: Ignition finished successfully
Feb 13 19:42:41.282099 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 19:42:41.463582 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:42:41.495508 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:42:41.625455 kernel: EXT4-fs (sda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none.
Feb 13 19:42:41.626508 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:42:41.627514 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:42:41.653485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:42:41.686716 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (786)
Feb 13 19:42:41.710746 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:42:41.710856 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:42:41.710889 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:42:41.712606 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:42:41.737785 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:42:41.737848 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:42:41.713862 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:42:41.713971 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:42:41.714044 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:42:41.751065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:42:41.772861 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:42:41.804633 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:42:41.944340 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:42:41.954857 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:42:41.966473 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:42:41.977519 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:42:42.142831 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:42:42.148546 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:42:42.148874 systemd-networkd[749]: eth0: Gained IPv6LL
Feb 13 19:42:42.168758 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:42:42.204732 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:42:42.222485 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:42:42.250576 ignition[898]: INFO     : Ignition 2.20.0
Feb 13 19:42:42.251266 ignition[898]: INFO     : Stage: mount
Feb 13 19:42:42.250931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:42:42.251687 ignition[898]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:42.251687 ignition[898]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:42.252805 ignition[898]: INFO     : mount: mount passed
Feb 13 19:42:42.300516 ignition[898]: INFO     : Ignition finished successfully
Feb 13 19:42:42.294072 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:42:42.307648 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:42:42.355625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:42:42.403360 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (910)
Feb 13 19:42:42.421762 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:42:42.421883 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:42:42.421926 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:42:42.444291 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:42:42.444401 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:42:42.448492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:42:42.492423 ignition[927]: INFO     : Ignition 2.20.0
Feb 13 19:42:42.492423 ignition[927]: INFO     : Stage: files
Feb 13 19:42:42.507557 ignition[927]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:42.507557 ignition[927]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:42.507557 ignition[927]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 19:42:42.507557 ignition[927]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 19:42:42.507557 ignition[927]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:42:42.507557 ignition[927]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:42:42.507557 ignition[927]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 19:42:42.507557 ignition[927]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:42:42.505443 unknown[927]: wrote ssh authorized keys file for user: core
Feb 13 19:42:42.608502 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:42:42.608502 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:42:42.714980 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:42:43.034131 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:42:43.034131 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:42:43.067506 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 19:42:43.321967 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:42:43.489415 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:42:43.505534 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 19:42:43.743518 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:42:44.125125 ignition[927]: INFO     : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:42:44.125125 ignition[927]: INFO     : files: op(c): [started]  processing unit "prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO     : files: op(c): op(d): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO     : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO     : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO     : files: op(e): [started]  setting preset to enabled for "prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO     : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO     : files: createResultFile: createFiles: op(f): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:42:44.164546 ignition[927]: INFO     : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:42:44.164546 ignition[927]: INFO     : files: files passed
Feb 13 19:42:44.164546 ignition[927]: INFO     : Ignition finished successfully
Feb 13 19:42:44.130291 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:42:44.149655 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:42:44.169595 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:42:44.226134 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:42:44.374554 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:44.374554 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:44.226332 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:42:44.413585 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:44.249890 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:42:44.250829 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:42:44.279564 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:42:44.410492 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:42:44.410646 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:42:44.424867 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:42:44.448807 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:42:44.459999 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:42:44.467600 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:42:44.525866 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:42:44.547546 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:42:44.598583 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:42:44.618775 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:42:44.639897 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:42:44.658880 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:42:44.659118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:42:44.686872 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:42:44.709881 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:42:44.727781 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:42:44.745787 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:42:44.766796 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:42:44.787896 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:42:44.807798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:42:44.828825 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:42:44.849851 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:42:44.869868 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:42:44.888684 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:42:44.889005 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:42:44.914849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:42:44.934891 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:42:44.955644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:42:44.955931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:42:44.977763 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:42:44.978003 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:42:45.008849 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:42:45.009121 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:42:45.028820 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:42:45.029037 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:42:45.055677 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:42:45.063629 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:42:45.131544 ignition[979]: INFO : Ignition 2.20.0
Feb 13 19:42:45.131544 ignition[979]: INFO : Stage: umount
Feb 13 19:42:45.131544 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:45.131544 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:45.131544 ignition[979]: INFO : umount: umount passed
Feb 13 19:42:45.131544 ignition[979]: INFO : Ignition finished successfully
Feb 13 19:42:45.085775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:42:45.086036 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:42:45.122769 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:42:45.122991 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:42:45.156691 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:42:45.157860 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:42:45.157993 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:42:45.160447 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:42:45.160583 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:42:45.195354 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:42:45.195559 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:42:45.202831 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:42:45.202920 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:42:45.228759 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:42:45.228843 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:42:45.248745 systemd[1]: Stopped target network.target - Network.
Feb 13 19:42:45.258809 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:42:45.258898 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:42:45.273850 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:42:45.291832 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:42:45.293477 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:42:45.325681 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:42:45.343701 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:42:45.352853 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:42:45.352926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:42:45.368797 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:42:45.368868 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:42:45.385824 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:42:45.385909 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:42:45.419759 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:42:45.419848 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:42:45.428875 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:42:45.428958 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:42:45.463046 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:42:45.467439 systemd-networkd[749]: eth0: DHCPv6 lease lost
Feb 13 19:42:45.480785 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:42:45.502531 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:42:45.502695 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:42:45.523463 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:42:45.523811 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:42:45.541125 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:42:45.541263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:42:45.566670 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:42:45.566733 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:42:45.598488 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:42:45.609509 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:42:45.609659 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:42:45.620601 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:42:45.620720 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:42:45.630730 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:42:45.630818 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:42:45.638806 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:42:45.638892 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:42:46.111466 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:42:45.666868 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:42:45.687163 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:42:45.687382 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:42:45.714765 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:42:45.714917 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:42:45.734606 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:42:45.734695 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:42:45.754557 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:42:45.754688 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:42:45.784546 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:42:45.784696 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:42:45.814569 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:42:45.814721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:42:45.851828 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:42:45.890472 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:42:45.890619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:42:45.908740 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:42:45.908848 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:42:45.930651 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:42:45.930754 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:42:45.949597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:42:45.949740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:42:45.971177 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:42:45.971343 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:42:45.991029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:42:45.991170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:42:46.013019 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:42:46.041582 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:42:46.058816 systemd[1]: Switching root.
Feb 13 19:42:46.405543 systemd-journald[184]: Journal stopped
Feb 13 19:42:37.152536 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:42:37.152593 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:42:37.152613 kernel: BIOS-provided physical RAM map:
Feb 13 19:42:37.152631 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 19:42:37.152650 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 19:42:37.152833 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 19:42:37.152860 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 19:42:37.152884 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 19:42:37.153064 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd328fff] usable
Feb 13 19:42:37.153089 kernel: BIOS-e820: [mem 0x00000000bd329000-0x00000000bd330fff] ACPI data
Feb 13 19:42:37.153111 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable
Feb 13 19:42:37.153134 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 13 19:42:37.153157 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 19:42:37.153181 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 19:42:37.153215 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 19:42:37.153420 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 19:42:37.153446 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 19:42:37.153471 kernel: NX (Execute Disable) protection: active
Feb 13 19:42:37.153496 kernel: APIC: Static calls initialized
Feb 13 19:42:37.153520 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:42:37.153546 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd329018
Feb 13 19:42:37.153571 kernel: random: crng init done
Feb 13 19:42:37.153596 kernel: secureboot: Secure boot disabled
Feb 13 19:42:37.153621 kernel: SMBIOS 2.4 present.
Feb 13 19:42:37.153650 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 19:42:37.153675 kernel: Hypervisor detected: KVM
Feb 13 19:42:37.153699 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:42:37.153726 kernel: kvm-clock: using sched offset of 13364046224 cycles
Feb 13 19:42:37.153753 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:42:37.153780 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 19:42:37.153806 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:42:37.153832 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:42:37.153858 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 19:42:37.153883 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 19:42:37.153914 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:42:37.153940 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 19:42:37.153966 kernel: Using GB pages for direct mapping
Feb 13 19:42:37.153993 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:42:37.154018 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 19:42:37.154056 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 19:42:37.154094 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 19:42:37.154127 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 19:42:37.154155 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 19:42:37.154185 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 19:42:37.154215 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 19:42:37.154245 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 19:42:37.154274 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 19:42:37.154297 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 19:42:37.154359 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 19:42:37.154388 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 19:42:37.154418 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 19:42:37.154447 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 19:42:37.154477 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 19:42:37.154506 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 19:42:37.154534 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 19:42:37.154562 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 19:42:37.154595 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 19:42:37.154623 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 19:42:37.154657 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:42:37.154687 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:42:37.154716 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 19:42:37.154746 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 19:42:37.154776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 19:42:37.154805 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 19:42:37.154835 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 19:42:37.154864 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Feb 13 19:42:37.154899 kernel: Zone ranges:
Feb 13 19:42:37.154928 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:42:37.154957 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 19:42:37.154986 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 19:42:37.155015 kernel: Movable zone start for each node
Feb 13 19:42:37.155050 kernel: Early memory node ranges
Feb 13 19:42:37.155079 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 19:42:37.155108 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 19:42:37.155137 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff]
Feb 13 19:42:37.155171 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff]
Feb 13 19:42:37.155200 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 19:42:37.155229 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 19:42:37.155258 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 19:42:37.155285 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:42:37.155329 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 19:42:37.155359 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 19:42:37.155388 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Feb 13 19:42:37.155417 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:42:37.155451 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 19:42:37.155481 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:42:37.155510 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:42:37.155539 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:42:37.155568 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:42:37.155598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:42:37.155627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:42:37.155656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:42:37.155685 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:42:37.155718 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:42:37.155748 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 19:42:37.155777 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:42:37.155806 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:42:37.155835 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:42:37.155864 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:42:37.155893 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:42:37.155922 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:42:37.155950 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:42:37.155984 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:42:37.156016 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:42:37.156053 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:42:37.156082 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 19:42:37.156111 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:42:37.156140 kernel: Fallback order for Node 0: 0
Feb 13 19:42:37.156169 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Feb 13 19:42:37.156198 kernel: Policy zone: Normal
Feb 13 19:42:37.156231 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:42:37.156260 kernel: software IO TLB: area num 2.
Feb 13 19:42:37.156287 kernel: Memory: 7511328K/7860552K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 348968K reserved, 0K cma-reserved)
Feb 13 19:42:37.156639 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:42:37.156671 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:42:37.156701 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:42:37.156730 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:42:37.156760 kernel: Dynamic Preempt: voluntary
Feb 13 19:42:37.156814 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:42:37.156847 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:42:37.156878 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:42:37.156909 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:42:37.156944 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:42:37.156975 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:42:37.157006 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:42:37.157051 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:42:37.157083 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:42:37.157119 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:42:37.157151 kernel: Console: colour dummy device 80x25
Feb 13 19:42:37.157182 kernel: printk: console [ttyS0] enabled
Feb 13 19:42:37.157213 kernel: ACPI: Core revision 20230628
Feb 13 19:42:37.157244 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:42:37.157275 kernel: x2apic enabled
Feb 13 19:42:37.157300 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:42:37.157358 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 19:42:37.157390 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 19:42:37.157427 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 19:42:37.157458 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 19:42:37.157489 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 19:42:37.157520 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:42:37.157551 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 19:42:37.157582 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 19:42:37.157614 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 19:42:37.157645 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:42:37.157676 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:42:37.157712 kernel: RETBleed: Mitigation: IBRS
Feb 13 19:42:37.157743 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:42:37.157774 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 19:42:37.157805 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:42:37.157836 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 19:42:37.157868 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:42:37.157899 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:42:37.157930 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:42:37.157961 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:42:37.157996 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:42:37.158033 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 19:42:37.158065 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:42:37.158095 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:42:37.158127 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:42:37.158158 kernel: landlock: Up and running.
Feb 13 19:42:37.158188 kernel: SELinux: Initializing.
Feb 13 19:42:37.158219 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:42:37.158251 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:42:37.158283 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 19:42:37.158326 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:42:37.158357 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:42:37.158389 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:42:37.158420 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 19:42:37.158451 kernel: signal: max sigframe size: 1776
Feb 13 19:42:37.158480 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:42:37.158499 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:42:37.158530 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:42:37.158555 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:42:37.158580 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:42:37.158603 kernel: .... node #0, CPUs: #1
Feb 13 19:42:37.158631 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:42:37.158655 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:42:37.158678 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:42:37.158706 kernel: smpboot: Max logical packages: 1
Feb 13 19:42:37.158735 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 19:42:37.158770 kernel: devtmpfs: initialized
Feb 13 19:42:37.158799 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:42:37.158828 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 19:42:37.158858 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:42:37.158887 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:42:37.158916 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:42:37.158945 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:42:37.158975 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:42:37.159004 kernel: audit: type=2000 audit(1739475755.380:1): state=initialized audit_enabled=0 res=1
Feb 13 19:42:37.159048 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:42:37.160736 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:42:37.160771 kernel: cpuidle: using governor menu
Feb 13 19:42:37.160795 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:42:37.160813 kernel: dca service started, version 1.12.1
Feb 13 19:42:37.160837 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:42:37.160861 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:42:37.160883 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:42:37.160905 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:42:37.160934 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:42:37.160953 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:42:37.160972 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:42:37.160991 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:42:37.161010 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:42:37.161037 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:42:37.161056 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:42:37.161075 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:42:37.161094 kernel: ACPI: Interpreter enabled
Feb 13 19:42:37.161117 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:42:37.161137 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:42:37.161156 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:42:37.161175 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 19:42:37.161194 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:42:37.161213 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:42:37.161553 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:42:37.161807 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:42:37.162090 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:42:37.162125 kernel: PCI host bridge to bus 0000:00
Feb 13 19:42:37.164539 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:42:37.164865 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:42:37.165069 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:42:37.165256 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 19:42:37.165487 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:42:37.165715 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:42:37.165933 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 19:42:37.166160 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 19:42:37.167721 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:42:37.167963 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 19:42:37.168186 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 19:42:37.169004 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 19:42:37.170503 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:42:37.170777 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 19:42:37.171048 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 19:42:37.171660 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:42:37.171896 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 19:42:37.172129 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 19:42:37.172154 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:42:37.172173 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:42:37.172193 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:42:37.172212 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:42:37.172241 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:42:37.172260 kernel: iommu: Default domain type: Translated
Feb 13 19:42:37.172279 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:42:37.172298 kernel: efivars: Registered efivars operations
Feb 13 19:42:37.173398 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:42:37.173429 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:42:37.173456 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 19:42:37.173479 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 19:42:37.173507 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff]
Feb 13 19:42:37.173532 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 19:42:37.173555 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 19:42:37.173582 kernel: vgaarb: loaded
Feb 13 19:42:37.173607 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:42:37.173644 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:42:37.173671 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:42:37.173697 kernel: pnp: PnP ACPI init
Feb 13 19:42:37.173725 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 19:42:37.173748 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:42:37.173777 kernel: NET: Registered PF_INET protocol family
Feb 13 19:42:37.173801 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:42:37.173826 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 19:42:37.173854 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:42:37.173888 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:42:37.173915 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 19:42:37.173940 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 19:42:37.173969 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 19:42:37.173996 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 19:42:37.174021 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:42:37.174061 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:42:37.174298 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:42:37.175624 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:42:37.175845 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:42:37.176042 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 19:42:37.176253 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:42:37.176277 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:42:37.176296 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 19:42:37.176361 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 19:42:37.176391 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:42:37.176417 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 19:42:37.176435 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:42:37.176455 kernel: Initialise system trusted keyrings
Feb 13 19:42:37.176473 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 19:42:37.176493 kernel: Key type asymmetric registered
Feb 13 19:42:37.176511 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:42:37.176528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:42:37.176547 kernel: io scheduler mq-deadline registered
Feb 13 19:42:37.176565 kernel: io scheduler kyber registered
Feb 13 19:42:37.176588 kernel: io scheduler bfq registered
Feb 13 19:42:37.176606 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:42:37.176625 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 19:42:37.176843 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 19:42:37.176868 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 19:42:37.177107 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 19:42:37.177135 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 19:42:37.180277 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 19:42:37.180347 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:42:37.180375 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:42:37.180395 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 19:42:37.180415 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 19:42:37.180434 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 19:42:37.180658 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 19:42:37.180685 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:42:37.180704 kernel: i8042: Warning: Keylock active
Feb 13 19:42:37.180723 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:42:37.180748 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:42:37.180950 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:42:37.181148 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:42:37.181448 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:42:36 UTC (1739475756)
Feb 13 19:42:37.181685 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:42:37.181715 kernel: intel_pstate: CPU model not supported
Feb 13 19:42:37.181741 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:42:37.181770 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:42:37.181793 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:42:37.181816 kernel: Segment Routing with IPv6
Feb 13 19:42:37.181841 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:42:37.181864 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:42:37.181888
kernel: Key type dns_resolver registered Feb 13 19:42:37.181911 kernel: IPI shorthand broadcast: enabled Feb 13 19:42:37.181941 kernel: sched_clock: Marking stable (974005486, 180781549)->(1244213029, -89425994) Feb 13 19:42:37.181964 kernel: registered taskstats version 1 Feb 13 19:42:37.181989 kernel: Loading compiled-in X.509 certificates Feb 13 19:42:37.182022 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 19:42:37.182053 kernel: Key type .fscrypt registered Feb 13 19:42:37.182073 kernel: Key type fscrypt-provisioning registered Feb 13 19:42:37.182097 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:42:37.182119 kernel: ima: No architecture policies found Feb 13 19:42:37.182145 kernel: clk: Disabling unused clocks Feb 13 19:42:37.182166 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 19:42:37.182190 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:42:37.182220 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 19:42:37.182243 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 19:42:37.182263 kernel: Run /init as init process Feb 13 19:42:37.182287 kernel: with arguments: Feb 13 19:42:37.183353 kernel: /init Feb 13 19:42:37.183378 kernel: with environment: Feb 13 19:42:37.183399 kernel: HOME=/ Feb 13 19:42:37.183418 kernel: TERM=linux Feb 13 19:42:37.183438 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:42:37.183469 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:42:37.183493 systemd[1]: Detected virtualization google. 
Feb 13 19:42:37.183514 systemd[1]: Detected architecture x86-64. Feb 13 19:42:37.183534 systemd[1]: Running in initrd. Feb 13 19:42:37.183554 systemd[1]: No hostname configured, using default hostname. Feb 13 19:42:37.183574 systemd[1]: Hostname set to <localhost>. Feb 13 19:42:37.183596 systemd[1]: Initializing machine ID from random generator. Feb 13 19:42:37.183621 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:42:37.183641 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:42:37.183661 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:42:37.183685 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:42:37.183705 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:42:37.183726 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:42:37.183747 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:42:37.183776 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:42:37.183816 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:42:37.183842 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:42:37.183863 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:42:37.183884 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:42:37.183908 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:42:37.183934 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:42:37.183956 systemd[1]: Reached target timers.target - Timer Units. 
Feb 13 19:42:37.183976 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:42:37.183997 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:42:37.184018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:42:37.184048 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:42:37.184069 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:42:37.184090 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:42:37.184116 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:42:37.184137 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:42:37.184158 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:42:37.184179 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:42:37.184201 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:42:37.184221 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:42:37.184242 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:42:37.184263 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:42:37.184285 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:37.184343 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:42:37.184417 systemd-journald[184]: Collecting audit messages is disabled. Feb 13 19:42:37.184480 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:42:37.184506 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:42:37.184541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Feb 13 19:42:37.184568 systemd-journald[184]: Journal started Feb 13 19:42:37.184625 systemd-journald[184]: Runtime Journal (/run/log/journal/d16e16dbe64b4ba6952f021896bb8cb0) is 8.0M, max 148.6M, 140.6M free. Feb 13 19:42:37.160722 systemd-modules-load[185]: Inserted module 'overlay' Feb 13 19:42:37.188456 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:42:37.211366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:42:37.216111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:37.228331 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:42:37.229071 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:42:37.233540 kernel: Bridge firewalling registered Feb 13 19:42:37.232747 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 13 19:42:37.242923 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:42:37.243783 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:42:37.262708 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:42:37.266680 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:42:37.277591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:42:37.293716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:42:37.294571 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:42:37.305718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 19:42:37.321375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:42:37.333682 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:42:37.363722 dracut-cmdline[220]: dracut-dracut-053 Feb 13 19:42:37.366950 systemd-resolved[210]: Positive Trust Anchors: Feb 13 19:42:37.366965 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:42:37.376455 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:42:37.367044 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:42:37.374973 systemd-resolved[210]: Defaulting to hostname 'linux'. Feb 13 19:42:37.378997 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:42:37.390574 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:42:37.483363 kernel: SCSI subsystem initialized Feb 13 19:42:37.495371 kernel: Loading iSCSI transport class v2.0-870. 
Feb 13 19:42:37.511350 kernel: iscsi: registered transport (tcp) Feb 13 19:42:37.544345 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:42:37.544439 kernel: QLogic iSCSI HBA Driver Feb 13 19:42:37.605281 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:42:37.612577 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:42:37.693703 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:42:37.693809 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:42:37.693846 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:42:37.753356 kernel: raid6: avx2x4 gen() 17578 MB/s Feb 13 19:42:37.774357 kernel: raid6: avx2x2 gen() 17648 MB/s Feb 13 19:42:37.800398 kernel: raid6: avx2x1 gen() 14056 MB/s Feb 13 19:42:37.800472 kernel: raid6: using algorithm avx2x2 gen() 17648 MB/s Feb 13 19:42:37.827467 kernel: raid6: .... xor() 18150 MB/s, rmw enabled Feb 13 19:42:37.827557 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:42:37.858355 kernel: xor: automatically using best checksumming function avx Feb 13 19:42:38.043357 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:42:38.058721 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:42:38.064703 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:42:38.113508 systemd-udevd[402]: Using default interface naming scheme 'v255'. Feb 13 19:42:38.121285 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:42:38.152620 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:42:38.197130 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Feb 13 19:42:38.239301 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Feb 13 19:42:38.258595 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:42:38.369167 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:42:38.408682 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:42:38.463815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:42:38.484341 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:42:38.509275 kernel: scsi host0: Virtio SCSI HBA Feb 13 19:42:38.526355 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 13 19:42:38.531497 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:42:38.571482 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:42:38.547515 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:42:38.566341 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:42:38.653936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:42:38.667090 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:42:38.667145 kernel: AES CTR mode by8 optimization enabled Feb 13 19:42:38.654265 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:42:38.705842 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 13 19:42:38.754717 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 13 19:42:38.755012 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 13 19:42:38.755346 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 13 19:42:38.755643 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:42:38.755914 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Feb 13 19:42:38.755954 kernel: GPT:17805311 != 25165823 Feb 13 19:42:38.755988 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:42:38.756024 kernel: GPT:17805311 != 25165823 Feb 13 19:42:38.756060 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:42:38.756097 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:42:38.756134 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 13 19:42:38.747377 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:42:38.763429 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:42:38.763782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:38.775586 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:38.844712 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (464) Feb 13 19:42:38.844770 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (459) Feb 13 19:42:38.802746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:38.855280 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:42:38.886301 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Feb 13 19:42:38.905104 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Feb 13 19:42:38.924688 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Feb 13 19:42:38.951515 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Feb 13 19:42:38.962954 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:38.993027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Feb 13 19:42:39.001605 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:42:39.021730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:42:39.058001 disk-uuid[543]: Primary Header is updated. Feb 13 19:42:39.058001 disk-uuid[543]: Secondary Entries is updated. Feb 13 19:42:39.058001 disk-uuid[543]: Secondary Header is updated. Feb 13 19:42:39.076361 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:42:39.104345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:42:39.117712 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:42:40.119536 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:42:40.119632 disk-uuid[544]: The operation has completed successfully. Feb 13 19:42:40.211066 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:42:40.211252 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:42:40.237563 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:42:40.271161 sh[566]: Success Feb 13 19:42:40.298345 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 19:42:40.377187 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:42:40.404487 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:42:40.410589 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:42:40.460357 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:42:40.460479 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:42:40.477676 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:42:40.477779 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:42:40.490403 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:42:40.517426 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:42:40.525657 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:42:40.526803 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:42:40.532628 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:42:40.546537 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:42:40.617168 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:42:40.617271 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:42:40.617323 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:42:40.639018 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:42:40.639120 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:42:40.667274 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:42:40.666618 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:42:40.681848 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:42:40.701583 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:42:40.723584 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:42:40.754588 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:42:40.849693 systemd-networkd[749]: lo: Link UP Feb 13 19:42:40.849710 systemd-networkd[749]: lo: Gained carrier Feb 13 19:42:40.854128 systemd-networkd[749]: Enumeration completed Feb 13 19:42:40.854933 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:42:40.854942 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:42:40.892327 ignition[728]: Ignition 2.20.0 Feb 13 19:42:40.856656 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:42:40.892342 ignition[728]: Stage: fetch-offline Feb 13 19:42:40.860037 systemd-networkd[749]: eth0: Link UP Feb 13 19:42:40.892404 ignition[728]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:42:40.860055 systemd-networkd[749]: eth0: Gained carrier Feb 13 19:42:40.892417 ignition[728]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:42:40.860074 systemd-networkd[749]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:42:40.892553 ignition[728]: parsed url from cmdline: "" Feb 13 19:42:40.876829 systemd[1]: Reached target network.target - Network. Feb 13 19:42:40.892561 ignition[728]: no config URL provided Feb 13 19:42:40.880469 systemd-networkd[749]: eth0: DHCPv4 address 10.128.0.110/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 19:42:40.892573 ignition[728]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:42:40.903018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 19:42:40.892588 ignition[728]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:42:40.937579 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 19:42:40.892600 ignition[728]: failed to fetch config: resource requires networking Feb 13 19:42:40.994034 unknown[757]: fetched base config from "system" Feb 13 19:42:40.892935 ignition[728]: Ignition finished successfully Feb 13 19:42:40.994048 unknown[757]: fetched base config from "system" Feb 13 19:42:40.981030 ignition[757]: Ignition 2.20.0 Feb 13 19:42:40.994061 unknown[757]: fetched user config from "gcp" Feb 13 19:42:40.981041 ignition[757]: Stage: fetch Feb 13 19:42:40.997828 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:42:40.981275 ignition[757]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:42:41.015619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:42:40.981288 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:42:41.065668 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:42:40.981461 ignition[757]: parsed url from cmdline: "" Feb 13 19:42:41.069779 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:42:40.981472 ignition[757]: no config URL provided Feb 13 19:42:41.132160 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:42:40.981480 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:42:41.150118 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:42:40.981493 ignition[757]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:42:41.166548 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:42:40.981523 ignition[757]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 13 19:42:41.183595 systemd[1]: Reached target local-fs.target - Local File Systems. 
Feb 13 19:42:40.986224 ignition[757]: GET result: OK Feb 13 19:42:41.183762 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:42:40.986347 ignition[757]: parsing config with SHA512: ecf236381e18dd582eecf68dc7d3b9923c62314b6db6feb50b12b7d150253db4bda5cade698809e2b1d783afd5c832aeb9ee34f339ecfad7fa10309fef3d5662 Feb 13 19:42:41.207559 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:42:40.994949 ignition[757]: fetch: fetch complete Feb 13 19:42:41.228606 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:42:40.994962 ignition[757]: fetch: fetch passed Feb 13 19:42:40.995040 ignition[757]: Ignition finished successfully Feb 13 19:42:41.062823 ignition[763]: Ignition 2.20.0 Feb 13 19:42:41.062833 ignition[763]: Stage: kargs Feb 13 19:42:41.063047 ignition[763]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:42:41.063059 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:42:41.064231 ignition[763]: kargs: kargs passed Feb 13 19:42:41.064293 ignition[763]: Ignition finished successfully Feb 13 19:42:41.110902 ignition[769]: Ignition 2.20.0 Feb 13 19:42:41.110912 ignition[769]: Stage: disks Feb 13 19:42:41.111124 ignition[769]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:42:41.111137 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:42:41.112163 ignition[769]: disks: disks passed Feb 13 19:42:41.112223 ignition[769]: Ignition finished successfully Feb 13 19:42:41.282099 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 19:42:41.463582 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:42:41.495508 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:42:41.625455 kernel: EXT4-fs (sda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. 
Feb 13 19:42:41.626508 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:42:41.627514 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:42:41.653485 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:42:41.686716 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (786) Feb 13 19:42:41.710746 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:42:41.710856 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:42:41.710889 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:42:41.712606 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:42:41.737785 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:42:41.737848 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:42:41.713862 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:42:41.713971 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:42:41.714044 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:42:41.751065 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:42:41.772861 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:42:41.804633 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 19:42:41.944340 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:42:41.954857 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:42:41.966473 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:42:41.977519 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:42:42.142831 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:42:42.148546 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:42:42.148874 systemd-networkd[749]: eth0: Gained IPv6LL Feb 13 19:42:42.168758 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:42:42.204732 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:42:42.222485 kernel: BTRFS info (device sda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:42:42.250576 ignition[898]: INFO : Ignition 2.20.0 Feb 13 19:42:42.251266 ignition[898]: INFO : Stage: mount Feb 13 19:42:42.250931 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:42:42.251687 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:42:42.251687 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:42:42.252805 ignition[898]: INFO : mount: mount passed Feb 13 19:42:42.300516 ignition[898]: INFO : Ignition finished successfully Feb 13 19:42:42.294072 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:42:42.307648 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:42:42.355625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:42:42.403360 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (910)
Feb 13 19:42:42.421762 kernel: BTRFS info (device sda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:42:42.421883 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:42:42.421926 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:42:42.444291 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:42:42.444401 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:42:42.448492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:42:42.492423 ignition[927]: INFO : Ignition 2.20.0
Feb 13 19:42:42.492423 ignition[927]: INFO : Stage: files
Feb 13 19:42:42.507557 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:42.507557 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:42.507557 ignition[927]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:42:42.507557 ignition[927]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:42:42.507557 ignition[927]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:42:42.507557 ignition[927]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:42:42.507557 ignition[927]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:42:42.507557 ignition[927]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:42:42.505443 unknown[927]: wrote ssh authorized keys file for user: core
Feb 13 19:42:42.608502 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:42:42.608502 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 19:42:42.714980 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:42:43.034131 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 19:42:43.034131 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:42:43.067506 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 19:42:43.321967 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:42:43.489415 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:42:43.505534 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 19:42:43.743518 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:42:44.125125 ignition[927]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:42:44.125125 ignition[927]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:42:44.164546 ignition[927]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:42:44.164546 ignition[927]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:42:44.164546 ignition[927]: INFO : files: files passed
Feb 13 19:42:44.164546 ignition[927]: INFO : Ignition finished successfully
Feb 13 19:42:44.130291 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:42:44.149655 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:42:44.169595 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:42:44.226134 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:42:44.374554 initrd-setup-root-after-ignition[954]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:44.374554 initrd-setup-root-after-ignition[954]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:44.226332 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:42:44.413585 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:44.249890 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:42:44.250829 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:42:44.279564 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:42:44.410492 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:42:44.410646 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:42:44.424867 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:42:44.448807 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:42:44.459999 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:42:44.467600 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:42:44.525866 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:42:44.547546 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:42:44.598583 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:42:44.618775 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:42:44.639897 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:42:44.658880 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:42:44.659118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:42:44.686872 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:42:44.709881 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:42:44.727781 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:42:44.745787 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:42:44.766796 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:42:44.787896 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:42:44.807798 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:42:44.828825 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:42:44.849851 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:42:44.869868 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:42:44.888684 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:42:44.889005 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:42:44.914849 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:42:44.934891 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:42:44.955644 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:42:44.955931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:42:44.977763 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:42:44.978003 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:42:45.008849 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:42:45.009121 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:42:45.028820 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:42:45.029037 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:42:45.055677 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:42:45.063629 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:42:45.131544 ignition[979]: INFO : Ignition 2.20.0
Feb 13 19:42:45.131544 ignition[979]: INFO : Stage: umount
Feb 13 19:42:45.131544 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:45.131544 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:42:45.131544 ignition[979]: INFO : umount: umount passed
Feb 13 19:42:45.131544 ignition[979]: INFO : Ignition finished successfully
Feb 13 19:42:45.085775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:42:45.086036 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:42:45.122769 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:42:45.122991 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:42:45.156691 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:42:45.157860 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:42:45.157993 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:42:45.160447 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:42:45.160583 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:42:45.195354 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:42:45.195559 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:42:45.202831 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:42:45.202920 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:42:45.228759 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:42:45.228843 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:42:45.248745 systemd[1]: Stopped target network.target - Network.
Feb 13 19:42:45.258809 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:42:45.258898 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:42:45.273850 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:42:45.291832 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:42:45.293477 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:42:45.325681 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:42:45.343701 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:42:45.352853 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:42:45.352926 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:42:45.368797 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:42:45.368868 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:42:45.385824 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:42:45.385909 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:42:45.419759 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:42:45.419848 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:42:45.428875 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:42:45.428958 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:42:45.463046 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:42:45.467439 systemd-networkd[749]: eth0: DHCPv6 lease lost
Feb 13 19:42:45.480785 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:42:45.502531 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:42:45.502695 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:42:45.523463 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:42:45.523811 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:42:45.541125 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:42:45.541263 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:42:45.566670 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:42:45.566733 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:42:45.598488 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:42:45.609509 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:42:45.609659 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:42:45.620601 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:42:45.620720 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:42:45.630730 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:42:45.630818 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:42:45.638806 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:42:45.638892 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:42:46.111466 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:42:45.666868 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:42:45.687163 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:42:45.687382 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:42:45.714765 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:42:45.714917 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:42:45.734606 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:42:45.734695 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:42:45.754557 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:42:45.754688 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:42:45.784546 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:42:45.784696 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:42:45.814569 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:42:45.814721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:42:45.851828 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:42:45.890472 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:42:45.890619 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:42:45.908740 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:42:45.908848 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:42:45.930651 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:42:45.930754 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:42:45.949597 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:42:45.949740 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:42:45.971177 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:42:45.971343 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:42:45.991029 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:42:45.991170 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:42:46.013019 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:42:46.041582 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:42:46.058816 systemd[1]: Switching root.
Feb 13 19:42:46.405543 systemd-journald[184]: Journal stopped
Feb 13 19:42:49.072517 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:42:49.072589 kernel: SELinux: policy capability open_perms=1
Feb 13 19:42:49.072617 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:42:49.072639 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:42:49.072660 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:42:49.072682 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:42:49.072710 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:42:49.072734 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:42:49.072762 kernel: audit: type=1403 audit(1739475766.837:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:42:49.072791 systemd[1]: Successfully loaded SELinux policy in 86.323ms.
Feb 13 19:42:49.072824 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.785ms.
Feb 13 19:42:49.072851 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:42:49.072877 systemd[1]: Detected virtualization google.
Feb 13 19:42:49.072904 systemd[1]: Detected architecture x86-64.
Feb 13 19:42:49.072936 systemd[1]: Detected first boot.
Feb 13 19:42:49.072964 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:42:49.072992 zram_generator::config[1020]: No configuration found.
Feb 13 19:42:49.073021 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:42:49.073046 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:42:49.073077 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:42:49.073104 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:42:49.073134 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:42:49.073161 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:42:49.073189 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:42:49.073216 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:42:49.073243 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:42:49.073273 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:42:49.073300 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:42:49.073347 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:42:49.073375 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:42:49.073403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:42:49.073432 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:42:49.073460 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:42:49.073489 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:42:49.073523 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:42:49.073550 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:42:49.073575 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:42:49.073603 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:42:49.073631 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:42:49.073660 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:42:49.073695 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:42:49.073722 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:42:49.073750 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:42:49.073783 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:42:49.073809 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:42:49.073838 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:42:49.073866 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:42:49.073895 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:42:49.073924 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:42:49.073953 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:42:49.073989 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:42:49.074017 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:42:49.074056 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:42:49.074081 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:42:49.074107 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:42:49.074144 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:42:49.074173 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:42:49.074203 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:42:49.074234 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:42:49.074266 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:42:49.074299 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:42:49.074353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:42:49.074385 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:42:49.074420 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:42:49.074455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:42:49.074489 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:42:49.074522 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:42:49.074553 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:42:49.074579 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:42:49.074605 kernel: fuse: init (API version 7.39)
Feb 13 19:42:49.074634 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:42:49.074678 kernel: loop: module loaded
Feb 13 19:42:49.074707 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:42:49.074745 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:42:49.074779 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:42:49.074814 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:42:49.074847 kernel: ACPI: bus type drm_connector registered
Feb 13 19:42:49.074870 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:42:49.074897 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:42:49.074921 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:42:49.075004 systemd-journald[1107]: Collecting audit messages is disabled.
Feb 13 19:42:49.075207 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:42:49.075226 systemd-journald[1107]: Journal started
Feb 13 19:42:49.075261 systemd-journald[1107]: Runtime Journal (/run/log/journal/971f013442b349e18b498580af860830) is 8.0M, max 148.6M, 140.6M free.
Feb 13 19:42:47.823034 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:42:47.847813 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 19:42:47.848503 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:42:49.110879 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:42:49.110997 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:42:49.111036 systemd[1]: Stopped verity-setup.service.
Feb 13 19:42:49.150517 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:42:49.162369 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:42:49.173186 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:42:49.183787 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:42:49.193809 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:42:49.203801 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:42:49.214899 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:42:49.225816 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:42:49.237046 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:42:49.249098 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:42:49.262143 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:42:49.262465 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:42:49.274107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:42:49.274414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:42:49.286018 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:42:49.286289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:42:49.296975 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:42:49.297233 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:42:49.308969 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:42:49.309253 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:42:49.319976 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:42:49.320242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:42:49.330970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:42:49.340953 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:42:49.352959 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:42:49.364936 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:42:49.392583 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:42:49.409489 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:42:49.432522 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:42:49.442566 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:42:49.442881 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:42:49.455085 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:42:49.472642 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:42:49.491519 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:42:49.501769 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:42:49.511987 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:42:49.528136 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:42:49.539581 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:42:49.545902 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:42:49.555533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:42:49.560912 systemd-journald[1107]: Time spent on flushing to /var/log/journal/971f013442b349e18b498580af860830 is 138.161ms for 935 entries. Feb 13 19:42:49.560912 systemd-journald[1107]: System Journal (/var/log/journal/971f013442b349e18b498580af860830) is 8.0M, max 584.8M, 576.8M free. Feb 13 19:42:49.741248 systemd-journald[1107]: Received client request to flush runtime journal. Feb 13 19:42:49.743690 kernel: loop0: detected capacity change from 0 to 141000 Feb 13 19:42:49.569943 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:42:49.589639 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:42:49.615076 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:42:49.640689 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:42:49.661088 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:42:49.671402 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:42:49.672061 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:42:49.701142 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:42:49.720249 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:42:49.741638 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:42:49.755110 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:42:49.767381 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:42:49.787255 systemd-tmpfiles[1139]: ACLs are not supported, ignoring. Feb 13 19:42:49.792631 systemd-tmpfiles[1139]: ACLs are not supported, ignoring. Feb 13 19:42:49.796460 udevadm[1140]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:42:49.818207 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:42:49.840719 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:42:49.846206 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:42:49.857999 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:42:49.859567 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:42:49.877067 kernel: loop1: detected capacity change from 0 to 205544 Feb 13 19:42:49.941904 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:42:49.964289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:42:50.024107 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Feb 13 19:42:50.024143 systemd-tmpfiles[1159]: ACLs are not supported, ignoring. Feb 13 19:42:50.035666 kernel: loop2: detected capacity change from 0 to 138184 Feb 13 19:42:50.039037 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Feb 13 19:42:50.147405 kernel: loop3: detected capacity change from 0 to 52184 Feb 13 19:42:50.225020 kernel: loop4: detected capacity change from 0 to 141000 Feb 13 19:42:50.297350 kernel: loop5: detected capacity change from 0 to 205544 Feb 13 19:42:50.353348 kernel: loop6: detected capacity change from 0 to 138184 Feb 13 19:42:50.420385 kernel: loop7: detected capacity change from 0 to 52184 Feb 13 19:42:50.462877 (sd-merge)[1165]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Feb 13 19:42:50.464410 (sd-merge)[1165]: Merged extensions into '/usr'. Feb 13 19:42:50.480235 systemd[1]: Reloading requested from client PID 1138 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:42:50.480519 systemd[1]: Reloading... Feb 13 19:42:50.663661 zram_generator::config[1187]: No configuration found. Feb 13 19:42:50.824872 ldconfig[1133]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:42:50.965448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:42:51.088125 systemd[1]: Reloading finished in 606 ms. Feb 13 19:42:51.127848 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:42:51.138187 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:42:51.164724 systemd[1]: Starting ensure-sysext.service... Feb 13 19:42:51.179694 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:42:51.207019 systemd[1]: Reloading requested from client PID 1231 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:42:51.207049 systemd[1]: Reloading... Feb 13 19:42:51.263448 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Feb 13 19:42:51.266071 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:42:51.269510 systemd-tmpfiles[1232]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:42:51.270912 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Feb 13 19:42:51.271215 systemd-tmpfiles[1232]: ACLs are not supported, ignoring. Feb 13 19:42:51.286085 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:42:51.286108 systemd-tmpfiles[1232]: Skipping /boot Feb 13 19:42:51.342831 systemd-tmpfiles[1232]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:42:51.344381 systemd-tmpfiles[1232]: Skipping /boot Feb 13 19:42:51.378359 zram_generator::config[1255]: No configuration found. Feb 13 19:42:51.557461 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:42:51.627785 systemd[1]: Reloading finished in 419 ms. Feb 13 19:42:51.651535 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:42:51.666983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:42:51.696634 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:42:51.713969 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:42:51.733238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:42:51.754247 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:42:51.774134 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Feb 13 19:42:51.795614 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:42:51.800778 augenrules[1326]: No rules Feb 13 19:42:51.808162 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:42:51.808517 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:42:51.841492 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:42:51.857153 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:42:51.878993 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Feb 13 19:42:51.887915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:42:51.888546 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:42:51.900840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:42:51.920458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:42:51.941550 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:42:51.951622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:42:51.962480 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:42:51.972465 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:42:51.983730 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:42:51.996415 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:42:52.008067 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:42:52.022672 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:42:52.035377 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:42:52.035661 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:42:52.048131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:42:52.048772 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:42:52.061686 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:42:52.062653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:42:52.073219 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:42:52.147130 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:42:52.158816 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:42:52.163411 systemd-resolved[1320]: Positive Trust Anchors: Feb 13 19:42:52.163941 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:42:52.164136 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:42:52.167869 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 19:42:52.181485 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:42:52.197761 systemd-resolved[1320]: Defaulting to hostname 'linux'. Feb 13 19:42:52.201738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:42:52.222503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:42:52.238721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:42:52.255796 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:42:52.264616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:42:52.275681 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:42:52.285945 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:42:52.291783 augenrules[1368]: /sbin/augenrules: No change Feb 13 19:42:52.295768 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:42:52.296642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:42:52.302188 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:42:52.314297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:42:52.315521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:42:52.319470 augenrules[1399]: No rules Feb 13 19:42:52.327588 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:42:52.329462 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:42:52.340536 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 19:42:52.342406 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:42:52.353925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:42:52.355717 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:42:52.369408 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:42:52.369691 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:42:52.398433 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:42:52.402075 systemd[1]: Finished ensure-sysext.service. Feb 13 19:42:52.430396 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:42:52.440331 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:42:52.459616 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:42:52.466605 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:42:52.471371 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 13 19:42:52.503656 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 19:42:52.487626 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Feb 13 19:42:52.505483 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:42:52.505614 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:42:52.519358 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Feb 13 19:42:52.614079 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 19:42:52.620980 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. 
Feb 13 19:42:52.626494 systemd-networkd[1388]: lo: Link UP Feb 13 19:42:52.626509 systemd-networkd[1388]: lo: Gained carrier Feb 13 19:42:52.635112 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:42:52.633257 systemd-networkd[1388]: Enumeration completed Feb 13 19:42:52.634017 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:42:52.634026 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:42:52.635703 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:42:52.638487 systemd-networkd[1388]: eth0: Link UP Feb 13 19:42:52.639393 systemd-networkd[1388]: eth0: Gained carrier Feb 13 19:42:52.639433 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:42:52.654380 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1349) Feb 13 19:42:52.656620 systemd[1]: Reached target network.target - Network. Feb 13 19:42:52.660401 systemd-networkd[1388]: eth0: DHCPv4 address 10.128.0.110/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 19:42:52.675050 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:42:52.681338 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:42:52.744474 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:52.758656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 19:42:52.779718 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:42:52.792157 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Feb 13 19:42:52.799627 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:42:52.824631 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:42:52.835466 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:42:52.867026 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:42:52.868362 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:42:52.877577 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:42:52.894343 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:42:52.909391 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:52.920961 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:42:52.930628 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:42:52.941533 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:42:52.952725 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:42:52.962649 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:42:52.974523 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:42:52.985487 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:42:52.985565 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:42:52.994496 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:42:53.006032 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Feb 13 19:42:53.018575 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:42:53.041612 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:42:53.052707 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:42:53.064806 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:42:53.075526 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:42:53.085479 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:42:53.094577 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:42:53.094646 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:42:53.100504 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:42:53.124378 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:42:53.141694 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:42:53.166441 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:42:53.188616 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:42:53.195258 jq[1451]: false Feb 13 19:42:53.198514 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:42:53.210595 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:42:53.229600 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 19:42:53.243469 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:42:53.261711 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Feb 13 19:42:53.280602 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:42:53.292784 coreos-metadata[1449]: Feb 13 19:42:53.292 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 13 19:42:53.296061 coreos-metadata[1449]: Feb 13 19:42:53.295 INFO Fetch successful Feb 13 19:42:53.298589 coreos-metadata[1449]: Feb 13 19:42:53.296 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 13 19:42:53.299462 coreos-metadata[1449]: Feb 13 19:42:53.299 INFO Fetch successful Feb 13 19:42:53.305992 coreos-metadata[1449]: Feb 13 19:42:53.301 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 13 19:42:53.305992 coreos-metadata[1449]: Feb 13 19:42:53.301 INFO Fetch successful Feb 13 19:42:53.305992 coreos-metadata[1449]: Feb 13 19:42:53.303 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 13 19:42:53.306344 extend-filesystems[1452]: Found loop4 Feb 13 19:42:53.306344 extend-filesystems[1452]: Found loop5 Feb 13 19:42:53.306344 extend-filesystems[1452]: Found loop6 Feb 13 19:42:53.306344 extend-filesystems[1452]: Found loop7 Feb 13 19:42:53.306344 extend-filesystems[1452]: Found sda Feb 13 19:42:53.303570 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 19:42:53.356555 ntpd[1456]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:06:12 UTC 2025 (1): Starting Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:06:12 UTC 2025 (1): Starting Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: ---------------------------------------------------- Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: corporation. Support and training for ntp-4 are Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: available at https://www.nwtime.org/support Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: ---------------------------------------------------- Feb 13 19:42:53.361665 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: proto: precision = 0.080 usec (-23) Feb 13 19:42:53.379010 coreos-metadata[1449]: Feb 13 19:42:53.308 INFO Fetch successful Feb 13 19:42:53.379194 extend-filesystems[1452]: Found sda1 Feb 13 19:42:53.379194 extend-filesystems[1452]: Found sda2 Feb 13 19:42:53.379194 extend-filesystems[1452]: Found sda3 Feb 13 19:42:53.379194 extend-filesystems[1452]: Found usr Feb 13 19:42:53.379194 extend-filesystems[1452]: Found sda4 Feb 13 19:42:53.379194 extend-filesystems[1452]: Found sda6 Feb 13 19:42:53.379194 extend-filesystems[1452]: Found sda7 Feb 13 19:42:53.379194 extend-filesystems[1452]: Found sda9 Feb 13 19:42:53.379194 extend-filesystems[1452]: Checking size of /dev/sda9 Feb 13 19:42:53.524505 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 13 19:42:53.524562 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 13 19:42:53.314089 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Feb 13 19:42:53.356591 ntpd[1456]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: basedate set to 2025-02-01 Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: gps base set to 2025-02-02 (week 2352) Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: Listen normally on 3 eth0 10.128.0.110:123 Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: Listen normally on 4 lo [::1]:123 Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: bind(21) AF_INET6 fe80::4001:aff:fe80:6e%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:6e%2#123 Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: failed to init interface for address fe80::4001:aff:fe80:6e%2 Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: Listening on routing socket on fd #21 for interface updates Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:42:53.525106 ntpd[1456]: 13 Feb 19:42:53 ntpd[1456]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:42:53.525835 extend-filesystems[1452]: Resized partition /dev/sda9 Feb 13 19:42:53.538566 update_engine[1469]: I20250213 19:42:53.472636 1469 main.cc:92] Flatcar Update Engine starting Feb 13 19:42:53.538566 update_engine[1469]: I20250213 19:42:53.481207 1469 update_check_scheduler.cc:74] Next update check in 10m48s Feb 13 19:42:53.315021 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:42:53.356610 ntpd[1456]: ---------------------------------------------------- Feb 13 19:42:53.539365 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:42:53.539365 extend-filesystems[1481]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 19:42:53.539365 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 13 19:42:53.539365 extend-filesystems[1481]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 13 19:42:53.322603 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:42:53.356626 ntpd[1456]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:42:53.583261 extend-filesystems[1452]: Resized filesystem in /dev/sda9 Feb 13 19:42:53.355238 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:42:53.356643 ntpd[1456]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:42:53.592590 jq[1473]: true Feb 13 19:42:53.374045 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:42:53.356659 ntpd[1456]: corporation. Support and training for ntp-4 are Feb 13 19:42:53.411635 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:42:53.356675 ntpd[1456]: available at https://www.nwtime.org/support Feb 13 19:42:53.412417 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:42:53.356776 ntpd[1456]: ---------------------------------------------------- Feb 13 19:42:53.595855 jq[1486]: true Feb 13 19:42:53.414213 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:42:53.359744 ntpd[1456]: proto: precision = 0.080 usec (-23) Feb 13 19:42:53.414541 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:42:53.362303 ntpd[1456]: basedate set to 2025-02-01 Feb 13 19:42:53.456913 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:42:53.363374 ntpd[1456]: gps base set to 2025-02-02 (week 2352) Feb 13 19:42:53.457633 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:42:53.371518 dbus-daemon[1450]: [system] SELinux support is enabled Feb 13 19:42:53.495049 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:42:53.372770 ntpd[1456]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:42:53.496327 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:42:53.372846 ntpd[1456]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:42:53.569719 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:42:53.374537 ntpd[1456]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:42:53.592020 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:42:53.374611 ntpd[1456]: Listen normally on 3 eth0 10.128.0.110:123 Feb 13 19:42:53.592074 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:42:53.374683 ntpd[1456]: Listen normally on 4 lo [::1]:123 Feb 13 19:42:53.608215 (ntainerd)[1487]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:42:53.374767 ntpd[1456]: bind(21) AF_INET6 fe80::4001:aff:fe80:6e%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:42:53.613593 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 19:42:53.374809 ntpd[1456]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:6e%2#123 Feb 13 19:42:53.613631 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:42:53.374837 ntpd[1456]: failed to init interface for address fe80::4001:aff:fe80:6e%2 Feb 13 19:42:53.374893 ntpd[1456]: Listening on routing socket on fd #21 for interface updates Feb 13 19:42:53.377910 ntpd[1456]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:42:53.377958 ntpd[1456]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:42:53.410990 dbus-daemon[1450]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1388 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:42:53.576921 dbus-daemon[1450]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:42:53.625520 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:42:53.652623 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:42:53.671360 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1353) Feb 13 19:42:53.693131 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:42:53.705577 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:42:53.724015 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 19:42:53.740343 tar[1483]: linux-amd64/helm Feb 13 19:42:53.751802 systemd-logind[1467]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:42:53.751859 systemd-logind[1467]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 13 19:42:53.751899 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:42:53.763227 systemd-logind[1467]: New seat seat0. Feb 13 19:42:53.775117 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:42:53.862584 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:42:53.866387 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:42:53.891814 systemd[1]: Starting sshkeys.service... Feb 13 19:42:53.952074 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:42:53.992834 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Feb 13 19:42:54.002688 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:42:54.102894 coreos-metadata[1526]: Feb 13 19:42:54.101 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Feb 13 19:42:54.105398 coreos-metadata[1526]: Feb 13 19:42:54.105 INFO Fetch failed with 404: resource not found
Feb 13 19:42:54.105398 coreos-metadata[1526]: Feb 13 19:42:54.105 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Feb 13 19:42:54.109332 coreos-metadata[1526]: Feb 13 19:42:54.105 INFO Fetch successful
Feb 13 19:42:54.109332 coreos-metadata[1526]: Feb 13 19:42:54.105 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Feb 13 19:42:54.109332 coreos-metadata[1526]: Feb 13 19:42:54.105 INFO Fetch failed with 404: resource not found
Feb 13 19:42:54.109332 coreos-metadata[1526]: Feb 13 19:42:54.106 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Feb 13 19:42:54.109332 coreos-metadata[1526]: Feb 13 19:42:54.106 INFO Fetch failed with 404: resource not found
Feb 13 19:42:54.109332 coreos-metadata[1526]: Feb 13 19:42:54.106 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Feb 13 19:42:54.109332 coreos-metadata[1526]: Feb 13 19:42:54.107 INFO Fetch successful
Feb 13 19:42:54.109282 unknown[1526]: wrote ssh authorized keys file for user: core
Feb 13 19:42:54.178195 update-ssh-keys[1533]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:42:54.178043 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 19:42:54.196982 systemd[1]: Finished sshkeys.service.
Feb 13 19:42:54.200790 dbus-daemon[1450]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 19:42:54.201672 dbus-daemon[1450]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1512 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 19:42:54.206758 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 19:42:54.231673 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 19:42:54.348511 polkitd[1539]: Started polkitd version 121
Feb 13 19:42:54.357255 ntpd[1456]: bind(24) AF_INET6 fe80::4001:aff:fe80:6e%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:42:54.357739 ntpd[1456]: 13 Feb 19:42:54 ntpd[1456]: bind(24) AF_INET6 fe80::4001:aff:fe80:6e%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:42:54.357303 ntpd[1456]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:6e%2#123
Feb 13 19:42:54.358032 ntpd[1456]: 13 Feb 19:42:54 ntpd[1456]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:6e%2#123
Feb 13 19:42:54.358032 ntpd[1456]: 13 Feb 19:42:54 ntpd[1456]: failed to init interface for address fe80::4001:aff:fe80:6e%2
Feb 13 19:42:54.357892 ntpd[1456]: failed to init interface for address fe80::4001:aff:fe80:6e%2
Feb 13 19:42:54.371463 polkitd[1539]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 19:42:54.372533 systemd-networkd[1388]: eth0: Gained IPv6LL
Feb 13 19:42:54.375005 polkitd[1539]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 19:42:54.381932 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 19:42:54.382654 polkitd[1539]: Finished loading, compiling and executing 2 rules
Feb 13 19:42:54.387657 dbus-daemon[1450]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 19:42:54.391775 polkitd[1539]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 19:42:54.394171 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 19:42:54.405143 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 19:42:54.422417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:42:54.442262 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:42:54.442558 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:42:54.463479 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Feb 13 19:42:54.515761 systemd-hostnamed[1512]: Hostname set to (transient)
Feb 13 19:42:54.517967 systemd-resolved[1320]: System hostname changed to 'ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal'.
Feb 13 19:42:54.531272 init.sh[1551]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Feb 13 19:42:54.531904 init.sh[1551]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Feb 13 19:42:54.533761 init.sh[1551]: + /usr/bin/google_instance_setup
Feb 13 19:42:54.585755 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:42:54.593626 containerd[1487]: time="2025-02-13T19:42:54.593491971Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:42:54.596494 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:42:54.625886 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:42:54.645888 systemd[1]: Started sshd@0-10.128.0.110:22-139.178.68.195:39332.service - OpenSSH per-connection server daemon (139.178.68.195:39332).
Feb 13 19:42:54.698994 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:42:54.699392 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:42:54.719767 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:42:54.743475 containerd[1487]: time="2025-02-13T19:42:54.743029319Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:54.753698 containerd[1487]: time="2025-02-13T19:42:54.750656756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:54.753698 containerd[1487]: time="2025-02-13T19:42:54.750736189Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:42:54.753698 containerd[1487]: time="2025-02-13T19:42:54.750924245Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:42:54.753698 containerd[1487]: time="2025-02-13T19:42:54.752798858Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:42:54.753698 containerd[1487]: time="2025-02-13T19:42:54.752884970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:54.757930 containerd[1487]: time="2025-02-13T19:42:54.754157941Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:54.757930 containerd[1487]: time="2025-02-13T19:42:54.754232835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:54.757930 containerd[1487]: time="2025-02-13T19:42:54.757782081Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:54.757930 containerd[1487]: time="2025-02-13T19:42:54.757840751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:54.757930 containerd[1487]: time="2025-02-13T19:42:54.757880331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:54.758472 containerd[1487]: time="2025-02-13T19:42:54.757900560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:54.760130 containerd[1487]: time="2025-02-13T19:42:54.758748135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:54.764986 containerd[1487]: time="2025-02-13T19:42:54.764291569Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:54.764986 containerd[1487]: time="2025-02-13T19:42:54.764743470Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:54.764986 containerd[1487]: time="2025-02-13T19:42:54.764774325Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:42:54.767188 containerd[1487]: time="2025-02-13T19:42:54.765465117Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:42:54.767188 containerd[1487]: time="2025-02-13T19:42:54.767070923Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:42:54.779452 containerd[1487]: time="2025-02-13T19:42:54.778406142Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:42:54.779452 containerd[1487]: time="2025-02-13T19:42:54.778527942Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:42:54.779452 containerd[1487]: time="2025-02-13T19:42:54.778598557Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:42:54.779452 containerd[1487]: time="2025-02-13T19:42:54.778744909Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:42:54.779452 containerd[1487]: time="2025-02-13T19:42:54.778779640Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:42:54.779452 containerd[1487]: time="2025-02-13T19:42:54.779034341Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.783714689Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784001865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784032205Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784060270Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784085556Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784108585Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784130251Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784154246Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784177812Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784200363Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784224289Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784245276Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:42:54.784990 containerd[1487]: time="2025-02-13T19:42:54.784278834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.784705 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.786592629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.786671158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.786703812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.786726942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787485451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787529098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787555186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787579388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787609029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787630399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787656083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787681019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787720216Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:42:54.788933 containerd[1487]: time="2025-02-13T19:42:54.787765278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789217491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789258296Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789385316Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789506654Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789533489Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789561695Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789581544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789606306Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789628754Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:42:54.790948 containerd[1487]: time="2025-02-13T19:42:54.789649653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:42:54.794008 containerd[1487]: time="2025-02-13T19:42:54.790224855Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:42:54.794008 containerd[1487]: time="2025-02-13T19:42:54.793407972Z" level=info msg="Connect containerd service"
Feb 13 19:42:54.794008 containerd[1487]: time="2025-02-13T19:42:54.793492069Z" level=info msg="using legacy CRI server"
Feb 13 19:42:54.794008 containerd[1487]: time="2025-02-13T19:42:54.793506452Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:42:54.794008 containerd[1487]: time="2025-02-13T19:42:54.793790046Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.806968293Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807387774Z" level=info msg="Start subscribing containerd event"
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807473637Z" level=info msg="Start recovering state"
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807580890Z" level=info msg="Start event monitor"
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807599888Z" level=info msg="Start snapshots syncer"
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807623180Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807591440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807637643Z" level=info msg="Start streaming server"
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807726054Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:42:54.809692 containerd[1487]: time="2025-02-13T19:42:54.807808714Z" level=info msg="containerd successfully booted in 0.216514s"
Feb 13 19:42:54.809559 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:42:54.827990 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:42:54.838170 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:42:54.847653 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:42:55.176353 sshd[1572]: Accepted publickey for core from 139.178.68.195 port 39332 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:42:55.176034 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:55.217051 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:42:55.240513 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:42:55.260095 systemd-logind[1467]: New session 1 of user core.
Feb 13 19:42:55.292370 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:42:55.309672 tar[1483]: linux-amd64/LICENSE
Feb 13 19:42:55.309672 tar[1483]: linux-amd64/README.md
Feb 13 19:42:55.315889 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:42:55.363438 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Feb 13 19:42:55.367563 (systemd)[1586]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:42:55.600178 instance-setup[1561]: INFO Running google_set_multiqueue.
Feb 13 19:42:55.643237 instance-setup[1561]: INFO Set channels for eth0 to 2.
Feb 13 19:42:55.655570 instance-setup[1561]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Feb 13 19:42:55.659465 instance-setup[1561]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Feb 13 19:42:55.659566 instance-setup[1561]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Feb 13 19:42:55.662387 instance-setup[1561]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Feb 13 19:42:55.662456 instance-setup[1561]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Feb 13 19:42:55.664584 instance-setup[1561]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Feb 13 19:42:55.665500 instance-setup[1561]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Feb 13 19:42:55.669175 instance-setup[1561]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Feb 13 19:42:55.686438 instance-setup[1561]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Feb 13 19:42:55.687135 systemd[1586]: Queued start job for default target default.target.
Feb 13 19:42:55.691941 instance-setup[1561]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Feb 13 19:42:55.694252 instance-setup[1561]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Feb 13 19:42:55.694338 instance-setup[1561]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Feb 13 19:42:55.696107 systemd[1586]: Created slice app.slice - User Application Slice.
Feb 13 19:42:55.696163 systemd[1586]: Reached target paths.target - Paths.
Feb 13 19:42:55.696194 systemd[1586]: Reached target timers.target - Timers.
Feb 13 19:42:55.706535 systemd[1586]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:42:55.731969 init.sh[1551]: + /usr/bin/google_metadata_script_runner --script-type startup
Feb 13 19:42:55.740062 systemd[1586]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:42:55.740345 systemd[1586]: Reached target sockets.target - Sockets.
Feb 13 19:42:55.740386 systemd[1586]: Reached target basic.target - Basic System.
Feb 13 19:42:55.740484 systemd[1586]: Reached target default.target - Main User Target.
Feb 13 19:42:55.740554 systemd[1586]: Startup finished in 352ms.
Feb 13 19:42:55.740566 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:42:55.756751 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:42:55.960058 startup-script[1624]: INFO Starting startup scripts.
Feb 13 19:42:55.975591 startup-script[1624]: INFO No startup scripts found in metadata.
Feb 13 19:42:55.975679 startup-script[1624]: INFO Finished running startup scripts.
Feb 13 19:42:56.015722 systemd[1]: Started sshd@1-10.128.0.110:22-139.178.68.195:39348.service - OpenSSH per-connection server daemon (139.178.68.195:39348).
Feb 13 19:42:56.027132 init.sh[1551]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Feb 13 19:42:56.027132 init.sh[1551]: + daemon_pids=()
Feb 13 19:42:56.027132 init.sh[1551]: + for d in accounts clock_skew network
Feb 13 19:42:56.027544 init.sh[1551]: + daemon_pids+=($!)
Feb 13 19:42:56.027697 init.sh[1551]: + for d in accounts clock_skew network
Feb 13 19:42:56.030667 init.sh[1551]: + daemon_pids+=($!)
Feb 13 19:42:56.030667 init.sh[1551]: + for d in accounts clock_skew network
Feb 13 19:42:56.030667 init.sh[1551]: + daemon_pids+=($!)
Feb 13 19:42:56.030667 init.sh[1551]: + NOTIFY_SOCKET=/run/systemd/notify
Feb 13 19:42:56.030667 init.sh[1551]: + /usr/bin/systemd-notify --ready
Feb 13 19:42:56.031381 init.sh[1633]: + /usr/bin/google_clock_skew_daemon
Feb 13 19:42:56.039677 init.sh[1634]: + /usr/bin/google_network_daemon
Feb 13 19:42:56.047536 init.sh[1632]: + /usr/bin/google_accounts_daemon
Feb 13 19:42:56.095632 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Feb 13 19:42:56.119347 init.sh[1551]: + wait -n 1632 1633 1634
Feb 13 19:42:56.420531 sshd[1631]: Accepted publickey for core from 139.178.68.195 port 39348 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:42:56.422056 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:56.442971 systemd-logind[1467]: New session 2 of user core.
Feb 13 19:42:56.446627 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:42:56.604590 google-clock-skew[1633]: INFO Starting Google Clock Skew daemon.
Feb 13 19:42:56.635047 google-clock-skew[1633]: INFO Clock drift token has changed: 0.
Feb 13 19:42:56.643564 google-networking[1634]: INFO Starting Google Networking daemon.
Feb 13 19:42:56.658329 sshd[1643]: Connection closed by 139.178.68.195 port 39348
Feb 13 19:42:56.659143 sshd-session[1631]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:56.673851 systemd[1]: sshd@1-10.128.0.110:22-139.178.68.195:39348.service: Deactivated successfully.
Feb 13 19:42:56.679267 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 19:42:56.681065 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit.
Feb 13 19:42:56.683253 systemd-logind[1467]: Removed session 2.
Feb 13 19:42:56.692679 groupadd[1646]: group added to /etc/group: name=google-sudoers, GID=1000
Feb 13 19:42:56.696755 groupadd[1646]: group added to /etc/gshadow: name=google-sudoers
Feb 13 19:42:56.719478 systemd[1]: Started sshd@2-10.128.0.110:22-139.178.68.195:43594.service - OpenSSH per-connection server daemon (139.178.68.195:43594).
Feb 13 19:42:56.773696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:42:56.778910 groupadd[1646]: new group: name=google-sudoers, GID=1000
Feb 13 19:42:56.786495 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:42:56.797791 systemd[1]: Startup finished in 1.157s (kernel) + 10.044s (initrd) + 10.044s (userspace) = 21.246s.
Feb 13 19:42:56.806004 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:42:56.822025 google-accounts[1632]: INFO Starting Google Accounts daemon.
Feb 13 19:42:56.836128 agetty[1580]: failed to open credentials directory
Feb 13 19:42:56.836181 agetty[1581]: failed to open credentials directory
Feb 13 19:42:56.848262 google-accounts[1632]: WARNING OS Login not installed.
Feb 13 19:42:56.850155 google-accounts[1632]: INFO Creating a new user account for 0.
Feb 13 19:42:56.856096 init.sh[1670]: useradd: invalid user name '0': use --badname to ignore
Feb 13 19:42:56.856961 google-accounts[1632]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Feb 13 19:42:57.058536 sshd[1656]: Accepted publickey for core from 139.178.68.195 port 43594 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:42:57.061131 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:57.069889 systemd-logind[1467]: New session 3 of user core.
Feb 13 19:42:57.079711 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:42:57.000568 google-clock-skew[1633]: INFO Synced system time with hardware clock.
Feb 13 19:42:57.019138 systemd-journald[1107]: Time jumped backwards, rotating.
Feb 13 19:42:57.003103 systemd-resolved[1320]: Clock change detected. Flushing caches.
Feb 13 19:42:57.081120 sshd[1676]: Connection closed by 139.178.68.195 port 43594
Feb 13 19:42:57.082064 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:57.089953 systemd[1]: sshd@2-10.128.0.110:22-139.178.68.195:43594.service: Deactivated successfully.
Feb 13 19:42:57.094072 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 19:42:57.095818 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit.
Feb 13 19:42:57.097953 systemd-logind[1467]: Removed session 3.
Feb 13 19:42:57.157591 ntpd[1456]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:6e%2]:123
Feb 13 19:42:57.158322 ntpd[1456]: 13 Feb 19:42:57 ntpd[1456]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:6e%2]:123
Feb 13 19:42:57.522445 kubelet[1660]: E0213 19:42:57.522359 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:42:57.524838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:42:57.525100 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:42:57.525625 systemd[1]: kubelet.service: Consumed 1.230s CPU time.
Feb 13 19:43:07.141015 systemd[1]: Started sshd@3-10.128.0.110:22-139.178.68.195:50396.service - OpenSSH per-connection server daemon (139.178.68.195:50396).
Feb 13 19:43:07.438669 sshd[1684]: Accepted publickey for core from 139.178.68.195 port 50396 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:43:07.440877 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:07.449253 systemd-logind[1467]: New session 4 of user core.
Feb 13 19:43:07.458949 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:43:07.611133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:43:07.618174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:43:07.652769 sshd[1686]: Connection closed by 139.178.68.195 port 50396
Feb 13 19:43:07.655372 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:07.662560 systemd[1]: sshd@3-10.128.0.110:22-139.178.68.195:50396.service: Deactivated successfully.
Feb 13 19:43:07.665303 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:43:07.668138 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:43:07.669826 systemd-logind[1467]: Removed session 4.
Feb 13 19:43:07.714937 systemd[1]: Started sshd@4-10.128.0.110:22-139.178.68.195:50408.service - OpenSSH per-connection server daemon (139.178.68.195:50408).
Feb 13 19:43:07.925010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:43:07.937306 (kubelet)[1700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:43:07.994825 kubelet[1700]: E0213 19:43:07.994167 1700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:43:07.998567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:43:07.998839 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:43:08.017977 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 50408 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:43:08.019978 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:08.026446 systemd-logind[1467]: New session 5 of user core.
Feb 13 19:43:08.033837 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:43:08.224433 sshd[1708]: Connection closed by 139.178.68.195 port 50408
Feb 13 19:43:08.225426 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:08.230664 systemd[1]: sshd@4-10.128.0.110:22-139.178.68.195:50408.service: Deactivated successfully.
Feb 13 19:43:08.233474 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:43:08.235863 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:43:08.237393 systemd-logind[1467]: Removed session 5.
Feb 13 19:43:08.285085 systemd[1]: Started sshd@5-10.128.0.110:22-139.178.68.195:50416.service - OpenSSH per-connection server daemon (139.178.68.195:50416).
Feb 13 19:43:08.576548 sshd[1713]: Accepted publickey for core from 139.178.68.195 port 50416 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:43:08.578509 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:08.586084 systemd-logind[1467]: New session 6 of user core.
Feb 13 19:43:08.595953 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:43:08.791941 sshd[1715]: Connection closed by 139.178.68.195 port 50416
Feb 13 19:43:08.792894 sshd-session[1713]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:08.798951 systemd[1]: sshd@5-10.128.0.110:22-139.178.68.195:50416.service: Deactivated successfully.
Feb 13 19:43:08.801773 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:43:08.802908 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:43:08.804407 systemd-logind[1467]: Removed session 6.
Feb 13 19:43:08.855095 systemd[1]: Started sshd@6-10.128.0.110:22-139.178.68.195:50428.service - OpenSSH per-connection server daemon (139.178.68.195:50428).
Feb 13 19:43:09.148023 sshd[1720]: Accepted publickey for core from 139.178.68.195 port 50428 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:43:09.150110 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:09.155946 systemd-logind[1467]: New session 7 of user core.
Feb 13 19:43:09.163818 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:43:09.348653 sudo[1723]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:43:09.349334 sudo[1723]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:43:09.378268 sudo[1723]: pam_unix(sudo:session): session closed for user root
Feb 13 19:43:09.421853 sshd[1722]: Connection closed by 139.178.68.195 port 50428
Feb 13 19:43:09.423862 sshd-session[1720]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:09.430293 systemd[1]: sshd@6-10.128.0.110:22-139.178.68.195:50428.service: Deactivated successfully.
Feb 13 19:43:09.433202 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:43:09.434365 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:43:09.436285 systemd-logind[1467]: Removed session 7.
Feb 13 19:43:09.478015 systemd[1]: Started sshd@7-10.128.0.110:22-139.178.68.195:50432.service - OpenSSH per-connection server daemon (139.178.68.195:50432).
Feb 13 19:43:09.781661 sshd[1728]: Accepted publickey for core from 139.178.68.195 port 50432 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:43:09.782912 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:09.789966 systemd-logind[1467]: New session 8 of user core.
Feb 13 19:43:09.796884 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:43:09.960357 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:43:09.961040 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:43:09.966673 sudo[1732]: pam_unix(sudo:session): session closed for user root
Feb 13 19:43:09.982136 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 19:43:09.982738 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:43:10.003155 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:43:10.057908 augenrules[1754]: No rules
Feb 13 19:43:10.060361 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:43:10.060720 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:43:10.062455 sudo[1731]: pam_unix(sudo:session): session closed for user root
Feb 13 19:43:10.105843 sshd[1730]: Connection closed by 139.178.68.195 port 50432
Feb 13 19:43:10.106800 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:10.111952 systemd[1]: sshd@7-10.128.0.110:22-139.178.68.195:50432.service: Deactivated successfully.
Feb 13 19:43:10.114751 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:43:10.117352 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:43:10.119044 systemd-logind[1467]: Removed session 8.
Feb 13 19:43:10.164070 systemd[1]: Started sshd@8-10.128.0.110:22-139.178.68.195:50438.service - OpenSSH per-connection server daemon (139.178.68.195:50438).
Feb 13 19:43:10.454949 sshd[1762]: Accepted publickey for core from 139.178.68.195 port 50438 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:43:10.457323 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:10.463492 systemd-logind[1467]: New session 9 of user core.
Feb 13 19:43:10.470907 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:43:10.635775 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:43:10.636352 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:43:11.101170 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 19:43:11.101343 (dockerd)[1782]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 19:43:11.524943 dockerd[1782]: time="2025-02-13T19:43:11.524748123Z" level=info msg="Starting up"
Feb 13 19:43:11.676823 dockerd[1782]: time="2025-02-13T19:43:11.676424069Z" level=info msg="Loading containers: start."
Feb 13 19:43:11.925978 kernel: Initializing XFRM netlink socket
Feb 13 19:43:12.057761 systemd-networkd[1388]: docker0: Link UP
Feb 13 19:43:12.101154 dockerd[1782]: time="2025-02-13T19:43:12.101081915Z" level=info msg="Loading containers: done."
Feb 13 19:43:12.127782 dockerd[1782]: time="2025-02-13T19:43:12.127713077Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 19:43:12.127985 dockerd[1782]: time="2025-02-13T19:43:12.127849260Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Feb 13 19:43:12.128050 dockerd[1782]: time="2025-02-13T19:43:12.128027914Z" level=info msg="Daemon has completed initialization"
Feb 13 19:43:12.172061 dockerd[1782]: time="2025-02-13T19:43:12.170935516Z" level=info msg="API listen on /run/docker.sock"
Feb 13 19:43:12.172715 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 19:43:13.101508 containerd[1487]: time="2025-02-13T19:43:13.101455455Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 19:43:13.581425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3861710896.mount: Deactivated successfully.
Feb 13 19:43:15.113916 containerd[1487]: time="2025-02-13T19:43:15.113832774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:15.115628 containerd[1487]: time="2025-02-13T19:43:15.115554054Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27983216"
Feb 13 19:43:15.117252 containerd[1487]: time="2025-02-13T19:43:15.117168535Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:15.121607 containerd[1487]: time="2025-02-13T19:43:15.121499678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:15.123771 containerd[1487]: time="2025-02-13T19:43:15.123150419Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 2.021623773s"
Feb 13 19:43:15.123771 containerd[1487]: time="2025-02-13T19:43:15.123209886Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\""
Feb 13 19:43:15.126485 containerd[1487]: time="2025-02-13T19:43:15.126393791Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 19:43:16.632303 containerd[1487]: time="2025-02-13T19:43:16.632225861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:16.634045 containerd[1487]: time="2025-02-13T19:43:16.633963414Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24710127"
Feb 13 19:43:16.635261 containerd[1487]: time="2025-02-13T19:43:16.635210722Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:16.641647 containerd[1487]: time="2025-02-13T19:43:16.641562539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:16.643498 containerd[1487]: time="2025-02-13T19:43:16.643032795Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.516572048s"
Feb 13 19:43:16.643498 containerd[1487]: time="2025-02-13T19:43:16.643090916Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\""
Feb 13 19:43:16.644073 containerd[1487]: time="2025-02-13T19:43:16.644029538Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 19:43:17.827368 containerd[1487]: time="2025-02-13T19:43:17.827296498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:17.829162 containerd[1487]: time="2025-02-13T19:43:17.829088075Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18654341"
Feb 13 19:43:17.830582 containerd[1487]: time="2025-02-13T19:43:17.830467946Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:17.835773 containerd[1487]: time="2025-02-13T19:43:17.835676318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:17.837633 containerd[1487]: time="2025-02-13T19:43:17.837423807Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.193238459s"
Feb 13 19:43:17.837633 containerd[1487]: time="2025-02-13T19:43:17.837469594Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\""
Feb 13 19:43:17.838733 containerd[1487]: time="2025-02-13T19:43:17.838576183Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 19:43:18.066506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:43:18.076570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:43:18.335801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:43:18.343666 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:43:18.408735 kubelet[2041]: E0213 19:43:18.408677 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:43:18.412522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:43:18.412859 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:43:19.411259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350576835.mount: Deactivated successfully.
Feb 13 19:43:20.064137 containerd[1487]: time="2025-02-13T19:43:20.064034171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:20.065870 containerd[1487]: time="2025-02-13T19:43:20.065793882Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30231003"
Feb 13 19:43:20.067547 containerd[1487]: time="2025-02-13T19:43:20.067429508Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:20.071639 containerd[1487]: time="2025-02-13T19:43:20.071545764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:20.073267 containerd[1487]: time="2025-02-13T19:43:20.072683790Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.234063151s"
Feb 13 19:43:20.073267 containerd[1487]: time="2025-02-13T19:43:20.072736102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 19:43:20.073890 containerd[1487]: time="2025-02-13T19:43:20.073750366Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 19:43:20.614681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3849900781.mount: Deactivated successfully.
Feb 13 19:43:21.745933 containerd[1487]: time="2025-02-13T19:43:21.745842671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:21.747594 containerd[1487]: time="2025-02-13T19:43:21.747513144Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Feb 13 19:43:21.748587 containerd[1487]: time="2025-02-13T19:43:21.748510617Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:21.753771 containerd[1487]: time="2025-02-13T19:43:21.753715899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:21.756204 containerd[1487]: time="2025-02-13T19:43:21.755588749Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.681789163s"
Feb 13 19:43:21.756204 containerd[1487]: time="2025-02-13T19:43:21.755642750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 19:43:21.756665 containerd[1487]: time="2025-02-13T19:43:21.756634501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 19:43:22.149268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422869901.mount: Deactivated successfully.
Feb 13 19:43:22.156306 containerd[1487]: time="2025-02-13T19:43:22.156234553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:22.157713 containerd[1487]: time="2025-02-13T19:43:22.157629398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Feb 13 19:43:22.158896 containerd[1487]: time="2025-02-13T19:43:22.158821717Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:22.164584 containerd[1487]: time="2025-02-13T19:43:22.163930884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:22.165272 containerd[1487]: time="2025-02-13T19:43:22.165099230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 408.308392ms"
Feb 13 19:43:22.165272 containerd[1487]: time="2025-02-13T19:43:22.165146093Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Feb 13 19:43:22.166324 containerd[1487]: time="2025-02-13T19:43:22.165922930Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 19:43:22.571889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590608206.mount: Deactivated successfully.
Feb 13 19:43:24.353241 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 19:43:24.809605 containerd[1487]: time="2025-02-13T19:43:24.808771177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:24.811869 containerd[1487]: time="2025-02-13T19:43:24.811788753Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556"
Feb 13 19:43:24.814496 containerd[1487]: time="2025-02-13T19:43:24.813622595Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:24.819585 containerd[1487]: time="2025-02-13T19:43:24.819486306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:24.823124 containerd[1487]: time="2025-02-13T19:43:24.822641021Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.656675073s"
Feb 13 19:43:24.823124 containerd[1487]: time="2025-02-13T19:43:24.822698887Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Feb 13 19:43:28.566772 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 19:43:28.576899 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:43:28.885773 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:43:28.888418 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:43:28.966576 kubelet[2189]: E0213 19:43:28.966479 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:43:28.971829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:43:28.972146 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:43:29.065575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:43:29.080088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:43:29.146566 systemd[1]: Reloading requested from client PID 2203 ('systemctl') (unit session-9.scope)...
Feb 13 19:43:29.146598 systemd[1]: Reloading...
Feb 13 19:43:29.354587 zram_generator::config[2244]: No configuration found.
Feb 13 19:43:29.501109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:43:29.612216 systemd[1]: Reloading finished in 464 ms.
Feb 13 19:43:29.684723 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 19:43:29.684878 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 19:43:29.685226 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:43:29.697178 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:43:30.021828 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:43:30.035271 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:43:30.091312 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:43:30.092104 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:43:30.092104 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:43:30.094978 kubelet[2293]: I0213 19:43:30.094758 2293 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 19:43:30.433554 kubelet[2293]: I0213 19:43:30.432717 2293 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 19:43:30.433554 kubelet[2293]: I0213 19:43:30.432759 2293 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 19:43:30.433554 kubelet[2293]: I0213 19:43:30.433433 2293 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 19:43:30.474553 kubelet[2293]: E0213 19:43:30.474475 2293 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.110:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:43:30.475481 kubelet[2293]: I0213 19:43:30.475246 2293 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 19:43:30.485444 kubelet[2293]: E0213 19:43:30.485378 2293 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 19:43:30.485444 kubelet[2293]: I0213 19:43:30.485423 2293 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 19:43:30.490584 kubelet[2293]: I0213 19:43:30.490512 2293 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 19:43:30.490742 kubelet[2293]: I0213 19:43:30.490706 2293 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 19:43:30.491009 kubelet[2293]: I0213 19:43:30.490940 2293 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 19:43:30.491264 kubelet[2293]: I0213 19:43:30.490995 2293 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 19:43:30.491463 kubelet[2293]: I0213 19:43:30.491278 2293 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 19:43:30.491463 kubelet[2293]: I0213 19:43:30.491297 2293 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 19:43:30.491463 kubelet[2293]: I0213 19:43:30.491444 2293 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:43:30.495299 kubelet[2293]: I0213 19:43:30.494952 2293 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 19:43:30.495299 kubelet[2293]: I0213 19:43:30.494992 2293 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 19:43:30.495299 kubelet[2293]: I0213 19:43:30.495045 2293 kubelet.go:314] "Adding apiserver pod source"
Feb 13 19:43:30.495299 kubelet[2293]: I0213 19:43:30.495063 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 19:43:30.505936 kubelet[2293]: W0213 19:43:30.505565 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.110:6443: connect: connection refused
Feb 13 19:43:30.505936 kubelet[2293]: E0213 19:43:30.505658 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:43:30.508411 kubelet[2293]: W0213 19:43:30.508024 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.110:6443: connect: connection refused
Feb 13 19:43:30.508411 kubelet[2293]: E0213 19:43:30.508106 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError"
Feb 13 19:43:30.508411 kubelet[2293]: I0213 19:43:30.508240 2293 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 19:43:30.511162 kubelet[2293]: I0213 19:43:30.511009 2293 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 19:43:30.513267 kubelet[2293]: W0213 19:43:30.513202 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:43:30.515155 kubelet[2293]: I0213 19:43:30.514156 2293 server.go:1269] "Started kubelet" Feb 13 19:43:30.515155 kubelet[2293]: I0213 19:43:30.514577 2293 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:43:30.517463 kubelet[2293]: I0213 19:43:30.517039 2293 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:43:30.522931 kubelet[2293]: I0213 19:43:30.522850 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:43:30.524154 kubelet[2293]: I0213 19:43:30.523236 2293 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:43:30.524703 kubelet[2293]: I0213 19:43:30.524677 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:43:30.531829 kubelet[2293]: E0213 19:43:30.526757 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.110:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.110:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal.1823dc089514087c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,UID:ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 19:43:30.514110588 +0000 UTC m=+0.473369686,LastTimestamp:2025-02-13 19:43:30.514110588 +0000 UTC m=+0.473369686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,}" Feb 13 19:43:30.533563 kubelet[2293]: I0213 
19:43:30.533028 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:43:30.534111 kubelet[2293]: E0213 19:43:30.534068 2293 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" not found" Feb 13 19:43:30.534267 kubelet[2293]: I0213 19:43:30.534252 2293 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:43:30.535589 kubelet[2293]: I0213 19:43:30.535120 2293 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:43:30.535589 kubelet[2293]: I0213 19:43:30.535207 2293 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:43:30.536494 kubelet[2293]: W0213 19:43:30.536414 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.110:6443: connect: connection refused Feb 13 19:43:30.536675 kubelet[2293]: E0213 19:43:30.536648 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:43:30.537084 kubelet[2293]: I0213 19:43:30.537030 2293 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:43:30.540186 kubelet[2293]: I0213 19:43:30.540157 2293 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:43:30.540570 kubelet[2293]: I0213 19:43:30.540323 2293 factory.go:221] Registration of the systemd container 
factory successfully Feb 13 19:43:30.550858 kubelet[2293]: E0213 19:43:30.550792 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.110:6443: connect: connection refused" interval="200ms" Feb 13 19:43:30.562220 kubelet[2293]: I0213 19:43:30.562154 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:43:30.565100 kubelet[2293]: I0213 19:43:30.564515 2293 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:43:30.565100 kubelet[2293]: I0213 19:43:30.564583 2293 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:43:30.565100 kubelet[2293]: I0213 19:43:30.564618 2293 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:43:30.565100 kubelet[2293]: E0213 19:43:30.564706 2293 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:43:30.572297 kubelet[2293]: E0213 19:43:30.572257 2293 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:43:30.580497 kubelet[2293]: W0213 19:43:30.580307 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.110:6443: connect: connection refused Feb 13 19:43:30.580660 kubelet[2293]: E0213 19:43:30.580622 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:43:30.588282 kubelet[2293]: I0213 19:43:30.587863 2293 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:43:30.588282 kubelet[2293]: I0213 19:43:30.587915 2293 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:43:30.588282 kubelet[2293]: I0213 19:43:30.587945 2293 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:43:30.590999 kubelet[2293]: I0213 19:43:30.590855 2293 policy_none.go:49] "None policy: Start" Feb 13 19:43:30.592374 kubelet[2293]: I0213 19:43:30.591893 2293 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:43:30.592374 kubelet[2293]: I0213 19:43:30.591927 2293 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:43:30.602595 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:43:30.613415 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:43:30.618045 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:43:30.626930 kubelet[2293]: I0213 19:43:30.626873 2293 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:43:30.627185 kubelet[2293]: I0213 19:43:30.627161 2293 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:43:30.627279 kubelet[2293]: I0213 19:43:30.627223 2293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:43:30.627997 kubelet[2293]: I0213 19:43:30.627939 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:43:30.629942 kubelet[2293]: E0213 19:43:30.629766 2293 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" not found" Feb 13 19:43:30.687133 systemd[1]: Created slice kubepods-burstable-pod8b2cd3ad88844569b0c903f252dc492d.slice - libcontainer container kubepods-burstable-pod8b2cd3ad88844569b0c903f252dc492d.slice. Feb 13 19:43:30.703518 systemd[1]: Created slice kubepods-burstable-pod6e130324b280eb1f2301a55e54fff4ad.slice - libcontainer container kubepods-burstable-pod6e130324b280eb1f2301a55e54fff4ad.slice. Feb 13 19:43:30.711443 systemd[1]: Created slice kubepods-burstable-pod9ec37c0be58f2c4503f89db38f4662e5.slice - libcontainer container kubepods-burstable-pod9ec37c0be58f2c4503f89db38f4662e5.slice. 
Feb 13 19:43:30.751955 kubelet[2293]: I0213 19:43:30.751607 2293 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.751955 kubelet[2293]: E0213 19:43:30.751808 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.110:6443: connect: connection refused" interval="400ms" Feb 13 19:43:30.752385 kubelet[2293]: E0213 19:43:30.752344 2293 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.110:6443/api/v1/nodes\": dial tcp 10.128.0.110:6443: connect: connection refused" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837046 kubelet[2293]: I0213 19:43:30.836859 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837046 kubelet[2293]: I0213 19:43:30.837005 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837046 kubelet[2293]: I0213 19:43:30.837054 2293 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e130324b280eb1f2301a55e54fff4ad-kubeconfig\") pod \"kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"6e130324b280eb1f2301a55e54fff4ad\") " pod="kube-system/kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837487 kubelet[2293]: I0213 19:43:30.837098 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ec37c0be58f2c4503f89db38f4662e5-ca-certs\") pod \"kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"9ec37c0be58f2c4503f89db38f4662e5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837487 kubelet[2293]: I0213 19:43:30.837135 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ec37c0be58f2c4503f89db38f4662e5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"9ec37c0be58f2c4503f89db38f4662e5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837487 kubelet[2293]: I0213 19:43:30.837182 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-ca-certs\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837487 kubelet[2293]: I0213 19:43:30.837232 2293 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837684 kubelet[2293]: I0213 19:43:30.837272 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.837684 kubelet[2293]: I0213 19:43:30.837303 2293 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ec37c0be58f2c4503f89db38f4662e5-k8s-certs\") pod \"kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"9ec37c0be58f2c4503f89db38f4662e5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.957493 kubelet[2293]: I0213 19:43:30.957350 2293 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.957852 kubelet[2293]: E0213 19:43:30.957795 2293 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.110:6443/api/v1/nodes\": dial tcp 10.128.0.110:6443: connect: connection refused" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:30.999638 containerd[1487]: time="2025-02-13T19:43:30.999581405Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,Uid:8b2cd3ad88844569b0c903f252dc492d,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:31.010516 containerd[1487]: time="2025-02-13T19:43:31.010467026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,Uid:6e130324b280eb1f2301a55e54fff4ad,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:31.016015 containerd[1487]: time="2025-02-13T19:43:31.015958411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,Uid:9ec37c0be58f2c4503f89db38f4662e5,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:31.153280 kubelet[2293]: E0213 19:43:31.153192 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.110:6443: connect: connection refused" interval="800ms" Feb 13 19:43:31.340789 kubelet[2293]: W0213 19:43:31.340696 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.110:6443: connect: connection refused Feb 13 19:43:31.340978 kubelet[2293]: E0213 19:43:31.340795 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.110:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:43:31.365867 kubelet[2293]: 
I0213 19:43:31.364326 2293 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:31.368401 kubelet[2293]: E0213 19:43:31.367512 2293 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.110:6443/api/v1/nodes\": dial tcp 10.128.0.110:6443: connect: connection refused" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:31.378139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057884754.mount: Deactivated successfully. Feb 13 19:43:31.387607 containerd[1487]: time="2025-02-13T19:43:31.387544787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:43:31.390396 containerd[1487]: time="2025-02-13T19:43:31.390338071Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:43:31.393214 containerd[1487]: time="2025-02-13T19:43:31.393129993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 19:43:31.394799 containerd[1487]: time="2025-02-13T19:43:31.394713467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:43:31.397478 containerd[1487]: time="2025-02-13T19:43:31.397415756Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:43:31.400561 containerd[1487]: time="2025-02-13T19:43:31.400055450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:43:31.403181 containerd[1487]: time="2025-02-13T19:43:31.403123363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:43:31.409352 containerd[1487]: time="2025-02-13T19:43:31.409293945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:43:31.411296 containerd[1487]: time="2025-02-13T19:43:31.411253369Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 400.64268ms" Feb 13 19:43:31.413938 containerd[1487]: time="2025-02-13T19:43:31.413885252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 414.116836ms" Feb 13 19:43:31.426508 containerd[1487]: time="2025-02-13T19:43:31.426434352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 410.336806ms" Feb 13 19:43:31.471501 kubelet[2293]: W0213 19:43:31.471433 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.128.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.110:6443: connect: connection refused Feb 13 19:43:31.471684 kubelet[2293]: E0213 19:43:31.471517 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.110:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:43:31.613328 containerd[1487]: time="2025-02-13T19:43:31.607139841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:31.615980 containerd[1487]: time="2025-02-13T19:43:31.614613612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:31.615980 containerd[1487]: time="2025-02-13T19:43:31.614749696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:31.616755 containerd[1487]: time="2025-02-13T19:43:31.616588224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:31.616930 containerd[1487]: time="2025-02-13T19:43:31.616724306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:31.616930 containerd[1487]: time="2025-02-13T19:43:31.616822517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:31.616930 containerd[1487]: time="2025-02-13T19:43:31.616851408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:31.617958 containerd[1487]: time="2025-02-13T19:43:31.617033471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:31.623553 containerd[1487]: time="2025-02-13T19:43:31.622321178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:31.623553 containerd[1487]: time="2025-02-13T19:43:31.622440148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:31.623553 containerd[1487]: time="2025-02-13T19:43:31.622473927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:31.623553 containerd[1487]: time="2025-02-13T19:43:31.623243722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:31.675797 systemd[1]: Started cri-containerd-2ba386c7b7582a2629a39b2234ed3a2bff4844bce1db02ae7fe7ffa345f5162f.scope - libcontainer container 2ba386c7b7582a2629a39b2234ed3a2bff4844bce1db02ae7fe7ffa345f5162f. Feb 13 19:43:31.684158 systemd[1]: Started cri-containerd-673a7499d76b9611ff0c85f42266ac332a48695cccf2859640607532ff40f1a3.scope - libcontainer container 673a7499d76b9611ff0c85f42266ac332a48695cccf2859640607532ff40f1a3. Feb 13 19:43:31.687979 systemd[1]: Started cri-containerd-fad4a68a6fca163b94a5afa0437c8fa2c1d4e9aaf210f0ef86bfd54d2c837007.scope - libcontainer container fad4a68a6fca163b94a5afa0437c8fa2c1d4e9aaf210f0ef86bfd54d2c837007. 
Feb 13 19:43:31.811103 containerd[1487]: time="2025-02-13T19:43:31.811052281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,Uid:6e130324b280eb1f2301a55e54fff4ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba386c7b7582a2629a39b2234ed3a2bff4844bce1db02ae7fe7ffa345f5162f\"" Feb 13 19:43:31.814906 containerd[1487]: time="2025-02-13T19:43:31.814838746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,Uid:9ec37c0be58f2c4503f89db38f4662e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"673a7499d76b9611ff0c85f42266ac332a48695cccf2859640607532ff40f1a3\"" Feb 13 19:43:31.818424 kubelet[2293]: E0213 19:43:31.818382 2293 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-21291" Feb 13 19:43:31.820635 kubelet[2293]: E0213 19:43:31.820582 2293 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-21291" Feb 13 19:43:31.821416 containerd[1487]: time="2025-02-13T19:43:31.821345731Z" level=info msg="CreateContainer within sandbox \"673a7499d76b9611ff0c85f42266ac332a48695cccf2859640607532ff40f1a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:43:31.825417 containerd[1487]: time="2025-02-13T19:43:31.825369672Z" level=info msg="CreateContainer within sandbox \"2ba386c7b7582a2629a39b2234ed3a2bff4844bce1db02ae7fe7ffa345f5162f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:43:31.840918 containerd[1487]: time="2025-02-13T19:43:31.840676666Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal,Uid:8b2cd3ad88844569b0c903f252dc492d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fad4a68a6fca163b94a5afa0437c8fa2c1d4e9aaf210f0ef86bfd54d2c837007\"" Feb 13 19:43:31.844269 kubelet[2293]: E0213 19:43:31.843741 2293 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flat" Feb 13 19:43:31.846504 containerd[1487]: time="2025-02-13T19:43:31.846441591Z" level=info msg="CreateContainer within sandbox \"fad4a68a6fca163b94a5afa0437c8fa2c1d4e9aaf210f0ef86bfd54d2c837007\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:43:31.854455 containerd[1487]: time="2025-02-13T19:43:31.854378071Z" level=info msg="CreateContainer within sandbox \"2ba386c7b7582a2629a39b2234ed3a2bff4844bce1db02ae7fe7ffa345f5162f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7620d4256f4dda0e6fa7aeb6bc3f160fd246f89260055fc710a05cfa15d540b4\"" Feb 13 19:43:31.855419 containerd[1487]: time="2025-02-13T19:43:31.855285577Z" level=info msg="StartContainer for \"7620d4256f4dda0e6fa7aeb6bc3f160fd246f89260055fc710a05cfa15d540b4\"" Feb 13 19:43:31.862574 containerd[1487]: time="2025-02-13T19:43:31.861329713Z" level=info msg="CreateContainer within sandbox \"673a7499d76b9611ff0c85f42266ac332a48695cccf2859640607532ff40f1a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"df004e84c136cedfe386267a64dad11dd5c849654251249f806b9fea14307639\"" Feb 13 19:43:31.862901 containerd[1487]: time="2025-02-13T19:43:31.862761575Z" level=info msg="StartContainer for \"df004e84c136cedfe386267a64dad11dd5c849654251249f806b9fea14307639\"" Feb 13 19:43:31.883644 containerd[1487]: 
time="2025-02-13T19:43:31.882042010Z" level=info msg="CreateContainer within sandbox \"fad4a68a6fca163b94a5afa0437c8fa2c1d4e9aaf210f0ef86bfd54d2c837007\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8014a33d703f554008b7c5e4ec7f39fbb492b6a790118fb7943922053bf16e60\"" Feb 13 19:43:31.884830 containerd[1487]: time="2025-02-13T19:43:31.884782113Z" level=info msg="StartContainer for \"8014a33d703f554008b7c5e4ec7f39fbb492b6a790118fb7943922053bf16e60\"" Feb 13 19:43:31.915828 kubelet[2293]: W0213 19:43:31.915715 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.110:6443: connect: connection refused Feb 13 19:43:31.915998 kubelet[2293]: E0213 19:43:31.915867 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.110:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:43:31.930211 systemd[1]: Started cri-containerd-7620d4256f4dda0e6fa7aeb6bc3f160fd246f89260055fc710a05cfa15d540b4.scope - libcontainer container 7620d4256f4dda0e6fa7aeb6bc3f160fd246f89260055fc710a05cfa15d540b4. Feb 13 19:43:31.949378 systemd[1]: Started cri-containerd-df004e84c136cedfe386267a64dad11dd5c849654251249f806b9fea14307639.scope - libcontainer container df004e84c136cedfe386267a64dad11dd5c849654251249f806b9fea14307639. 
Feb 13 19:43:31.954950 kubelet[2293]: E0213 19:43:31.954866 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.110:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.110:6443: connect: connection refused" interval="1.6s" Feb 13 19:43:31.962338 systemd[1]: Started cri-containerd-8014a33d703f554008b7c5e4ec7f39fbb492b6a790118fb7943922053bf16e60.scope - libcontainer container 8014a33d703f554008b7c5e4ec7f39fbb492b6a790118fb7943922053bf16e60. Feb 13 19:43:32.075203 containerd[1487]: time="2025-02-13T19:43:32.074156200Z" level=info msg="StartContainer for \"df004e84c136cedfe386267a64dad11dd5c849654251249f806b9fea14307639\" returns successfully" Feb 13 19:43:32.096709 kubelet[2293]: W0213 19:43:32.093952 2293 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.110:6443: connect: connection refused Feb 13 19:43:32.096709 kubelet[2293]: E0213 19:43:32.096631 2293 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.110:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.110:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:43:32.107584 containerd[1487]: time="2025-02-13T19:43:32.106674030Z" level=info msg="StartContainer for \"7620d4256f4dda0e6fa7aeb6bc3f160fd246f89260055fc710a05cfa15d540b4\" returns successfully" Feb 13 19:43:32.122687 containerd[1487]: time="2025-02-13T19:43:32.122600248Z" level=info msg="StartContainer for \"8014a33d703f554008b7c5e4ec7f39fbb492b6a790118fb7943922053bf16e60\" returns successfully" Feb 13 19:43:32.175430 kubelet[2293]: I0213 19:43:32.174914 2293 kubelet_node_status.go:72] 
"Attempting to register node" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:32.176270 kubelet[2293]: E0213 19:43:32.176218 2293 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.110:6443/api/v1/nodes\": dial tcp 10.128.0.110:6443: connect: connection refused" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:33.785014 kubelet[2293]: I0213 19:43:33.784970 2293 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:36.796552 kubelet[2293]: I0213 19:43:36.794788 2293 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:36.796552 kubelet[2293]: E0213 19:43:36.794862 2293 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\": node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" not found" Feb 13 19:43:36.882809 kubelet[2293]: E0213 19:43:36.882738 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 13 19:43:37.510158 kubelet[2293]: I0213 19:43:37.509704 2293 apiserver.go:52] "Watching apiserver" Feb 13 19:43:37.536723 kubelet[2293]: I0213 19:43:37.536679 2293 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:43:38.114292 kubelet[2293]: W0213 19:43:38.114248 2293 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:43:38.359595 update_engine[1469]: I20250213 19:43:38.358600 1469 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:43:38.464663 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2577) Feb 13 19:43:38.629748 systemd[1]: Reloading requested from client PID 2586 ('systemctl') (unit session-9.scope)... Feb 13 19:43:38.629778 systemd[1]: Reloading... Feb 13 19:43:38.666319 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2580) Feb 13 19:43:38.865581 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2580) Feb 13 19:43:38.900578 zram_generator::config[2626]: No configuration found. Feb 13 19:43:39.168157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:43:39.324735 systemd[1]: Reloading finished in 693 ms. Feb 13 19:43:39.454113 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:43:39.481336 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:43:39.481968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:43:39.482198 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 117.2M memory peak, 0B memory swap peak. Feb 13 19:43:39.490960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:43:39.780146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:43:39.797845 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:43:39.897887 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:43:39.897887 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:43:39.897887 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:43:39.899905 kubelet[2678]: I0213 19:43:39.897981 2678 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:43:39.908320 kubelet[2678]: I0213 19:43:39.908258 2678 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:43:39.908320 kubelet[2678]: I0213 19:43:39.908295 2678 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:43:39.911868 kubelet[2678]: I0213 19:43:39.911831 2678 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:43:39.916953 kubelet[2678]: I0213 19:43:39.916637 2678 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:43:39.924276 kubelet[2678]: I0213 19:43:39.922911 2678 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:43:39.937649 kubelet[2678]: E0213 19:43:39.935289 2678 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:43:39.937649 kubelet[2678]: I0213 19:43:39.935338 2678 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Feb 13 19:43:39.941068 kubelet[2678]: I0213 19:43:39.941029 2678 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:43:39.941480 kubelet[2678]: I0213 19:43:39.941446 2678 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:43:39.941895 kubelet[2678]: I0213 19:43:39.941849 2678 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:43:39.942357 kubelet[2678]: I0213 19:43:39.942008 2678 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryM
anagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:43:39.942712 kubelet[2678]: I0213 19:43:39.942691 2678 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:43:39.942892 kubelet[2678]: I0213 19:43:39.942875 2678 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:43:39.943111 kubelet[2678]: I0213 19:43:39.943097 2678 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:43:39.943596 kubelet[2678]: I0213 19:43:39.943577 2678 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:43:39.944587 kubelet[2678]: I0213 19:43:39.944565 2678 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:43:39.944749 kubelet[2678]: I0213 19:43:39.944736 2678 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:43:39.944851 kubelet[2678]: I0213 19:43:39.944837 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:43:39.952229 sudo[2691]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:43:39.952808 kubelet[2678]: I0213 19:43:39.952591 2678 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:43:39.954156 kubelet[2678]: I0213 19:43:39.953390 2678 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:43:39.953606 sudo[2691]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:43:39.958122 kubelet[2678]: I0213 19:43:39.958095 2678 server.go:1269] "Started kubelet" Feb 13 19:43:39.969620 kubelet[2678]: I0213 19:43:39.969587 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:43:39.983587 kubelet[2678]: I0213 19:43:39.981382 
2678 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:43:39.990560 kubelet[2678]: I0213 19:43:39.989418 2678 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:43:39.991396 kubelet[2678]: I0213 19:43:39.991321 2678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:43:39.997751 kubelet[2678]: I0213 19:43:39.997362 2678 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:43:40.000251 kubelet[2678]: I0213 19:43:40.000219 2678 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:43:40.009314 kubelet[2678]: I0213 19:43:40.009273 2678 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:43:40.009504 kubelet[2678]: E0213 19:43:40.009471 2678 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" not found" Feb 13 19:43:40.010517 kubelet[2678]: I0213 19:43:40.010489 2678 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:43:40.010764 kubelet[2678]: I0213 19:43:40.010742 2678 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:43:40.017811 kubelet[2678]: I0213 19:43:40.017761 2678 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:43:40.017998 kubelet[2678]: I0213 19:43:40.017926 2678 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:43:40.058479 kubelet[2678]: I0213 19:43:40.056855 2678 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:43:40.072364 kubelet[2678]: I0213 19:43:40.071802 2678 kubelet_network_linux.go:50] 
"Initialized iptables rules." protocol="IPv4" Feb 13 19:43:40.075474 kubelet[2678]: I0213 19:43:40.075428 2678 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:43:40.075474 kubelet[2678]: I0213 19:43:40.075476 2678 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:43:40.075696 kubelet[2678]: I0213 19:43:40.075513 2678 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:43:40.075696 kubelet[2678]: E0213 19:43:40.075614 2678 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:43:40.105057 kubelet[2678]: E0213 19:43:40.105014 2678 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:43:40.176580 kubelet[2678]: E0213 19:43:40.176509 2678 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:43:40.183843 kubelet[2678]: I0213 19:43:40.183443 2678 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:43:40.183843 kubelet[2678]: I0213 19:43:40.183468 2678 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:43:40.183843 kubelet[2678]: I0213 19:43:40.183495 2678 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:43:40.183843 kubelet[2678]: I0213 19:43:40.183749 2678 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:43:40.183843 kubelet[2678]: I0213 19:43:40.183765 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:43:40.183843 kubelet[2678]: I0213 19:43:40.183789 2678 policy_none.go:49] "None policy: Start" Feb 13 19:43:40.188570 kubelet[2678]: I0213 19:43:40.187371 2678 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:43:40.188570 kubelet[2678]: I0213 19:43:40.187411 2678 state_mem.go:35] "Initializing new 
in-memory state store" Feb 13 19:43:40.188570 kubelet[2678]: I0213 19:43:40.187705 2678 state_mem.go:75] "Updated machine memory state" Feb 13 19:43:40.196949 kubelet[2678]: I0213 19:43:40.196916 2678 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:43:40.197894 kubelet[2678]: I0213 19:43:40.197872 2678 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:43:40.202044 kubelet[2678]: I0213 19:43:40.199356 2678 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:43:40.202437 kubelet[2678]: I0213 19:43:40.202419 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:43:40.322996 kubelet[2678]: I0213 19:43:40.322743 2678 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.338884 kubelet[2678]: I0213 19:43:40.338165 2678 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.338884 kubelet[2678]: I0213 19:43:40.338295 2678 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.390326 kubelet[2678]: W0213 19:43:40.390274 2678 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:43:40.393131 kubelet[2678]: W0213 19:43:40.393088 2678 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:43:40.396884 kubelet[2678]: W0213 19:43:40.396841 2678 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:43:40.397047 kubelet[2678]: E0213 19:43:40.396936 2678 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414404 kubelet[2678]: I0213 19:43:40.413873 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ec37c0be58f2c4503f89db38f4662e5-k8s-certs\") pod \"kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"9ec37c0be58f2c4503f89db38f4662e5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414404 kubelet[2678]: I0213 19:43:40.413946 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414404 kubelet[2678]: I0213 19:43:40.413985 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e130324b280eb1f2301a55e54fff4ad-kubeconfig\") pod \"kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"6e130324b280eb1f2301a55e54fff4ad\") " pod="kube-system/kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414404 kubelet[2678]: I0213 19:43:40.414020 2678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-kubeconfig\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414784 kubelet[2678]: I0213 19:43:40.414050 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ec37c0be58f2c4503f89db38f4662e5-ca-certs\") pod \"kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"9ec37c0be58f2c4503f89db38f4662e5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414784 kubelet[2678]: I0213 19:43:40.414081 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ec37c0be58f2c4503f89db38f4662e5-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"9ec37c0be58f2c4503f89db38f4662e5\") " pod="kube-system/kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414784 kubelet[2678]: I0213 19:43:40.414114 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-ca-certs\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414784 kubelet[2678]: I0213 19:43:40.414147 2678 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.414985 kubelet[2678]: I0213 19:43:40.414178 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b2cd3ad88844569b0c903f252dc492d-k8s-certs\") pod \"kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" (UID: \"8b2cd3ad88844569b0c903f252dc492d\") " pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" Feb 13 19:43:40.839952 sudo[2691]: pam_unix(sudo:session): session closed for user root Feb 13 19:43:40.945569 kubelet[2678]: I0213 19:43:40.945506 2678 apiserver.go:52] "Watching apiserver" Feb 13 19:43:41.011853 kubelet[2678]: I0213 19:43:41.011646 2678 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:43:41.187209 kubelet[2678]: I0213 19:43:41.186991 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" podStartSLOduration=3.18694736 podStartE2EDuration="3.18694736s" podCreationTimestamp="2025-02-13 19:43:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:43:41.172576282 +0000 UTC m=+1.367678362" watchObservedRunningTime="2025-02-13 19:43:41.18694736 +0000 UTC m=+1.382049435" Feb 13 19:43:41.205563 kubelet[2678]: I0213 19:43:41.203356 2678 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" podStartSLOduration=1.203328514 podStartE2EDuration="1.203328514s" podCreationTimestamp="2025-02-13 19:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:43:41.188039365 +0000 UTC m=+1.383141442" watchObservedRunningTime="2025-02-13 19:43:41.203328514 +0000 UTC m=+1.398430588" Feb 13 19:43:42.869699 sudo[1765]: pam_unix(sudo:session): session closed for user root Feb 13 19:43:42.913159 sshd[1764]: Connection closed by 139.178.68.195 port 50438 Feb 13 19:43:42.914156 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:42.921255 systemd[1]: sshd@8-10.128.0.110:22-139.178.68.195:50438.service: Deactivated successfully. Feb 13 19:43:42.925689 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:43:42.925968 systemd[1]: session-9.scope: Consumed 7.260s CPU time, 152.5M memory peak, 0B memory swap peak. Feb 13 19:43:42.927752 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:43:42.929517 systemd-logind[1467]: Removed session 9. 
Feb 13 19:43:43.269151 kubelet[2678]: I0213 19:43:43.268831 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal" podStartSLOduration=3.268804584 podStartE2EDuration="3.268804584s" podCreationTimestamp="2025-02-13 19:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:43:41.204964945 +0000 UTC m=+1.400067073" watchObservedRunningTime="2025-02-13 19:43:43.268804584 +0000 UTC m=+3.463906661" Feb 13 19:43:43.621961 kubelet[2678]: I0213 19:43:43.621828 2678 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:43:43.622954 containerd[1487]: time="2025-02-13T19:43:43.622909035Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:43:43.623587 kubelet[2678]: I0213 19:43:43.623328 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:43:44.257649 systemd[1]: Created slice kubepods-besteffort-pod3e1b06e4_1738_49f9_a195_77c43dd961e3.slice - libcontainer container kubepods-besteffort-pod3e1b06e4_1738_49f9_a195_77c43dd961e3.slice. Feb 13 19:43:44.302014 systemd[1]: Created slice kubepods-burstable-pod544ebe5d_fa5a_41ea_9d1b_a2805a8d2ac6.slice - libcontainer container kubepods-burstable-pod544ebe5d_fa5a_41ea_9d1b_a2805a8d2ac6.slice. 
Feb 13 19:43:44.345101 kubelet[2678]: I0213 19:43:44.344383 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e1b06e4-1738-49f9-a195-77c43dd961e3-lib-modules\") pod \"kube-proxy-9t6xd\" (UID: \"3e1b06e4-1738-49f9-a195-77c43dd961e3\") " pod="kube-system/kube-proxy-9t6xd" Feb 13 19:43:44.345101 kubelet[2678]: I0213 19:43:44.344464 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-bpf-maps\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345101 kubelet[2678]: I0213 19:43:44.344502 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mtvq\" (UniqueName: \"kubernetes.io/projected/3e1b06e4-1738-49f9-a195-77c43dd961e3-kube-api-access-4mtvq\") pod \"kube-proxy-9t6xd\" (UID: \"3e1b06e4-1738-49f9-a195-77c43dd961e3\") " pod="kube-system/kube-proxy-9t6xd" Feb 13 19:43:44.345101 kubelet[2678]: I0213 19:43:44.344570 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-run\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345101 kubelet[2678]: I0213 19:43:44.344599 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-host-proc-sys-net\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345788 kubelet[2678]: I0213 19:43:44.344629 2678 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-config-path\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345788 kubelet[2678]: I0213 19:43:44.344654 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e1b06e4-1738-49f9-a195-77c43dd961e3-kube-proxy\") pod \"kube-proxy-9t6xd\" (UID: \"3e1b06e4-1738-49f9-a195-77c43dd961e3\") " pod="kube-system/kube-proxy-9t6xd" Feb 13 19:43:44.345788 kubelet[2678]: I0213 19:43:44.344679 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cni-path\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345788 kubelet[2678]: I0213 19:43:44.344712 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-hostproc\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345788 kubelet[2678]: I0213 19:43:44.344739 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-cgroup\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345788 kubelet[2678]: I0213 19:43:44.344767 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-etc-cni-netd\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345965 kubelet[2678]: I0213 19:43:44.344804 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-xtables-lock\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345965 kubelet[2678]: I0213 19:43:44.344830 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e1b06e4-1738-49f9-a195-77c43dd961e3-xtables-lock\") pod \"kube-proxy-9t6xd\" (UID: \"3e1b06e4-1738-49f9-a195-77c43dd961e3\") " pod="kube-system/kube-proxy-9t6xd" Feb 13 19:43:44.345965 kubelet[2678]: I0213 19:43:44.344876 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-clustermesh-secrets\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345965 kubelet[2678]: I0213 19:43:44.344909 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbnwl\" (UniqueName: \"kubernetes.io/projected/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-kube-api-access-tbnwl\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.345965 kubelet[2678]: I0213 19:43:44.344939 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-lib-modules\") pod \"cilium-2cghq\" (UID: 
\"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.346111 kubelet[2678]: I0213 19:43:44.344997 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-host-proc-sys-kernel\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.346111 kubelet[2678]: I0213 19:43:44.345024 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-hubble-tls\") pod \"cilium-2cghq\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") " pod="kube-system/cilium-2cghq" Feb 13 19:43:44.569899 containerd[1487]: time="2025-02-13T19:43:44.569815883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9t6xd,Uid:3e1b06e4-1738-49f9-a195-77c43dd961e3,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:44.610331 containerd[1487]: time="2025-02-13T19:43:44.609803384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cghq,Uid:544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:44.617836 containerd[1487]: time="2025-02-13T19:43:44.617379830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:44.617836 containerd[1487]: time="2025-02-13T19:43:44.617501328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:44.617836 containerd[1487]: time="2025-02-13T19:43:44.617585783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:44.618339 containerd[1487]: time="2025-02-13T19:43:44.618100376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:44.651841 systemd[1]: Started cri-containerd-1f821a2c50cf7c6cffd4758733b3f0776b362a29f46ef52e580784633beb3b2c.scope - libcontainer container 1f821a2c50cf7c6cffd4758733b3f0776b362a29f46ef52e580784633beb3b2c. Feb 13 19:43:44.660560 containerd[1487]: time="2025-02-13T19:43:44.658682290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:44.662262 containerd[1487]: time="2025-02-13T19:43:44.660713219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:44.663719 containerd[1487]: time="2025-02-13T19:43:44.662291763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:44.663719 containerd[1487]: time="2025-02-13T19:43:44.662549780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:44.712577 systemd[1]: Started cri-containerd-4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed.scope - libcontainer container 4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed. Feb 13 19:43:44.740154 systemd[1]: Created slice kubepods-besteffort-pod3acf21eb_12e1_4b76_b2dc_ec053651c899.slice - libcontainer container kubepods-besteffort-pod3acf21eb_12e1_4b76_b2dc_ec053651c899.slice. 
Feb 13 19:43:44.748758 kubelet[2678]: I0213 19:43:44.748569 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598nh\" (UniqueName: \"kubernetes.io/projected/3acf21eb-12e1-4b76-b2dc-ec053651c899-kube-api-access-598nh\") pod \"cilium-operator-5d85765b45-2d9jl\" (UID: \"3acf21eb-12e1-4b76-b2dc-ec053651c899\") " pod="kube-system/cilium-operator-5d85765b45-2d9jl" Feb 13 19:43:44.748758 kubelet[2678]: I0213 19:43:44.748634 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3acf21eb-12e1-4b76-b2dc-ec053651c899-cilium-config-path\") pod \"cilium-operator-5d85765b45-2d9jl\" (UID: \"3acf21eb-12e1-4b76-b2dc-ec053651c899\") " pod="kube-system/cilium-operator-5d85765b45-2d9jl" Feb 13 19:43:44.830362 containerd[1487]: time="2025-02-13T19:43:44.829676828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2cghq,Uid:544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\"" Feb 13 19:43:44.838205 containerd[1487]: time="2025-02-13T19:43:44.838138947Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:43:44.845242 containerd[1487]: time="2025-02-13T19:43:44.845119870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9t6xd,Uid:3e1b06e4-1738-49f9-a195-77c43dd961e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f821a2c50cf7c6cffd4758733b3f0776b362a29f46ef52e580784633beb3b2c\"" Feb 13 19:43:44.853173 containerd[1487]: time="2025-02-13T19:43:44.849869849Z" level=info msg="CreateContainer within sandbox \"1f821a2c50cf7c6cffd4758733b3f0776b362a29f46ef52e580784633beb3b2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:43:44.882877 containerd[1487]: 
time="2025-02-13T19:43:44.882804161Z" level=info msg="CreateContainer within sandbox \"1f821a2c50cf7c6cffd4758733b3f0776b362a29f46ef52e580784633beb3b2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f85ce1ae2bd1b79365ee5f5cd290e17827d44327ef09b4eb8a00cafa064dc7e\"" Feb 13 19:43:44.884009 containerd[1487]: time="2025-02-13T19:43:44.883930968Z" level=info msg="StartContainer for \"2f85ce1ae2bd1b79365ee5f5cd290e17827d44327ef09b4eb8a00cafa064dc7e\"" Feb 13 19:43:44.928911 systemd[1]: Started cri-containerd-2f85ce1ae2bd1b79365ee5f5cd290e17827d44327ef09b4eb8a00cafa064dc7e.scope - libcontainer container 2f85ce1ae2bd1b79365ee5f5cd290e17827d44327ef09b4eb8a00cafa064dc7e. Feb 13 19:43:44.985249 containerd[1487]: time="2025-02-13T19:43:44.985189145Z" level=info msg="StartContainer for \"2f85ce1ae2bd1b79365ee5f5cd290e17827d44327ef09b4eb8a00cafa064dc7e\" returns successfully" Feb 13 19:43:45.049994 containerd[1487]: time="2025-02-13T19:43:45.049933133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2d9jl,Uid:3acf21eb-12e1-4b76-b2dc-ec053651c899,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:45.108618 containerd[1487]: time="2025-02-13T19:43:45.108333812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:45.108618 containerd[1487]: time="2025-02-13T19:43:45.108414649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:45.108618 containerd[1487]: time="2025-02-13T19:43:45.108433722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:45.109631 containerd[1487]: time="2025-02-13T19:43:45.108666432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:45.149824 systemd[1]: Started cri-containerd-a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0.scope - libcontainer container a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0. Feb 13 19:43:45.179444 kubelet[2678]: I0213 19:43:45.177472 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9t6xd" podStartSLOduration=1.17744389 podStartE2EDuration="1.17744389s" podCreationTimestamp="2025-02-13 19:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:43:45.176646974 +0000 UTC m=+5.371749053" watchObservedRunningTime="2025-02-13 19:43:45.17744389 +0000 UTC m=+5.372545974" Feb 13 19:43:45.263360 containerd[1487]: time="2025-02-13T19:43:45.263208637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2d9jl,Uid:3acf21eb-12e1-4b76-b2dc-ec053651c899,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\"" Feb 13 19:43:50.636873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2762974654.mount: Deactivated successfully. 
Feb 13 19:43:53.722349 containerd[1487]: time="2025-02-13T19:43:53.722273657Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:53.723924 containerd[1487]: time="2025-02-13T19:43:53.723849257Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:43:53.725351 containerd[1487]: time="2025-02-13T19:43:53.725278037Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:53.727510 containerd[1487]: time="2025-02-13T19:43:53.727437525Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.889240343s" Feb 13 19:43:53.727510 containerd[1487]: time="2025-02-13T19:43:53.727496312Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:43:53.732825 containerd[1487]: time="2025-02-13T19:43:53.730841693Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:43:53.733226 containerd[1487]: time="2025-02-13T19:43:53.733187846Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:43:53.754705 containerd[1487]: time="2025-02-13T19:43:53.754509686Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\"" Feb 13 19:43:53.755838 containerd[1487]: time="2025-02-13T19:43:53.755716448Z" level=info msg="StartContainer for \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\"" Feb 13 19:43:53.811807 systemd[1]: Started cri-containerd-3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8.scope - libcontainer container 3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8. Feb 13 19:43:53.854399 containerd[1487]: time="2025-02-13T19:43:53.854317405Z" level=info msg="StartContainer for \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\" returns successfully" Feb 13 19:43:53.875196 systemd[1]: cri-containerd-3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8.scope: Deactivated successfully. Feb 13 19:43:54.748490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8-rootfs.mount: Deactivated successfully. 
Feb 13 19:43:55.698847 containerd[1487]: time="2025-02-13T19:43:55.698742664Z" level=info msg="shim disconnected" id=3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8 namespace=k8s.io Feb 13 19:43:55.698847 containerd[1487]: time="2025-02-13T19:43:55.698827658Z" level=warning msg="cleaning up after shim disconnected" id=3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8 namespace=k8s.io Feb 13 19:43:55.698847 containerd[1487]: time="2025-02-13T19:43:55.698844943Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:56.190425 containerd[1487]: time="2025-02-13T19:43:56.190373162Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:43:56.229861 containerd[1487]: time="2025-02-13T19:43:56.229796095Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\"" Feb 13 19:43:56.233564 containerd[1487]: time="2025-02-13T19:43:56.233029056Z" level=info msg="StartContainer for \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\"" Feb 13 19:43:56.290823 systemd[1]: Started cri-containerd-ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b.scope - libcontainer container ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b. Feb 13 19:43:56.341202 containerd[1487]: time="2025-02-13T19:43:56.341126806Z" level=info msg="StartContainer for \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\" returns successfully" Feb 13 19:43:56.361888 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:43:56.362364 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:43:56.362498 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:43:56.375149 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:43:56.375654 systemd[1]: cri-containerd-ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b.scope: Deactivated successfully. Feb 13 19:43:56.414536 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b-rootfs.mount: Deactivated successfully. Feb 13 19:43:56.418195 containerd[1487]: time="2025-02-13T19:43:56.417584425Z" level=info msg="shim disconnected" id=ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b namespace=k8s.io Feb 13 19:43:56.418195 containerd[1487]: time="2025-02-13T19:43:56.417653573Z" level=warning msg="cleaning up after shim disconnected" id=ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b namespace=k8s.io Feb 13 19:43:56.418195 containerd[1487]: time="2025-02-13T19:43:56.417672202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:56.417973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:43:57.196217 containerd[1487]: time="2025-02-13T19:43:57.196119565Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:43:57.240061 containerd[1487]: time="2025-02-13T19:43:57.239299378Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\"" Feb 13 19:43:57.244381 containerd[1487]: time="2025-02-13T19:43:57.242492569Z" level=info msg="StartContainer for \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\"" Feb 13 19:43:57.307805 systemd[1]: Started cri-containerd-e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98.scope - libcontainer container e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98. Feb 13 19:43:57.355048 containerd[1487]: time="2025-02-13T19:43:57.354971034Z" level=info msg="StartContainer for \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\" returns successfully" Feb 13 19:43:57.361161 systemd[1]: cri-containerd-e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98.scope: Deactivated successfully. Feb 13 19:43:57.396927 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98-rootfs.mount: Deactivated successfully. 
Feb 13 19:43:57.400460 containerd[1487]: time="2025-02-13T19:43:57.400298447Z" level=info msg="shim disconnected" id=e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98 namespace=k8s.io Feb 13 19:43:57.400460 containerd[1487]: time="2025-02-13T19:43:57.400459053Z" level=warning msg="cleaning up after shim disconnected" id=e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98 namespace=k8s.io Feb 13 19:43:57.400878 containerd[1487]: time="2025-02-13T19:43:57.400475185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:57.423893 containerd[1487]: time="2025-02-13T19:43:57.423829301Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:43:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:43:58.203336 containerd[1487]: time="2025-02-13T19:43:58.203258356Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:43:58.242222 containerd[1487]: time="2025-02-13T19:43:58.242158013Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\"" Feb 13 19:43:58.243197 containerd[1487]: time="2025-02-13T19:43:58.243056175Z" level=info msg="StartContainer for \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\"" Feb 13 19:43:58.297475 systemd[1]: run-containerd-runc-k8s.io-d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9-runc.u6A9ti.mount: Deactivated successfully. 
Feb 13 19:43:58.310826 systemd[1]: Started cri-containerd-d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9.scope - libcontainer container d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9. Feb 13 19:43:58.364400 systemd[1]: cri-containerd-d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9.scope: Deactivated successfully. Feb 13 19:43:58.368109 containerd[1487]: time="2025-02-13T19:43:58.367968551Z" level=info msg="StartContainer for \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\" returns successfully" Feb 13 19:43:58.409423 containerd[1487]: time="2025-02-13T19:43:58.409237110Z" level=info msg="shim disconnected" id=d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9 namespace=k8s.io Feb 13 19:43:58.409423 containerd[1487]: time="2025-02-13T19:43:58.409425191Z" level=warning msg="cleaning up after shim disconnected" id=d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9 namespace=k8s.io Feb 13 19:43:58.410575 containerd[1487]: time="2025-02-13T19:43:58.409444278Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:59.218605 containerd[1487]: time="2025-02-13T19:43:59.215732285Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:43:59.231725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9-rootfs.mount: Deactivated successfully. 
Feb 13 19:43:59.260021 containerd[1487]: time="2025-02-13T19:43:59.259959288Z" level=info msg="CreateContainer within sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\"" Feb 13 19:43:59.260693 containerd[1487]: time="2025-02-13T19:43:59.260653367Z" level=info msg="StartContainer for \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\"" Feb 13 19:43:59.327250 systemd[1]: Started cri-containerd-379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2.scope - libcontainer container 379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2. Feb 13 19:43:59.401225 containerd[1487]: time="2025-02-13T19:43:59.398952334Z" level=info msg="StartContainer for \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\" returns successfully" Feb 13 19:43:59.604330 kubelet[2678]: I0213 19:43:59.604287 2678 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:43:59.698751 systemd[1]: Created slice kubepods-burstable-podd563b64d_0a1c_42d8_838b_d3b80a902594.slice - libcontainer container kubepods-burstable-podd563b64d_0a1c_42d8_838b_d3b80a902594.slice. Feb 13 19:43:59.738565 systemd[1]: Created slice kubepods-burstable-pod3e8a7226_10c3_4ccb_b427_8335e46ec449.slice - libcontainer container kubepods-burstable-pod3e8a7226_10c3_4ccb_b427_8335e46ec449.slice. 
Feb 13 19:43:59.799423 containerd[1487]: time="2025-02-13T19:43:59.799148945Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:59.802002 containerd[1487]: time="2025-02-13T19:43:59.801841419Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:43:59.803781 containerd[1487]: time="2025-02-13T19:43:59.803737795Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:59.809672 containerd[1487]: time="2025-02-13T19:43:59.809500491Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.078601931s" Feb 13 19:43:59.809672 containerd[1487]: time="2025-02-13T19:43:59.809632390Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:43:59.815704 containerd[1487]: time="2025-02-13T19:43:59.815635502Z" level=info msg="CreateContainer within sandbox \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:43:59.849260 containerd[1487]: time="2025-02-13T19:43:59.849042416Z" level=info msg="CreateContainer within sandbox 
\"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\"" Feb 13 19:43:59.851338 containerd[1487]: time="2025-02-13T19:43:59.851288949Z" level=info msg="StartContainer for \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\"" Feb 13 19:43:59.860360 kubelet[2678]: I0213 19:43:59.859000 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpj7p\" (UniqueName: \"kubernetes.io/projected/d563b64d-0a1c-42d8-838b-d3b80a902594-kube-api-access-bpj7p\") pod \"coredns-6f6b679f8f-v8rfd\" (UID: \"d563b64d-0a1c-42d8-838b-d3b80a902594\") " pod="kube-system/coredns-6f6b679f8f-v8rfd" Feb 13 19:43:59.860360 kubelet[2678]: I0213 19:43:59.859079 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwlld\" (UniqueName: \"kubernetes.io/projected/3e8a7226-10c3-4ccb-b427-8335e46ec449-kube-api-access-jwlld\") pod \"coredns-6f6b679f8f-7smrd\" (UID: \"3e8a7226-10c3-4ccb-b427-8335e46ec449\") " pod="kube-system/coredns-6f6b679f8f-7smrd" Feb 13 19:43:59.860360 kubelet[2678]: I0213 19:43:59.859121 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3e8a7226-10c3-4ccb-b427-8335e46ec449-config-volume\") pod \"coredns-6f6b679f8f-7smrd\" (UID: \"3e8a7226-10c3-4ccb-b427-8335e46ec449\") " pod="kube-system/coredns-6f6b679f8f-7smrd" Feb 13 19:43:59.860360 kubelet[2678]: I0213 19:43:59.859184 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d563b64d-0a1c-42d8-838b-d3b80a902594-config-volume\") pod \"coredns-6f6b679f8f-v8rfd\" (UID: \"d563b64d-0a1c-42d8-838b-d3b80a902594\") " 
pod="kube-system/coredns-6f6b679f8f-v8rfd" Feb 13 19:43:59.906924 systemd[1]: Started cri-containerd-382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248.scope - libcontainer container 382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248. Feb 13 19:43:59.997562 containerd[1487]: time="2025-02-13T19:43:59.994364766Z" level=info msg="StartContainer for \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\" returns successfully" Feb 13 19:44:00.051165 containerd[1487]: time="2025-02-13T19:44:00.050913837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7smrd,Uid:3e8a7226-10c3-4ccb-b427-8335e46ec449,Namespace:kube-system,Attempt:0,}" Feb 13 19:44:00.288701 kubelet[2678]: I0213 19:44:00.288363 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2cghq" podStartSLOduration=7.394373578 podStartE2EDuration="16.288339868s" podCreationTimestamp="2025-02-13 19:43:44 +0000 UTC" firstStartedPulling="2025-02-13 19:43:44.835282748 +0000 UTC m=+5.030384815" lastFinishedPulling="2025-02-13 19:43:53.729249039 +0000 UTC m=+13.924351105" observedRunningTime="2025-02-13 19:44:00.282496351 +0000 UTC m=+20.477598428" watchObservedRunningTime="2025-02-13 19:44:00.288339868 +0000 UTC m=+20.483441959" Feb 13 19:44:00.309999 containerd[1487]: time="2025-02-13T19:44:00.309835710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v8rfd,Uid:d563b64d-0a1c-42d8-838b-d3b80a902594,Namespace:kube-system,Attempt:0,}" Feb 13 19:44:02.648345 systemd-networkd[1388]: cilium_host: Link UP Feb 13 19:44:02.649476 systemd-networkd[1388]: cilium_net: Link UP Feb 13 19:44:02.649484 systemd-networkd[1388]: cilium_net: Gained carrier Feb 13 19:44:02.651556 systemd-networkd[1388]: cilium_host: Gained carrier Feb 13 19:44:02.845377 systemd-networkd[1388]: cilium_vxlan: Link UP Feb 13 19:44:02.845397 systemd-networkd[1388]: cilium_vxlan: Gained carrier Feb 13 19:44:03.061433 
systemd-networkd[1388]: cilium_host: Gained IPv6LL Feb 13 19:44:03.150694 kernel: NET: Registered PF_ALG protocol family Feb 13 19:44:03.548709 systemd-networkd[1388]: cilium_net: Gained IPv6LL Feb 13 19:44:03.996800 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL Feb 13 19:44:04.128992 systemd-networkd[1388]: lxc_health: Link UP Feb 13 19:44:04.140494 systemd-networkd[1388]: lxc_health: Gained carrier Feb 13 19:44:04.432645 systemd-networkd[1388]: lxcbb521ee6c8a6: Link UP Feb 13 19:44:04.444222 kernel: eth0: renamed from tmpbdd14 Feb 13 19:44:04.453769 systemd-networkd[1388]: lxcbb521ee6c8a6: Gained carrier Feb 13 19:44:04.682990 systemd-networkd[1388]: lxc8076d1b84712: Link UP Feb 13 19:44:04.694606 kernel: eth0: renamed from tmp7fc67 Feb 13 19:44:04.714643 systemd-networkd[1388]: lxc8076d1b84712: Gained carrier Feb 13 19:44:04.724029 kubelet[2678]: I0213 19:44:04.723145 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2d9jl" podStartSLOduration=6.178199869 podStartE2EDuration="20.723109883s" podCreationTimestamp="2025-02-13 19:43:44 +0000 UTC" firstStartedPulling="2025-02-13 19:43:45.26625206 +0000 UTC m=+5.461354125" lastFinishedPulling="2025-02-13 19:43:59.811162073 +0000 UTC m=+20.006264139" observedRunningTime="2025-02-13 19:44:00.32017048 +0000 UTC m=+20.515272556" watchObservedRunningTime="2025-02-13 19:44:04.723109883 +0000 UTC m=+24.918211961" Feb 13 19:44:05.341328 systemd-networkd[1388]: lxc_health: Gained IPv6LL Feb 13 19:44:05.468817 systemd-networkd[1388]: lxcbb521ee6c8a6: Gained IPv6LL Feb 13 19:44:05.852733 systemd-networkd[1388]: lxc8076d1b84712: Gained IPv6LL Feb 13 19:44:08.157710 ntpd[1456]: Listen normally on 8 cilium_host 192.168.0.33:123 Feb 13 19:44:08.158836 ntpd[1456]: 13 Feb 19:44:08 ntpd[1456]: Listen normally on 8 cilium_host 192.168.0.33:123 Feb 13 19:44:08.158836 ntpd[1456]: 13 Feb 19:44:08 ntpd[1456]: Listen normally on 9 cilium_net 
[fe80::c2a:10ff:feb4:9482%4]:123 Feb 13 19:44:08.158836 ntpd[1456]: 13 Feb 19:44:08 ntpd[1456]: Listen normally on 10 cilium_host [fe80::28a0:7dff:fe5a:735d%5]:123 Feb 13 19:44:08.158836 ntpd[1456]: 13 Feb 19:44:08 ntpd[1456]: Listen normally on 11 cilium_vxlan [fe80::787a:c9ff:fefe:3f47%6]:123 Feb 13 19:44:08.158836 ntpd[1456]: 13 Feb 19:44:08 ntpd[1456]: Listen normally on 12 lxc_health [fe80::dc76:c6ff:fe59:d133%8]:123 Feb 13 19:44:08.158836 ntpd[1456]: 13 Feb 19:44:08 ntpd[1456]: Listen normally on 13 lxcbb521ee6c8a6 [fe80::309d:efff:feba:31f%10]:123 Feb 13 19:44:08.158836 ntpd[1456]: 13 Feb 19:44:08 ntpd[1456]: Listen normally on 14 lxc8076d1b84712 [fe80::3c56:e9ff:feb8:3fda%12]:123 Feb 13 19:44:08.157847 ntpd[1456]: Listen normally on 9 cilium_net [fe80::c2a:10ff:feb4:9482%4]:123 Feb 13 19:44:08.157950 ntpd[1456]: Listen normally on 10 cilium_host [fe80::28a0:7dff:fe5a:735d%5]:123 Feb 13 19:44:08.158015 ntpd[1456]: Listen normally on 11 cilium_vxlan [fe80::787a:c9ff:fefe:3f47%6]:123 Feb 13 19:44:08.158075 ntpd[1456]: Listen normally on 12 lxc_health [fe80::dc76:c6ff:fe59:d133%8]:123 Feb 13 19:44:08.158139 ntpd[1456]: Listen normally on 13 lxcbb521ee6c8a6 [fe80::309d:efff:feba:31f%10]:123 Feb 13 19:44:08.158199 ntpd[1456]: Listen normally on 14 lxc8076d1b84712 [fe80::3c56:e9ff:feb8:3fda%12]:123 Feb 13 19:44:10.116576 containerd[1487]: time="2025-02-13T19:44:10.114858580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:44:10.116576 containerd[1487]: time="2025-02-13T19:44:10.114960624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:44:10.116576 containerd[1487]: time="2025-02-13T19:44:10.114996282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:44:10.116576 containerd[1487]: time="2025-02-13T19:44:10.115136013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:44:10.162738 containerd[1487]: time="2025-02-13T19:44:10.161896770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:44:10.162738 containerd[1487]: time="2025-02-13T19:44:10.161997472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:44:10.162738 containerd[1487]: time="2025-02-13T19:44:10.162032920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:44:10.165679 containerd[1487]: time="2025-02-13T19:44:10.164105104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:44:10.193884 systemd[1]: Started cri-containerd-7fc6736a01645ad612bf5fab0cee2f0be8bdc4b6c364c6fb5760eb9699eddf04.scope - libcontainer container 7fc6736a01645ad612bf5fab0cee2f0be8bdc4b6c364c6fb5760eb9699eddf04. Feb 13 19:44:10.224009 systemd[1]: Started cri-containerd-bdd14412dd14fa49dc59a61135cb8b8e3843eef8044bfa39e3a412fb2b62da55.scope - libcontainer container bdd14412dd14fa49dc59a61135cb8b8e3843eef8044bfa39e3a412fb2b62da55. 
Feb 13 19:44:10.337480 containerd[1487]: time="2025-02-13T19:44:10.337380212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7smrd,Uid:3e8a7226-10c3-4ccb-b427-8335e46ec449,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fc6736a01645ad612bf5fab0cee2f0be8bdc4b6c364c6fb5760eb9699eddf04\""
Feb 13 19:44:10.353556 containerd[1487]: time="2025-02-13T19:44:10.353469419Z" level=info msg="CreateContainer within sandbox \"7fc6736a01645ad612bf5fab0cee2f0be8bdc4b6c364c6fb5760eb9699eddf04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:44:10.372643 containerd[1487]: time="2025-02-13T19:44:10.370945782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v8rfd,Uid:d563b64d-0a1c-42d8-838b-d3b80a902594,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdd14412dd14fa49dc59a61135cb8b8e3843eef8044bfa39e3a412fb2b62da55\""
Feb 13 19:44:10.381845 containerd[1487]: time="2025-02-13T19:44:10.381704420Z" level=info msg="CreateContainer within sandbox \"bdd14412dd14fa49dc59a61135cb8b8e3843eef8044bfa39e3a412fb2b62da55\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:44:10.416955 containerd[1487]: time="2025-02-13T19:44:10.416765718Z" level=info msg="CreateContainer within sandbox \"7fc6736a01645ad612bf5fab0cee2f0be8bdc4b6c364c6fb5760eb9699eddf04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0eced86fe485c30406afd06f1569def7855c3c1cc7508a0aaaa24b1214517bb3\""
Feb 13 19:44:10.427572 containerd[1487]: time="2025-02-13T19:44:10.424622030Z" level=info msg="StartContainer for \"0eced86fe485c30406afd06f1569def7855c3c1cc7508a0aaaa24b1214517bb3\""
Feb 13 19:44:10.448101 containerd[1487]: time="2025-02-13T19:44:10.447449540Z" level=info msg="CreateContainer within sandbox \"bdd14412dd14fa49dc59a61135cb8b8e3843eef8044bfa39e3a412fb2b62da55\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25ecf0972a5751d805983a404f940b54eee312b2ad7ccc28d9e9246ff1f66e08\""
Feb 13 19:44:10.452737 containerd[1487]: time="2025-02-13T19:44:10.451040833Z" level=info msg="StartContainer for \"25ecf0972a5751d805983a404f940b54eee312b2ad7ccc28d9e9246ff1f66e08\""
Feb 13 19:44:10.522106 systemd[1]: Started cri-containerd-25ecf0972a5751d805983a404f940b54eee312b2ad7ccc28d9e9246ff1f66e08.scope - libcontainer container 25ecf0972a5751d805983a404f940b54eee312b2ad7ccc28d9e9246ff1f66e08.
Feb 13 19:44:10.546858 systemd[1]: Started cri-containerd-0eced86fe485c30406afd06f1569def7855c3c1cc7508a0aaaa24b1214517bb3.scope - libcontainer container 0eced86fe485c30406afd06f1569def7855c3c1cc7508a0aaaa24b1214517bb3.
Feb 13 19:44:10.598789 containerd[1487]: time="2025-02-13T19:44:10.597667593Z" level=info msg="StartContainer for \"25ecf0972a5751d805983a404f940b54eee312b2ad7ccc28d9e9246ff1f66e08\" returns successfully"
Feb 13 19:44:10.618338 containerd[1487]: time="2025-02-13T19:44:10.618038025Z" level=info msg="StartContainer for \"0eced86fe485c30406afd06f1569def7855c3c1cc7508a0aaaa24b1214517bb3\" returns successfully"
Feb 13 19:44:11.127149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263291045.mount: Deactivated successfully.
Feb 13 19:44:11.297168 kubelet[2678]: I0213 19:44:11.297093 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v8rfd" podStartSLOduration=27.297070744 podStartE2EDuration="27.297070744s" podCreationTimestamp="2025-02-13 19:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:44:11.296201584 +0000 UTC m=+31.491303662" watchObservedRunningTime="2025-02-13 19:44:11.297070744 +0000 UTC m=+31.492172821"
Feb 13 19:44:37.660796 systemd[1]: Started sshd@9-10.128.0.110:22-139.178.68.195:33524.service - OpenSSH per-connection server daemon (139.178.68.195:33524).
Feb 13 19:44:37.964574 sshd[4051]: Accepted publickey for core from 139.178.68.195 port 33524 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:44:37.966902 sshd-session[4051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:37.973701 systemd-logind[1467]: New session 10 of user core.
Feb 13 19:44:37.981872 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:44:38.295948 sshd[4053]: Connection closed by 139.178.68.195 port 33524
Feb 13 19:44:38.297224 sshd-session[4051]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:38.302368 systemd[1]: sshd@9-10.128.0.110:22-139.178.68.195:33524.service: Deactivated successfully.
Feb 13 19:44:38.305823 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:44:38.308815 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:44:38.310655 systemd-logind[1467]: Removed session 10.
Feb 13 19:44:43.354720 systemd[1]: Started sshd@10-10.128.0.110:22-139.178.68.195:33540.service - OpenSSH per-connection server daemon (139.178.68.195:33540).
Feb 13 19:44:43.658678 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 33540 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:44:43.659918 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:43.669876 systemd-logind[1467]: New session 11 of user core.
Feb 13 19:44:43.676861 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:44:43.968603 sshd[4070]: Connection closed by 139.178.68.195 port 33540
Feb 13 19:44:43.969977 sshd-session[4068]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:43.975130 systemd[1]: sshd@10-10.128.0.110:22-139.178.68.195:33540.service: Deactivated successfully.
Feb 13 19:44:43.978671 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:44:43.981969 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:44:43.983751 systemd-logind[1467]: Removed session 11.
Feb 13 19:44:49.023054 systemd[1]: Started sshd@11-10.128.0.110:22-139.178.68.195:60000.service - OpenSSH per-connection server daemon (139.178.68.195:60000).
Feb 13 19:44:49.320820 sshd[4084]: Accepted publickey for core from 139.178.68.195 port 60000 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:44:49.323121 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:49.329662 systemd-logind[1467]: New session 12 of user core.
Feb 13 19:44:49.336921 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:44:49.621009 sshd[4086]: Connection closed by 139.178.68.195 port 60000
Feb 13 19:44:49.622089 sshd-session[4084]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:49.628080 systemd[1]: sshd@11-10.128.0.110:22-139.178.68.195:60000.service: Deactivated successfully.
Feb 13 19:44:49.631776 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:44:49.633153 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:44:49.635284 systemd-logind[1467]: Removed session 12.
Feb 13 19:44:54.683211 systemd[1]: Started sshd@12-10.128.0.110:22-139.178.68.195:60010.service - OpenSSH per-connection server daemon (139.178.68.195:60010).
Feb 13 19:44:54.985609 sshd[4098]: Accepted publickey for core from 139.178.68.195 port 60010 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:44:54.987676 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:54.995500 systemd-logind[1467]: New session 13 of user core.
Feb 13 19:44:55.001963 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:44:55.282797 sshd[4100]: Connection closed by 139.178.68.195 port 60010
Feb 13 19:44:55.284255 sshd-session[4098]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:55.289371 systemd[1]: sshd@12-10.128.0.110:22-139.178.68.195:60010.service: Deactivated successfully.
Feb 13 19:44:55.293012 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:44:55.295646 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:44:55.297893 systemd-logind[1467]: Removed session 13.
Feb 13 19:44:55.341999 systemd[1]: Started sshd@13-10.128.0.110:22-139.178.68.195:60018.service - OpenSSH per-connection server daemon (139.178.68.195:60018).
Feb 13 19:44:55.638762 sshd[4112]: Accepted publickey for core from 139.178.68.195 port 60018 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:44:55.640785 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:55.647288 systemd-logind[1467]: New session 14 of user core.
Feb 13 19:44:55.653843 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:44:55.986495 sshd[4114]: Connection closed by 139.178.68.195 port 60018
Feb 13 19:44:55.988041 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:55.992857 systemd[1]: sshd@13-10.128.0.110:22-139.178.68.195:60018.service: Deactivated successfully.
Feb 13 19:44:55.996495 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:44:55.999272 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:44:56.001427 systemd-logind[1467]: Removed session 14.
Feb 13 19:44:56.042949 systemd[1]: Started sshd@14-10.128.0.110:22-139.178.68.195:60030.service - OpenSSH per-connection server daemon (139.178.68.195:60030).
Feb 13 19:44:56.342574 sshd[4123]: Accepted publickey for core from 139.178.68.195 port 60030 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:44:56.344474 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:44:56.350929 systemd-logind[1467]: New session 15 of user core.
Feb 13 19:44:56.356934 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:44:56.643658 sshd[4125]: Connection closed by 139.178.68.195 port 60030
Feb 13 19:44:56.645514 sshd-session[4123]: pam_unix(sshd:session): session closed for user core
Feb 13 19:44:56.654005 systemd[1]: sshd@14-10.128.0.110:22-139.178.68.195:60030.service: Deactivated successfully.
Feb 13 19:44:56.658362 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:44:56.660686 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:44:56.663419 systemd-logind[1467]: Removed session 15.
Feb 13 19:45:01.700002 systemd[1]: Started sshd@15-10.128.0.110:22-139.178.68.195:45386.service - OpenSSH per-connection server daemon (139.178.68.195:45386).
Feb 13 19:45:02.002883 sshd[4136]: Accepted publickey for core from 139.178.68.195 port 45386 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:02.005341 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:02.012664 systemd-logind[1467]: New session 16 of user core.
Feb 13 19:45:02.017814 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:45:02.298452 sshd[4138]: Connection closed by 139.178.68.195 port 45386
Feb 13 19:45:02.299939 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:02.306377 systemd[1]: sshd@15-10.128.0.110:22-139.178.68.195:45386.service: Deactivated successfully.
Feb 13 19:45:02.310275 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:45:02.311808 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:45:02.313892 systemd-logind[1467]: Removed session 16.
Feb 13 19:45:07.355055 systemd[1]: Started sshd@16-10.128.0.110:22-139.178.68.195:51358.service - OpenSSH per-connection server daemon (139.178.68.195:51358).
Feb 13 19:45:07.653373 sshd[4149]: Accepted publickey for core from 139.178.68.195 port 51358 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:07.656298 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:07.665508 systemd-logind[1467]: New session 17 of user core.
Feb 13 19:45:07.673817 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:45:07.960484 sshd[4151]: Connection closed by 139.178.68.195 port 51358
Feb 13 19:45:07.962155 sshd-session[4149]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:07.967431 systemd[1]: sshd@16-10.128.0.110:22-139.178.68.195:51358.service: Deactivated successfully.
Feb 13 19:45:07.971473 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:45:07.974660 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:45:07.976461 systemd-logind[1467]: Removed session 17.
Feb 13 19:45:08.017077 systemd[1]: Started sshd@17-10.128.0.110:22-139.178.68.195:51370.service - OpenSSH per-connection server daemon (139.178.68.195:51370).
Feb 13 19:45:08.320972 sshd[4162]: Accepted publickey for core from 139.178.68.195 port 51370 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:08.322966 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:08.329089 systemd-logind[1467]: New session 18 of user core.
Feb 13 19:45:08.335847 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:45:08.692644 sshd[4164]: Connection closed by 139.178.68.195 port 51370
Feb 13 19:45:08.693608 sshd-session[4162]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:08.701052 systemd[1]: sshd@17-10.128.0.110:22-139.178.68.195:51370.service: Deactivated successfully.
Feb 13 19:45:08.705460 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:45:08.707389 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:45:08.709015 systemd-logind[1467]: Removed session 18.
Feb 13 19:45:08.753007 systemd[1]: Started sshd@18-10.128.0.110:22-139.178.68.195:51378.service - OpenSSH per-connection server daemon (139.178.68.195:51378).
Feb 13 19:45:09.042765 sshd[4173]: Accepted publickey for core from 139.178.68.195 port 51378 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:09.044868 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:09.052914 systemd-logind[1467]: New session 19 of user core.
Feb 13 19:45:09.056815 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:45:10.957105 sshd[4175]: Connection closed by 139.178.68.195 port 51378
Feb 13 19:45:10.958078 sshd-session[4173]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:10.967292 systemd[1]: sshd@18-10.128.0.110:22-139.178.68.195:51378.service: Deactivated successfully.
Feb 13 19:45:10.974429 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:45:10.975946 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:45:10.978205 systemd-logind[1467]: Removed session 19.
Feb 13 19:45:11.014049 systemd[1]: Started sshd@19-10.128.0.110:22-139.178.68.195:51380.service - OpenSSH per-connection server daemon (139.178.68.195:51380).
Feb 13 19:45:11.308957 sshd[4192]: Accepted publickey for core from 139.178.68.195 port 51380 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:11.311263 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:11.319096 systemd-logind[1467]: New session 20 of user core.
Feb 13 19:45:11.326892 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:45:11.751617 sshd[4194]: Connection closed by 139.178.68.195 port 51380
Feb 13 19:45:11.752733 sshd-session[4192]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:11.759129 systemd[1]: sshd@19-10.128.0.110:22-139.178.68.195:51380.service: Deactivated successfully.
Feb 13 19:45:11.762711 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:45:11.764018 systemd-logind[1467]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:45:11.766195 systemd-logind[1467]: Removed session 20.
Feb 13 19:45:11.810042 systemd[1]: Started sshd@20-10.128.0.110:22-139.178.68.195:51384.service - OpenSSH per-connection server daemon (139.178.68.195:51384).
Feb 13 19:45:12.113589 sshd[4203]: Accepted publickey for core from 139.178.68.195 port 51384 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:12.115669 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:12.122748 systemd-logind[1467]: New session 21 of user core.
Feb 13 19:45:12.131834 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:45:12.406148 sshd[4205]: Connection closed by 139.178.68.195 port 51384
Feb 13 19:45:12.408576 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:12.413390 systemd[1]: sshd@20-10.128.0.110:22-139.178.68.195:51384.service: Deactivated successfully.
Feb 13 19:45:12.417166 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:45:12.421265 systemd-logind[1467]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:45:12.423192 systemd-logind[1467]: Removed session 21.
Feb 13 19:45:17.466368 systemd[1]: Started sshd@21-10.128.0.110:22-139.178.68.195:57134.service - OpenSSH per-connection server daemon (139.178.68.195:57134).
Feb 13 19:45:17.779866 sshd[4219]: Accepted publickey for core from 139.178.68.195 port 57134 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:17.781780 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:17.788699 systemd-logind[1467]: New session 22 of user core.
Feb 13 19:45:17.794891 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:45:18.079802 sshd[4223]: Connection closed by 139.178.68.195 port 57134
Feb 13 19:45:18.080952 sshd-session[4219]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:18.086600 systemd[1]: sshd@21-10.128.0.110:22-139.178.68.195:57134.service: Deactivated successfully.
Feb 13 19:45:18.090304 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:45:18.093158 systemd-logind[1467]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:45:18.095681 systemd-logind[1467]: Removed session 22.
Feb 13 19:45:23.140185 systemd[1]: Started sshd@22-10.128.0.110:22-139.178.68.195:57146.service - OpenSSH per-connection server daemon (139.178.68.195:57146).
Feb 13 19:45:23.434796 sshd[4234]: Accepted publickey for core from 139.178.68.195 port 57146 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:23.436772 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:23.444841 systemd-logind[1467]: New session 23 of user core.
Feb 13 19:45:23.448899 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:45:23.730870 sshd[4236]: Connection closed by 139.178.68.195 port 57146
Feb 13 19:45:23.732285 sshd-session[4234]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:23.737156 systemd[1]: sshd@22-10.128.0.110:22-139.178.68.195:57146.service: Deactivated successfully.
Feb 13 19:45:23.741511 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:45:23.744405 systemd-logind[1467]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:45:23.746201 systemd-logind[1467]: Removed session 23.
Feb 13 19:45:28.793002 systemd[1]: Started sshd@23-10.128.0.110:22-139.178.68.195:51054.service - OpenSSH per-connection server daemon (139.178.68.195:51054).
Feb 13 19:45:29.093744 sshd[4248]: Accepted publickey for core from 139.178.68.195 port 51054 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:29.095698 sshd-session[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:29.102589 systemd-logind[1467]: New session 24 of user core.
Feb 13 19:45:29.111836 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:45:29.384160 sshd[4250]: Connection closed by 139.178.68.195 port 51054
Feb 13 19:45:29.385073 sshd-session[4248]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:29.390555 systemd[1]: sshd@23-10.128.0.110:22-139.178.68.195:51054.service: Deactivated successfully.
Feb 13 19:45:29.394067 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:45:29.397274 systemd-logind[1467]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:45:29.399236 systemd-logind[1467]: Removed session 24.
Feb 13 19:45:29.440000 systemd[1]: Started sshd@24-10.128.0.110:22-139.178.68.195:51066.service - OpenSSH per-connection server daemon (139.178.68.195:51066).
Feb 13 19:45:29.739680 sshd[4261]: Accepted publickey for core from 139.178.68.195 port 51066 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:29.741747 sshd-session[4261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:29.749091 systemd-logind[1467]: New session 25 of user core.
Feb 13 19:45:29.753819 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:45:31.634048 kubelet[2678]: I0213 19:45:31.633956 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7smrd" podStartSLOduration=107.63392592 podStartE2EDuration="1m47.63392592s" podCreationTimestamp="2025-02-13 19:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:44:11.384567074 +0000 UTC m=+31.579669151" watchObservedRunningTime="2025-02-13 19:45:31.63392592 +0000 UTC m=+111.829027997"
Feb 13 19:45:31.640186 containerd[1487]: time="2025-02-13T19:45:31.640094750Z" level=info msg="StopContainer for \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\" with timeout 30 (s)"
Feb 13 19:45:31.644119 containerd[1487]: time="2025-02-13T19:45:31.644044176Z" level=info msg="Stop container \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\" with signal terminated"
Feb 13 19:45:31.692149 systemd[1]: run-containerd-runc-k8s.io-379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2-runc.X99WEo.mount: Deactivated successfully.
Feb 13 19:45:31.711565 systemd[1]: cri-containerd-382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248.scope: Deactivated successfully.
Feb 13 19:45:31.726069 containerd[1487]: time="2025-02-13T19:45:31.723892932Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:45:31.735254 containerd[1487]: time="2025-02-13T19:45:31.734948569Z" level=info msg="StopContainer for \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\" with timeout 2 (s)"
Feb 13 19:45:31.735773 containerd[1487]: time="2025-02-13T19:45:31.735592091Z" level=info msg="Stop container \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\" with signal terminated"
Feb 13 19:45:31.753235 systemd-networkd[1388]: lxc_health: Link DOWN
Feb 13 19:45:31.753253 systemd-networkd[1388]: lxc_health: Lost carrier
Feb 13 19:45:31.783667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248-rootfs.mount: Deactivated successfully.
Feb 13 19:45:31.795214 systemd[1]: cri-containerd-379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2.scope: Deactivated successfully.
Feb 13 19:45:31.796042 systemd[1]: cri-containerd-379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2.scope: Consumed 10.761s CPU time.
Feb 13 19:45:31.814598 containerd[1487]: time="2025-02-13T19:45:31.814247044Z" level=info msg="shim disconnected" id=382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248 namespace=k8s.io
Feb 13 19:45:31.814598 containerd[1487]: time="2025-02-13T19:45:31.814336255Z" level=warning msg="cleaning up after shim disconnected" id=382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248 namespace=k8s.io
Feb 13 19:45:31.814598 containerd[1487]: time="2025-02-13T19:45:31.814357012Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:45:31.834063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2-rootfs.mount: Deactivated successfully.
Feb 13 19:45:31.843510 containerd[1487]: time="2025-02-13T19:45:31.843245211Z" level=info msg="shim disconnected" id=379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2 namespace=k8s.io
Feb 13 19:45:31.843510 containerd[1487]: time="2025-02-13T19:45:31.843328380Z" level=warning msg="cleaning up after shim disconnected" id=379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2 namespace=k8s.io
Feb 13 19:45:31.843510 containerd[1487]: time="2025-02-13T19:45:31.843345154Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:45:31.860565 containerd[1487]: time="2025-02-13T19:45:31.860257169Z" level=info msg="StopContainer for \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\" returns successfully"
Feb 13 19:45:31.861822 containerd[1487]: time="2025-02-13T19:45:31.861779425Z" level=info msg="StopPodSandbox for \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\""
Feb 13 19:45:31.862677 containerd[1487]: time="2025-02-13T19:45:31.862233416Z" level=info msg="Container to stop \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:45:31.871043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0-shm.mount: Deactivated successfully.
Feb 13 19:45:31.877599 containerd[1487]: time="2025-02-13T19:45:31.877479964Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:45:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:45:31.882991 containerd[1487]: time="2025-02-13T19:45:31.882928578Z" level=info msg="StopContainer for \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\" returns successfully"
Feb 13 19:45:31.883646 containerd[1487]: time="2025-02-13T19:45:31.883594027Z" level=info msg="StopPodSandbox for \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\""
Feb 13 19:45:31.883788 containerd[1487]: time="2025-02-13T19:45:31.883662101Z" level=info msg="Container to stop \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:45:31.883788 containerd[1487]: time="2025-02-13T19:45:31.883717962Z" level=info msg="Container to stop \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:45:31.883788 containerd[1487]: time="2025-02-13T19:45:31.883739285Z" level=info msg="Container to stop \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:45:31.883788 containerd[1487]: time="2025-02-13T19:45:31.883755415Z" level=info msg="Container to stop \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:45:31.883788 containerd[1487]: time="2025-02-13T19:45:31.883774132Z" level=info msg="Container to stop \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:45:31.885579 systemd[1]: cri-containerd-a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0.scope: Deactivated successfully.
Feb 13 19:45:31.904966 systemd[1]: cri-containerd-4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed.scope: Deactivated successfully.
Feb 13 19:45:31.957584 containerd[1487]: time="2025-02-13T19:45:31.956379886Z" level=info msg="shim disconnected" id=4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed namespace=k8s.io
Feb 13 19:45:31.957584 containerd[1487]: time="2025-02-13T19:45:31.956457422Z" level=warning msg="cleaning up after shim disconnected" id=4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed namespace=k8s.io
Feb 13 19:45:31.957584 containerd[1487]: time="2025-02-13T19:45:31.956481690Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:45:31.957584 containerd[1487]: time="2025-02-13T19:45:31.956916386Z" level=info msg="shim disconnected" id=a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0 namespace=k8s.io
Feb 13 19:45:31.957584 containerd[1487]: time="2025-02-13T19:45:31.956962725Z" level=warning msg="cleaning up after shim disconnected" id=a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0 namespace=k8s.io
Feb 13 19:45:31.957584 containerd[1487]: time="2025-02-13T19:45:31.956976228Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:45:31.990659 containerd[1487]: time="2025-02-13T19:45:31.990511285Z" level=info msg="TearDown network for sandbox \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\" successfully"
Feb 13 19:45:31.991502 containerd[1487]: time="2025-02-13T19:45:31.990862083Z" level=info msg="StopPodSandbox for \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\" returns successfully"
Feb 13 19:45:31.991502 containerd[1487]: time="2025-02-13T19:45:31.991186321Z" level=info msg="TearDown network for sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" successfully"
Feb 13 19:45:31.991502 containerd[1487]: time="2025-02-13T19:45:31.991210984Z" level=info msg="StopPodSandbox for \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" returns successfully"
Feb 13 19:45:32.026412 kubelet[2678]: I0213 19:45:32.026226 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598nh\" (UniqueName: \"kubernetes.io/projected/3acf21eb-12e1-4b76-b2dc-ec053651c899-kube-api-access-598nh\") pod \"3acf21eb-12e1-4b76-b2dc-ec053651c899\" (UID: \"3acf21eb-12e1-4b76-b2dc-ec053651c899\") "
Feb 13 19:45:32.026681 kubelet[2678]: I0213 19:45:32.026604 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3acf21eb-12e1-4b76-b2dc-ec053651c899-cilium-config-path\") pod \"3acf21eb-12e1-4b76-b2dc-ec053651c899\" (UID: \"3acf21eb-12e1-4b76-b2dc-ec053651c899\") "
Feb 13 19:45:32.032572 kubelet[2678]: I0213 19:45:32.031233 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3acf21eb-12e1-4b76-b2dc-ec053651c899-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3acf21eb-12e1-4b76-b2dc-ec053651c899" (UID: "3acf21eb-12e1-4b76-b2dc-ec053651c899"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 19:45:32.033466 kubelet[2678]: I0213 19:45:32.033410 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3acf21eb-12e1-4b76-b2dc-ec053651c899-kube-api-access-598nh" (OuterVolumeSpecName: "kube-api-access-598nh") pod "3acf21eb-12e1-4b76-b2dc-ec053651c899" (UID: "3acf21eb-12e1-4b76-b2dc-ec053651c899"). InnerVolumeSpecName "kube-api-access-598nh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 19:45:32.087953 systemd[1]: Removed slice kubepods-besteffort-pod3acf21eb_12e1_4b76_b2dc_ec053651c899.slice - libcontainer container kubepods-besteffort-pod3acf21eb_12e1_4b76_b2dc_ec053651c899.slice.
Feb 13 19:45:32.127317 kubelet[2678]: I0213 19:45:32.127244 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-config-path\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127317 kubelet[2678]: I0213 19:45:32.127320 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-xtables-lock\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127637 kubelet[2678]: I0213 19:45:32.127359 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-hubble-tls\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127637 kubelet[2678]: I0213 19:45:32.127393 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-run\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127637 kubelet[2678]: I0213 19:45:32.127420 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbnwl\" (UniqueName: \"kubernetes.io/projected/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-kube-api-access-tbnwl\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127637 kubelet[2678]: I0213 19:45:32.127455 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-cgroup\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127637 kubelet[2678]: I0213 19:45:32.127484 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cni-path\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127637 kubelet[2678]: I0213 19:45:32.127508 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-etc-cni-netd\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127963 kubelet[2678]: I0213 19:45:32.127561 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-lib-modules\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127963 kubelet[2678]: I0213 19:45:32.127591 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-bpf-maps\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127963 kubelet[2678]: I0213 19:45:32.127620 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-host-proc-sys-net\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127963 kubelet[2678]: I0213 19:45:32.127648 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-hostproc\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127963 kubelet[2678]: I0213 19:45:32.127680 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-host-proc-sys-kernel\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.127963 kubelet[2678]: I0213 19:45:32.127716 2678 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-clustermesh-secrets\") pod \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\" (UID: \"544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6\") "
Feb 13 19:45:32.130788 kubelet[2678]: I0213 19:45:32.127776 2678 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-598nh\" (UniqueName: \"kubernetes.io/projected/3acf21eb-12e1-4b76-b2dc-ec053651c899-kube-api-access-598nh\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:45:32.130788 kubelet[2678]: I0213 19:45:32.127798 2678 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3acf21eb-12e1-4b76-b2dc-ec053651c899-cilium-config-path\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:45:32.130788 kubelet[2678]: I0213 19:45:32.128257 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cni-path" (OuterVolumeSpecName: "cni-path") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:45:32.131209 kubelet[2678]: I0213 19:45:32.131157 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:45:32.134713 kubelet[2678]: I0213 19:45:32.131356 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:45:32.134909 kubelet[2678]: I0213 19:45:32.131752 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 19:45:32.135015 kubelet[2678]: I0213 19:45:32.131782 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:45:32.135124 kubelet[2678]: I0213 19:45:32.131816 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:45:32.135212 kubelet[2678]: I0213 19:45:32.131836 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-hostproc" (OuterVolumeSpecName: "hostproc") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:45:32.135301 kubelet[2678]: I0213 19:45:32.131852 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:45:32.135487 kubelet[2678]: I0213 19:45:32.133624 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:45:32.137378 kubelet[2678]: I0213 19:45:32.135710 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:45:32.137378 kubelet[2678]: I0213 19:45:32.135818 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:45:32.137869 kubelet[2678]: I0213 19:45:32.137721 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:45:32.138509 kubelet[2678]: I0213 19:45:32.138469 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:45:32.140658 kubelet[2678]: I0213 19:45:32.140603 2678 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-kube-api-access-tbnwl" (OuterVolumeSpecName: "kube-api-access-tbnwl") pod "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" (UID: "544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6"). InnerVolumeSpecName "kube-api-access-tbnwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:45:32.228082 kubelet[2678]: I0213 19:45:32.227973 2678 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cni-path\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228082 kubelet[2678]: I0213 19:45:32.228072 2678 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-etc-cni-netd\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228082 kubelet[2678]: I0213 19:45:32.228092 2678 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-lib-modules\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228417 kubelet[2678]: I0213 19:45:32.228110 2678 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-bpf-maps\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228417 kubelet[2678]: I0213 19:45:32.228130 2678 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-host-proc-sys-net\") 
on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228417 kubelet[2678]: I0213 19:45:32.228145 2678 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-hostproc\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228417 kubelet[2678]: I0213 19:45:32.228160 2678 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-clustermesh-secrets\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228417 kubelet[2678]: I0213 19:45:32.228177 2678 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-host-proc-sys-kernel\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228417 kubelet[2678]: I0213 19:45:32.228193 2678 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-config-path\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228417 kubelet[2678]: I0213 19:45:32.228216 2678 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-xtables-lock\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228684 kubelet[2678]: I0213 19:45:32.228232 2678 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-hubble-tls\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" 
DevicePath \"\"" Feb 13 19:45:32.228684 kubelet[2678]: I0213 19:45:32.228252 2678 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-run\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228684 kubelet[2678]: I0213 19:45:32.228267 2678 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-cilium-cgroup\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.228684 kubelet[2678]: I0213 19:45:32.228283 2678 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tbnwl\" (UniqueName: \"kubernetes.io/projected/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6-kube-api-access-tbnwl\") on node \"ci-4186-1-1-7e36f915828f12e2ec74.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:45:32.479701 kubelet[2678]: I0213 19:45:32.479461 2678 scope.go:117] "RemoveContainer" containerID="379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2" Feb 13 19:45:32.484199 containerd[1487]: time="2025-02-13T19:45:32.484112036Z" level=info msg="RemoveContainer for \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\"" Feb 13 19:45:32.498375 containerd[1487]: time="2025-02-13T19:45:32.497646998Z" level=info msg="RemoveContainer for \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\" returns successfully" Feb 13 19:45:32.498373 systemd[1]: Removed slice kubepods-burstable-pod544ebe5d_fa5a_41ea_9d1b_a2805a8d2ac6.slice - libcontainer container kubepods-burstable-pod544ebe5d_fa5a_41ea_9d1b_a2805a8d2ac6.slice. Feb 13 19:45:32.498568 systemd[1]: kubepods-burstable-pod544ebe5d_fa5a_41ea_9d1b_a2805a8d2ac6.slice: Consumed 10.903s CPU time. 
Feb 13 19:45:32.500397 kubelet[2678]: I0213 19:45:32.498996 2678 scope.go:117] "RemoveContainer" containerID="d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9" Feb 13 19:45:32.502418 containerd[1487]: time="2025-02-13T19:45:32.502371719Z" level=info msg="RemoveContainer for \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\"" Feb 13 19:45:32.507881 containerd[1487]: time="2025-02-13T19:45:32.507821819Z" level=info msg="RemoveContainer for \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\" returns successfully" Feb 13 19:45:32.509039 kubelet[2678]: I0213 19:45:32.508582 2678 scope.go:117] "RemoveContainer" containerID="e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98" Feb 13 19:45:32.511794 containerd[1487]: time="2025-02-13T19:45:32.511733883Z" level=info msg="RemoveContainer for \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\"" Feb 13 19:45:32.522071 containerd[1487]: time="2025-02-13T19:45:32.521838249Z" level=info msg="RemoveContainer for \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\" returns successfully" Feb 13 19:45:32.523186 kubelet[2678]: I0213 19:45:32.523135 2678 scope.go:117] "RemoveContainer" containerID="ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b" Feb 13 19:45:32.525469 containerd[1487]: time="2025-02-13T19:45:32.525346120Z" level=info msg="RemoveContainer for \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\"" Feb 13 19:45:32.536029 containerd[1487]: time="2025-02-13T19:45:32.535338825Z" level=info msg="RemoveContainer for \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\" returns successfully" Feb 13 19:45:32.536302 kubelet[2678]: I0213 19:45:32.535659 2678 scope.go:117] "RemoveContainer" containerID="3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8" Feb 13 19:45:32.537593 containerd[1487]: time="2025-02-13T19:45:32.537418336Z" level=info msg="RemoveContainer for 
\"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\"" Feb 13 19:45:32.543306 containerd[1487]: time="2025-02-13T19:45:32.543229786Z" level=info msg="RemoveContainer for \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\" returns successfully" Feb 13 19:45:32.543796 kubelet[2678]: I0213 19:45:32.543728 2678 scope.go:117] "RemoveContainer" containerID="379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2" Feb 13 19:45:32.544342 containerd[1487]: time="2025-02-13T19:45:32.544268163Z" level=error msg="ContainerStatus for \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\": not found" Feb 13 19:45:32.544608 kubelet[2678]: E0213 19:45:32.544555 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\": not found" containerID="379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2" Feb 13 19:45:32.544777 kubelet[2678]: I0213 19:45:32.544618 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2"} err="failed to get container status \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"379227f20e882615b462437bd142f8bd0df401ebdc0c7ce50e45d017353ba7b2\": not found" Feb 13 19:45:32.544870 kubelet[2678]: I0213 19:45:32.544787 2678 scope.go:117] "RemoveContainer" containerID="d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9" Feb 13 19:45:32.545145 containerd[1487]: time="2025-02-13T19:45:32.545084397Z" level=error msg="ContainerStatus for 
\"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\": not found" Feb 13 19:45:32.545311 kubelet[2678]: E0213 19:45:32.545267 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\": not found" containerID="d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9" Feb 13 19:45:32.545401 kubelet[2678]: I0213 19:45:32.545324 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9"} err="failed to get container status \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\": rpc error: code = NotFound desc = an error occurred when try to find container \"d8eb41db8b7217eea31705f8e4622a85ffefcaca1e715c6b8c2accc1e42ee4a9\": not found" Feb 13 19:45:32.545401 kubelet[2678]: I0213 19:45:32.545359 2678 scope.go:117] "RemoveContainer" containerID="e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98" Feb 13 19:45:32.545791 containerd[1487]: time="2025-02-13T19:45:32.545740504Z" level=error msg="ContainerStatus for \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\": not found" Feb 13 19:45:32.546344 kubelet[2678]: E0213 19:45:32.546272 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\": not found" 
containerID="e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98" Feb 13 19:45:32.546602 kubelet[2678]: I0213 19:45:32.546340 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98"} err="failed to get container status \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\": rpc error: code = NotFound desc = an error occurred when try to find container \"e9fc9fa76ef923198f22551db1b4e1861fe036b72682b36d5a5b4fc92e213d98\": not found" Feb 13 19:45:32.546602 kubelet[2678]: I0213 19:45:32.546388 2678 scope.go:117] "RemoveContainer" containerID="ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b" Feb 13 19:45:32.546976 containerd[1487]: time="2025-02-13T19:45:32.546898805Z" level=error msg="ContainerStatus for \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\": not found" Feb 13 19:45:32.547169 kubelet[2678]: E0213 19:45:32.547140 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\": not found" containerID="ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b" Feb 13 19:45:32.547377 kubelet[2678]: I0213 19:45:32.547185 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b"} err="failed to get container status \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ac6630d28155d039e24c0ac413c8016892c5a19b034ebc6b1621d2fa896a715b\": not found" Feb 13 
19:45:32.547377 kubelet[2678]: I0213 19:45:32.547272 2678 scope.go:117] "RemoveContainer" containerID="3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8" Feb 13 19:45:32.547941 containerd[1487]: time="2025-02-13T19:45:32.547885848Z" level=error msg="ContainerStatus for \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\": not found" Feb 13 19:45:32.548410 kubelet[2678]: E0213 19:45:32.548122 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\": not found" containerID="3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8" Feb 13 19:45:32.548410 kubelet[2678]: I0213 19:45:32.548162 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8"} err="failed to get container status \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d569bc8ceaf2a3a0b1b57a7902074d4b8b4a30aee15fdb126f71f03cf234ba8\": not found" Feb 13 19:45:32.548410 kubelet[2678]: I0213 19:45:32.548188 2678 scope.go:117] "RemoveContainer" containerID="382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248" Feb 13 19:45:32.550262 containerd[1487]: time="2025-02-13T19:45:32.550204103Z" level=info msg="RemoveContainer for \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\"" Feb 13 19:45:32.555787 containerd[1487]: time="2025-02-13T19:45:32.555702476Z" level=info msg="RemoveContainer for \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\" returns successfully" Feb 13 19:45:32.556180 kubelet[2678]: 
I0213 19:45:32.556061 2678 scope.go:117] "RemoveContainer" containerID="382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248" Feb 13 19:45:32.556481 containerd[1487]: time="2025-02-13T19:45:32.556395926Z" level=error msg="ContainerStatus for \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\": not found" Feb 13 19:45:32.556741 kubelet[2678]: E0213 19:45:32.556707 2678 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\": not found" containerID="382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248" Feb 13 19:45:32.556842 kubelet[2678]: I0213 19:45:32.556752 2678 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248"} err="failed to get container status \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\": rpc error: code = NotFound desc = an error occurred when try to find container \"382b6acd1f7ebacbb07cc961381d98258b2ee03e9877e5377e2971f0dd390248\": not found" Feb 13 19:45:32.677235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0-rootfs.mount: Deactivated successfully. Feb 13 19:45:32.677959 systemd[1]: var-lib-kubelet-pods-3acf21eb\x2d12e1\x2d4b76\x2db2dc\x2dec053651c899-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d598nh.mount: Deactivated successfully. Feb 13 19:45:32.678100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed-rootfs.mount: Deactivated successfully. 
Feb 13 19:45:32.678226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed-shm.mount: Deactivated successfully. Feb 13 19:45:32.678367 systemd[1]: var-lib-kubelet-pods-544ebe5d\x2dfa5a\x2d41ea\x2d9d1b\x2da2805a8d2ac6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtbnwl.mount: Deactivated successfully. Feb 13 19:45:32.678731 systemd[1]: var-lib-kubelet-pods-544ebe5d\x2dfa5a\x2d41ea\x2d9d1b\x2da2805a8d2ac6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:45:32.678970 systemd[1]: var-lib-kubelet-pods-544ebe5d\x2dfa5a\x2d41ea\x2d9d1b\x2da2805a8d2ac6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:45:33.611759 sshd[4263]: Connection closed by 139.178.68.195 port 51066 Feb 13 19:45:33.612781 sshd-session[4261]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:33.618257 systemd[1]: sshd@24-10.128.0.110:22-139.178.68.195:51066.service: Deactivated successfully. Feb 13 19:45:33.621403 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:45:33.621731 systemd[1]: session-25.scope: Consumed 1.119s CPU time. Feb 13 19:45:33.623947 systemd-logind[1467]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:45:33.625463 systemd-logind[1467]: Removed session 25. Feb 13 19:45:33.669021 systemd[1]: Started sshd@25-10.128.0.110:22-139.178.68.195:51082.service - OpenSSH per-connection server daemon (139.178.68.195:51082). Feb 13 19:45:33.968914 sshd[4421]: Accepted publickey for core from 139.178.68.195 port 51082 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c Feb 13 19:45:33.971133 sshd-session[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:45:33.980188 systemd-logind[1467]: New session 26 of user core. Feb 13 19:45:33.984802 systemd[1]: Started session-26.scope - Session 26 of User core. 
Feb 13 19:45:34.081041 kubelet[2678]: I0213 19:45:34.080968 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3acf21eb-12e1-4b76-b2dc-ec053651c899" path="/var/lib/kubelet/pods/3acf21eb-12e1-4b76-b2dc-ec053651c899/volumes" Feb 13 19:45:34.081988 kubelet[2678]: I0213 19:45:34.081939 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" path="/var/lib/kubelet/pods/544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6/volumes" Feb 13 19:45:34.157821 ntpd[1456]: Deleting interface #12 lxc_health, fe80::dc76:c6ff:fe59:d133%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Feb 13 19:45:34.158356 ntpd[1456]: 13 Feb 19:45:34 ntpd[1456]: Deleting interface #12 lxc_health, fe80::dc76:c6ff:fe59:d133%8#123, interface stats: received=0, sent=0, dropped=0, active_time=86 secs Feb 13 19:45:35.242908 kubelet[2678]: E0213 19:45:35.242648 2678 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:45:35.451465 kubelet[2678]: E0213 19:45:35.449860 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" containerName="mount-bpf-fs" Feb 13 19:45:35.451465 kubelet[2678]: E0213 19:45:35.449907 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" containerName="mount-cgroup" Feb 13 19:45:35.451465 kubelet[2678]: E0213 19:45:35.449919 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" containerName="apply-sysctl-overwrites" Feb 13 19:45:35.451465 kubelet[2678]: E0213 19:45:35.449930 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" containerName="clean-cilium-state" Feb 13 19:45:35.451465 kubelet[2678]: E0213 19:45:35.449942 2678 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" containerName="cilium-agent" Feb 13 19:45:35.451465 kubelet[2678]: E0213 19:45:35.449952 2678 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3acf21eb-12e1-4b76-b2dc-ec053651c899" containerName="cilium-operator" Feb 13 19:45:35.451465 kubelet[2678]: I0213 19:45:35.449997 2678 memory_manager.go:354] "RemoveStaleState removing state" podUID="544ebe5d-fa5a-41ea-9d1b-a2805a8d2ac6" containerName="cilium-agent" Feb 13 19:45:35.451465 kubelet[2678]: I0213 19:45:35.450009 2678 memory_manager.go:354] "RemoveStaleState removing state" podUID="3acf21eb-12e1-4b76-b2dc-ec053651c899" containerName="cilium-operator" Feb 13 19:45:35.467372 systemd[1]: Created slice kubepods-burstable-pod16d07609_6a6e_4559_badf_ccf73a07ff3b.slice - libcontainer container kubepods-burstable-pod16d07609_6a6e_4559_badf_ccf73a07ff3b.slice. Feb 13 19:45:35.478562 sshd[4425]: Connection closed by 139.178.68.195 port 51082 Feb 13 19:45:35.472318 sshd-session[4421]: pam_unix(sshd:session): session closed for user core Feb 13 19:45:35.486818 systemd[1]: sshd@25-10.128.0.110:22-139.178.68.195:51082.service: Deactivated successfully. Feb 13 19:45:35.493419 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:45:35.493939 systemd[1]: session-26.scope: Consumed 1.272s CPU time. Feb 13 19:45:35.500203 systemd-logind[1467]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:45:35.503365 systemd-logind[1467]: Removed session 26. Feb 13 19:45:35.538992 systemd[1]: Started sshd@26-10.128.0.110:22-139.178.68.195:51094.service - OpenSSH per-connection server daemon (139.178.68.195:51094). 
Feb 13 19:45:35.553851 kubelet[2678]: I0213 19:45:35.552870 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/16d07609-6a6e-4559-badf-ccf73a07ff3b-clustermesh-secrets\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.553851 kubelet[2678]: I0213 19:45:35.552923 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-host-proc-sys-kernel\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.553851 kubelet[2678]: I0213 19:45:35.552953 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/16d07609-6a6e-4559-badf-ccf73a07ff3b-cilium-ipsec-secrets\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.553851 kubelet[2678]: I0213 19:45:35.552995 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-host-proc-sys-net\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.553851 kubelet[2678]: I0213 19:45:35.553022 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-cilium-run\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.553851 kubelet[2678]: I0213 19:45:35.553049 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-hostproc\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.554304 kubelet[2678]: I0213 19:45:35.553074 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/16d07609-6a6e-4559-badf-ccf73a07ff3b-cilium-config-path\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.554304 kubelet[2678]: I0213 19:45:35.553111 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/16d07609-6a6e-4559-badf-ccf73a07ff3b-hubble-tls\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.554304 kubelet[2678]: I0213 19:45:35.553139 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-lib-modules\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.554304 kubelet[2678]: I0213 19:45:35.553167 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhwb\" (UniqueName: \"kubernetes.io/projected/16d07609-6a6e-4559-badf-ccf73a07ff3b-kube-api-access-xrhwb\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.556861 kubelet[2678]: I0213 19:45:35.556653 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-cilium-cgroup\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.556861 kubelet[2678]: I0213 19:45:35.556744 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-cni-path\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.556861 kubelet[2678]: I0213 19:45:35.556805 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-bpf-maps\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.558650 kubelet[2678]: I0213 19:45:35.556840 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-etc-cni-netd\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.558650 kubelet[2678]: I0213 19:45:35.558571 2678 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16d07609-6a6e-4559-badf-ccf73a07ff3b-xtables-lock\") pod \"cilium-p6f6h\" (UID: \"16d07609-6a6e-4559-badf-ccf73a07ff3b\") " pod="kube-system/cilium-p6f6h"
Feb 13 19:45:35.784909 containerd[1487]: time="2025-02-13T19:45:35.784168955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p6f6h,Uid:16d07609-6a6e-4559-badf-ccf73a07ff3b,Namespace:kube-system,Attempt:0,}"
Feb 13 19:45:35.828076 containerd[1487]: time="2025-02-13T19:45:35.827596651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:45:35.828076 containerd[1487]: time="2025-02-13T19:45:35.827774659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:45:35.828076 containerd[1487]: time="2025-02-13T19:45:35.827821884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:45:35.828970 containerd[1487]: time="2025-02-13T19:45:35.828003018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:45:35.857801 systemd[1]: Started cri-containerd-2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9.scope - libcontainer container 2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9.
Feb 13 19:45:35.897324 containerd[1487]: time="2025-02-13T19:45:35.897184987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p6f6h,Uid:16d07609-6a6e-4559-badf-ccf73a07ff3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\""
Feb 13 19:45:35.898439 sshd[4436]: Accepted publickey for core from 139.178.68.195 port 51094 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:35.902315 sshd-session[4436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:35.906097 containerd[1487]: time="2025-02-13T19:45:35.905490125Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:45:35.914797 systemd-logind[1467]: New session 27 of user core.
Feb 13 19:45:35.921908 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:45:35.930459 containerd[1487]: time="2025-02-13T19:45:35.929936783Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"02b0b46a8ce0cd9df4768df87c2ed53cc3637fe69af9c646c1cbbe6be4074907\""
Feb 13 19:45:35.931895 containerd[1487]: time="2025-02-13T19:45:35.931759378Z" level=info msg="StartContainer for \"02b0b46a8ce0cd9df4768df87c2ed53cc3637fe69af9c646c1cbbe6be4074907\""
Feb 13 19:45:35.975911 systemd[1]: Started cri-containerd-02b0b46a8ce0cd9df4768df87c2ed53cc3637fe69af9c646c1cbbe6be4074907.scope - libcontainer container 02b0b46a8ce0cd9df4768df87c2ed53cc3637fe69af9c646c1cbbe6be4074907.
Feb 13 19:45:36.017870 containerd[1487]: time="2025-02-13T19:45:36.017807537Z" level=info msg="StartContainer for \"02b0b46a8ce0cd9df4768df87c2ed53cc3637fe69af9c646c1cbbe6be4074907\" returns successfully"
Feb 13 19:45:36.033499 systemd[1]: cri-containerd-02b0b46a8ce0cd9df4768df87c2ed53cc3637fe69af9c646c1cbbe6be4074907.scope: Deactivated successfully.
Feb 13 19:45:36.085070 containerd[1487]: time="2025-02-13T19:45:36.084753447Z" level=info msg="shim disconnected" id=02b0b46a8ce0cd9df4768df87c2ed53cc3637fe69af9c646c1cbbe6be4074907 namespace=k8s.io
Feb 13 19:45:36.085070 containerd[1487]: time="2025-02-13T19:45:36.084829131Z" level=warning msg="cleaning up after shim disconnected" id=02b0b46a8ce0cd9df4768df87c2ed53cc3637fe69af9c646c1cbbe6be4074907 namespace=k8s.io
Feb 13 19:45:36.085070 containerd[1487]: time="2025-02-13T19:45:36.084843870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:45:36.116612 sshd[4486]: Connection closed by 139.178.68.195 port 51094
Feb 13 19:45:36.117633 sshd-session[4436]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:36.124090 systemd[1]: sshd@26-10.128.0.110:22-139.178.68.195:51094.service: Deactivated successfully.
Feb 13 19:45:36.127098 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:45:36.128360 systemd-logind[1467]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:45:36.130249 systemd-logind[1467]: Removed session 27.
Feb 13 19:45:36.177099 systemd[1]: Started sshd@27-10.128.0.110:22-139.178.68.195:51104.service - OpenSSH per-connection server daemon (139.178.68.195:51104).
Feb 13 19:45:36.478517 sshd[4554]: Accepted publickey for core from 139.178.68.195 port 51104 ssh2: RSA SHA256:r/2m5GDaYI6NVLjjK2e6K38+w/tgqQ1v0R6DcMMXQ1c
Feb 13 19:45:36.481998 sshd-session[4554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:45:36.490977 systemd-logind[1467]: New session 28 of user core.
Feb 13 19:45:36.499868 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:45:36.515284 containerd[1487]: time="2025-02-13T19:45:36.514658941Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:45:36.532906 containerd[1487]: time="2025-02-13T19:45:36.532829998Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284\""
Feb 13 19:45:36.533781 containerd[1487]: time="2025-02-13T19:45:36.533735157Z" level=info msg="StartContainer for \"a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284\""
Feb 13 19:45:36.583845 systemd[1]: Started cri-containerd-a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284.scope - libcontainer container a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284.
Feb 13 19:45:36.623041 containerd[1487]: time="2025-02-13T19:45:36.622984776Z" level=info msg="StartContainer for \"a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284\" returns successfully"
Feb 13 19:45:36.634191 systemd[1]: cri-containerd-a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284.scope: Deactivated successfully.
Feb 13 19:45:36.697473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284-rootfs.mount: Deactivated successfully.
Feb 13 19:45:36.700745 containerd[1487]: time="2025-02-13T19:45:36.699167953Z" level=info msg="shim disconnected" id=a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284 namespace=k8s.io
Feb 13 19:45:36.700745 containerd[1487]: time="2025-02-13T19:45:36.699243131Z" level=warning msg="cleaning up after shim disconnected" id=a70f4ae38924cf98c444e6348af5c985e2698d25a08ae5cf99c1a2934c1ec284 namespace=k8s.io
Feb 13 19:45:36.700745 containerd[1487]: time="2025-02-13T19:45:36.699258590Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:45:37.513924 containerd[1487]: time="2025-02-13T19:45:37.513854197Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:45:37.544218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1400074283.mount: Deactivated successfully.
Feb 13 19:45:37.547303 containerd[1487]: time="2025-02-13T19:45:37.547131695Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183\""
Feb 13 19:45:37.549438 containerd[1487]: time="2025-02-13T19:45:37.548102334Z" level=info msg="StartContainer for \"11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183\""
Feb 13 19:45:37.606809 systemd[1]: Started cri-containerd-11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183.scope - libcontainer container 11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183.
Feb 13 19:45:37.656614 containerd[1487]: time="2025-02-13T19:45:37.655979779Z" level=info msg="StartContainer for \"11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183\" returns successfully"
Feb 13 19:45:37.672397 systemd[1]: cri-containerd-11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183.scope: Deactivated successfully.
Feb 13 19:45:37.708327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183-rootfs.mount: Deactivated successfully.
Feb 13 19:45:37.712732 containerd[1487]: time="2025-02-13T19:45:37.712655496Z" level=info msg="shim disconnected" id=11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183 namespace=k8s.io
Feb 13 19:45:37.712949 containerd[1487]: time="2025-02-13T19:45:37.712743722Z" level=warning msg="cleaning up after shim disconnected" id=11031be70e64c480d26d3f50ac028020bb373a1c986defdb119891579a833183 namespace=k8s.io
Feb 13 19:45:37.712949 containerd[1487]: time="2025-02-13T19:45:37.712759333Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:45:38.523760 containerd[1487]: time="2025-02-13T19:45:38.523432323Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:45:38.554151 containerd[1487]: time="2025-02-13T19:45:38.553465820Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa\""
Feb 13 19:45:38.556637 containerd[1487]: time="2025-02-13T19:45:38.555798561Z" level=info msg="StartContainer for \"a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa\""
Feb 13 19:45:38.560097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569614871.mount: Deactivated successfully.
Feb 13 19:45:38.646842 systemd[1]: Started cri-containerd-a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa.scope - libcontainer container a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa.
Feb 13 19:45:38.794089 systemd[1]: cri-containerd-a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa.scope: Deactivated successfully.
Feb 13 19:45:38.803816 containerd[1487]: time="2025-02-13T19:45:38.803580775Z" level=info msg="StartContainer for \"a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa\" returns successfully"
Feb 13 19:45:38.850070 containerd[1487]: time="2025-02-13T19:45:38.848692796Z" level=info msg="shim disconnected" id=a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa namespace=k8s.io
Feb 13 19:45:38.850070 containerd[1487]: time="2025-02-13T19:45:38.848774174Z" level=warning msg="cleaning up after shim disconnected" id=a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa namespace=k8s.io
Feb 13 19:45:38.850070 containerd[1487]: time="2025-02-13T19:45:38.848790873Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:45:38.849601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a627fbbebebb9477691beddf310fd54fa15efb910d00357a06ebe0e148e184aa-rootfs.mount: Deactivated successfully.
Feb 13 19:45:39.527897 containerd[1487]: time="2025-02-13T19:45:39.527636210Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:45:39.564007 containerd[1487]: time="2025-02-13T19:45:39.563936441Z" level=info msg="CreateContainer within sandbox \"2e4c52620372dacbcd05095f8f7bc5f55b4f771f299f3738d55d0f676d30c5a9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3e2e66ba650d1e17c1556f9dbee2a99bc12d7d221f52dfd01dff68ba4f5c1116\""
Feb 13 19:45:39.565107 containerd[1487]: time="2025-02-13T19:45:39.564665734Z" level=info msg="StartContainer for \"3e2e66ba650d1e17c1556f9dbee2a99bc12d7d221f52dfd01dff68ba4f5c1116\""
Feb 13 19:45:39.616821 systemd[1]: Started cri-containerd-3e2e66ba650d1e17c1556f9dbee2a99bc12d7d221f52dfd01dff68ba4f5c1116.scope - libcontainer container 3e2e66ba650d1e17c1556f9dbee2a99bc12d7d221f52dfd01dff68ba4f5c1116.
Feb 13 19:45:39.674584 containerd[1487]: time="2025-02-13T19:45:39.674099741Z" level=info msg="StartContainer for \"3e2e66ba650d1e17c1556f9dbee2a99bc12d7d221f52dfd01dff68ba4f5c1116\" returns successfully"
Feb 13 19:45:40.060478 containerd[1487]: time="2025-02-13T19:45:40.060197998Z" level=info msg="StopPodSandbox for \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\""
Feb 13 19:45:40.060478 containerd[1487]: time="2025-02-13T19:45:40.060343222Z" level=info msg="TearDown network for sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" successfully"
Feb 13 19:45:40.060478 containerd[1487]: time="2025-02-13T19:45:40.060365949Z" level=info msg="StopPodSandbox for \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" returns successfully"
Feb 13 19:45:40.062544 containerd[1487]: time="2025-02-13T19:45:40.061882186Z" level=info msg="RemovePodSandbox for \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\""
Feb 13 19:45:40.062544 containerd[1487]: time="2025-02-13T19:45:40.061963806Z" level=info msg="Forcibly stopping sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\""
Feb 13 19:45:40.062544 containerd[1487]: time="2025-02-13T19:45:40.062127816Z" level=info msg="TearDown network for sandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" successfully"
Feb 13 19:45:40.068616 containerd[1487]: time="2025-02-13T19:45:40.068275882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:45:40.068616 containerd[1487]: time="2025-02-13T19:45:40.068361624Z" level=info msg="RemovePodSandbox \"4ff98513aa7ea4d4d2389656a8856a33ede83f287816d8aa1e5da2a216696aed\" returns successfully"
Feb 13 19:45:40.069833 containerd[1487]: time="2025-02-13T19:45:40.069518773Z" level=info msg="StopPodSandbox for \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\""
Feb 13 19:45:40.069833 containerd[1487]: time="2025-02-13T19:45:40.069659353Z" level=info msg="TearDown network for sandbox \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\" successfully"
Feb 13 19:45:40.069833 containerd[1487]: time="2025-02-13T19:45:40.069691367Z" level=info msg="StopPodSandbox for \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\" returns successfully"
Feb 13 19:45:40.071626 containerd[1487]: time="2025-02-13T19:45:40.070689770Z" level=info msg="RemovePodSandbox for \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\""
Feb 13 19:45:40.071626 containerd[1487]: time="2025-02-13T19:45:40.070729370Z" level=info msg="Forcibly stopping sandbox \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\""
Feb 13 19:45:40.071626 containerd[1487]: time="2025-02-13T19:45:40.070855355Z" level=info msg="TearDown network for sandbox \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\" successfully"
Feb 13 19:45:40.079915 containerd[1487]: time="2025-02-13T19:45:40.079263509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:45:40.079915 containerd[1487]: time="2025-02-13T19:45:40.079456138Z" level=info msg="RemovePodSandbox \"a8085d64ebb106dc17f2288c11c17dfb3eb21d70362933ae842fbc1674e6b9a0\" returns successfully"
Feb 13 19:45:40.259585 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:45:40.551300 kubelet[2678]: I0213 19:45:40.550755 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p6f6h" podStartSLOduration=5.550726461 podStartE2EDuration="5.550726461s" podCreationTimestamp="2025-02-13 19:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:45:40.549583213 +0000 UTC m=+120.744685268" watchObservedRunningTime="2025-02-13 19:45:40.550726461 +0000 UTC m=+120.745828539"
Feb 13 19:45:43.742822 systemd-networkd[1388]: lxc_health: Link UP
Feb 13 19:45:43.754592 systemd-networkd[1388]: lxc_health: Gained carrier
Feb 13 19:45:45.756851 systemd-networkd[1388]: lxc_health: Gained IPv6LL
Feb 13 19:45:48.157798 ntpd[1456]: Listen normally on 15 lxc_health [fe80::db:96ff:fe35:9eb6%14]:123
Feb 13 19:45:48.158580 ntpd[1456]: 13 Feb 19:45:48 ntpd[1456]: Listen normally on 15 lxc_health [fe80::db:96ff:fe35:9eb6%14]:123
Feb 13 19:45:50.060648 systemd[1]: run-containerd-runc-k8s.io-3e2e66ba650d1e17c1556f9dbee2a99bc12d7d221f52dfd01dff68ba4f5c1116-runc.eFyRP8.mount: Deactivated successfully.
Feb 13 19:45:50.173289 sshd[4556]: Connection closed by 139.178.68.195 port 51104
Feb 13 19:45:50.174403 sshd-session[4554]: pam_unix(sshd:session): session closed for user core
Feb 13 19:45:50.181219 systemd[1]: sshd@27-10.128.0.110:22-139.178.68.195:51104.service: Deactivated successfully.
Feb 13 19:45:50.184650 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:45:50.186041 systemd-logind[1467]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:45:50.188018 systemd-logind[1467]: Removed session 28.