Feb 13 19:29:23.129023 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 19:29:23.129069 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:29:23.129087 kernel: BIOS-provided physical RAM map:
Feb 13 19:29:23.129101 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 19:29:23.129115 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 19:29:23.129129 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 19:29:23.129146 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 19:29:23.129160 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 19:29:23.129178 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd328fff] usable
Feb 13 19:29:23.129192 kernel: BIOS-e820: [mem 0x00000000bd329000-0x00000000bd330fff] ACPI data
Feb 13 19:29:23.129207 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable
Feb 13 19:29:23.129222 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Feb 13 19:29:23.129236 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 19:29:23.129251 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 19:29:23.129273 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 19:29:23.129289 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 19:29:23.129305 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 19:29:23.129321 kernel: NX (Execute Disable) protection: active
Feb 13 19:29:23.129337 kernel: APIC: Static calls initialized
Feb 13 19:29:23.129353 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:29:23.129370 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd329018
Feb 13 19:29:23.129386 kernel: random: crng init done
Feb 13 19:29:23.129408 kernel: secureboot: Secure boot disabled
Feb 13 19:29:23.129424 kernel: SMBIOS 2.4 present.
Feb 13 19:29:23.129444 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 19:29:23.129460 kernel: Hypervisor detected: KVM
Feb 13 19:29:23.129475 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:29:23.129491 kernel: kvm-clock: using sched offset of 13027222419 cycles
Feb 13 19:29:23.129533 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:29:23.129550 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 19:29:23.129566 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:29:23.129583 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:29:23.129599 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 19:29:23.129615 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 19:29:23.129636 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:29:23.129652 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 19:29:23.129668 kernel: Using GB pages for direct mapping
Feb 13 19:29:23.129685 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:29:23.129701 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 19:29:23.129718 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 19:29:23.129740 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 19:29:23.129761 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 19:29:23.129778 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 19:29:23.129795 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 19:29:23.129813 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 19:29:23.129830 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 19:29:23.129847 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 19:29:23.129864 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 19:29:23.129885 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 19:29:23.129902 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 19:29:23.129920 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 19:29:23.129937 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 19:29:23.129954 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 19:29:23.129971 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 19:29:23.129988 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 19:29:23.130005 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 19:29:23.130022 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 19:29:23.130043 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 19:29:23.130060 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:29:23.130077 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 19:29:23.130094 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 19:29:23.130111 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 19:29:23.130128 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 19:29:23.130145 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 19:29:23.130163 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 19:29:23.130180 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Feb 13 19:29:23.130200 kernel: Zone ranges:
Feb 13 19:29:23.130218 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:29:23.130235 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 19:29:23.130252 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 19:29:23.130269 kernel: Movable zone start for each node
Feb 13 19:29:23.130286 kernel: Early memory node ranges
Feb 13 19:29:23.130303 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 19:29:23.130320 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 19:29:23.130337 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff]
Feb 13 19:29:23.130358 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff]
Feb 13 19:29:23.130375 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 19:29:23.130392 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 19:29:23.130419 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 19:29:23.130437 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:29:23.130454 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 19:29:23.130471 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 19:29:23.130488 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Feb 13 19:29:23.130520 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:29:23.130541 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 19:29:23.130559 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 19:29:23.130576 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:29:23.130593 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:29:23.130610 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:29:23.130627 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:29:23.130645 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:29:23.130662 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:29:23.130679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:29:23.130699 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 19:29:23.130716 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 19:29:23.130733 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:29:23.130751 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:29:23.130769 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 19:29:23.130786 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 19:29:23.130803 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 19:29:23.130819 kernel: pcpu-alloc: [0] 0 1
Feb 13 19:29:23.130836 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:29:23.130857 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:29:23.130876 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:29:23.130894 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:29:23.130911 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 19:29:23.130928 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:29:23.130945 kernel: Fallback order for Node 0: 0
Feb 13 19:29:23.130961 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Feb 13 19:29:23.131903 kernel: Policy zone: Normal
Feb 13 19:29:23.131934 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:29:23.131951 kernel: software IO TLB: area num 2.
Feb 13 19:29:23.131969 kernel: Memory: 7511320K/7860552K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 348976K reserved, 0K cma-reserved)
Feb 13 19:29:23.131987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 19:29:23.132004 kernel: Kernel/User page tables isolation: enabled
Feb 13 19:29:23.132020 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:29:23.132037 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:29:23.132055 kernel: Dynamic Preempt: voluntary
Feb 13 19:29:23.132090 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:29:23.132111 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:29:23.132130 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 19:29:23.132149 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:29:23.132171 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:29:23.132189 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:29:23.132206 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:29:23.132225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 19:29:23.132244 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 19:29:23.132268 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:29:23.132288 kernel: Console: colour dummy device 80x25
Feb 13 19:29:23.132306 kernel: printk: console [ttyS0] enabled
Feb 13 19:29:23.132325 kernel: ACPI: Core revision 20230628
Feb 13 19:29:23.132344 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:29:23.132363 kernel: x2apic enabled
Feb 13 19:29:23.132383 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:29:23.132410 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 19:29:23.132429 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 19:29:23.132453 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 19:29:23.132473 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 19:29:23.132515 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 19:29:23.132536 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:29:23.132555 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 19:29:23.132574 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 19:29:23.132594 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 19:29:23.132614 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:29:23.132633 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:29:23.132657 kernel: RETBleed: Mitigation: IBRS
Feb 13 19:29:23.132677 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:29:23.132695 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 19:29:23.132714 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:29:23.132733 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 19:29:23.132752 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:29:23.132772 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:29:23.132792 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:29:23.132811 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:29:23.132834 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:29:23.132853 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 19:29:23.132872 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:29:23.132891 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:29:23.132911 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:29:23.132930 kernel: landlock: Up and running.
Feb 13 19:29:23.132949 kernel: SELinux: Initializing.
Feb 13 19:29:23.132969 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:29:23.132988 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 19:29:23.133010 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 19:29:23.133030 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:29:23.133049 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:29:23.133069 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 19:29:23.133088 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 19:29:23.133107 kernel: signal: max sigframe size: 1776
Feb 13 19:29:23.133127 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:29:23.133146 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:29:23.133168 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 19:29:23.133188 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:29:23.133207 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:29:23.133226 kernel: .... node #0, CPUs: #1
Feb 13 19:29:23.133246 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 19:29:23.133267 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 19:29:23.133286 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 19:29:23.133305 kernel: smpboot: Max logical packages: 1
Feb 13 19:29:23.133324 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 19:29:23.133347 kernel: devtmpfs: initialized
Feb 13 19:29:23.133366 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:29:23.133386 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 19:29:23.133411 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:29:23.133430 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 19:29:23.133450 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:29:23.133469 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:29:23.133488 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:29:23.137654 kernel: audit: type=2000 audit(1739474962.084:1): state=initialized audit_enabled=0 res=1
Feb 13 19:29:23.137685 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:29:23.137707 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:29:23.137728 kernel: cpuidle: using governor menu
Feb 13 19:29:23.137749 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:29:23.137769 kernel: dca service started, version 1.12.1
Feb 13 19:29:23.137790 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:29:23.137810 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:29:23.137831 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:29:23.137852 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:29:23.137877 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:29:23.137897 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:29:23.137918 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:29:23.137938 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:29:23.137957 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:29:23.137977 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:29:23.137996 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 19:29:23.138014 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:29:23.138030 kernel: ACPI: Interpreter enabled
Feb 13 19:29:23.138053 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:29:23.138071 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:29:23.138090 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:29:23.138107 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 19:29:23.138145 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 19:29:23.138163 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:29:23.138446 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:29:23.138683 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 19:29:23.138877 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 19:29:23.138901 kernel: PCI host bridge to bus 0000:00
Feb 13 19:29:23.139083 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:29:23.139262 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:29:23.140851 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:29:23.141036 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 19:29:23.141223 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:29:23.141427 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 19:29:23.141663 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 19:29:23.141871 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 19:29:23.142067 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 19:29:23.142270 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 19:29:23.142461 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 19:29:23.144752 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 19:29:23.144962 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:29:23.145159 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 19:29:23.145346 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 19:29:23.145557 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:29:23.145744 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 19:29:23.145936 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 19:29:23.145961 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:29:23.145981 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:29:23.146000 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:29:23.146020 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:29:23.146046 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 19:29:23.146066 kernel: iommu: Default domain type: Translated
Feb 13 19:29:23.146086 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:29:23.146105 kernel: efivars: Registered efivars operations
Feb 13 19:29:23.146129 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:29:23.146154 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:29:23.146174 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 19:29:23.146193 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 19:29:23.146212 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff]
Feb 13 19:29:23.146231 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 19:29:23.146249 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 19:29:23.146268 kernel: vgaarb: loaded
Feb 13 19:29:23.146288 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:29:23.146311 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:29:23.146330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:29:23.146349 kernel: pnp: PnP ACPI init
Feb 13 19:29:23.146368 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 19:29:23.146388 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:29:23.146407 kernel: NET: Registered PF_INET protocol family
Feb 13 19:29:23.146426 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:29:23.146445 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 19:29:23.146465 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:29:23.146488 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:29:23.147558 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 19:29:23.147580 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 19:29:23.147600 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 19:29:23.147619 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 19:29:23.147639 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:29:23.147659 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:29:23.147858 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:29:23.148033 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:29:23.148206 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:29:23.148370 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 19:29:23.148580 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 19:29:23.148607 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:29:23.148628 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 19:29:23.148648 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 19:29:23.148667 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 19:29:23.148693 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 19:29:23.148712 kernel: clocksource: Switched to clocksource tsc
Feb 13 19:29:23.148731 kernel: Initialise system trusted keyrings
Feb 13 19:29:23.148750 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 19:29:23.148770 kernel: Key type asymmetric registered
Feb 13 19:29:23.148789 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:29:23.148808 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:29:23.148827 kernel: io scheduler mq-deadline registered
Feb 13 19:29:23.148846 kernel: io scheduler kyber registered
Feb 13 19:29:23.148869 kernel: io scheduler bfq registered
Feb 13 19:29:23.148888 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:29:23.148909 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 19:29:23.149105 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 19:29:23.149130 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 19:29:23.149319 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 19:29:23.149344 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 19:29:23.151585 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 19:29:23.151625 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:29:23.151646 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:29:23.151666 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 19:29:23.151686 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 19:29:23.151705 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 19:29:23.151910 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 19:29:23.151937 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:29:23.151958 kernel: i8042: Warning: Keylock active
Feb 13 19:29:23.151981 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:29:23.152000 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:29:23.152190 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 19:29:23.152367 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 19:29:23.152560 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:29:22 UTC (1739474962)
Feb 13 19:29:23.152731 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 19:29:23.152755 kernel: intel_pstate: CPU model not supported
Feb 13 19:29:23.152775 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:29:23.152799 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:29:23.152818 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:29:23.152837 kernel: Segment Routing with IPv6
Feb 13 19:29:23.152856 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:29:23.152875 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:29:23.152895 kernel: Key type dns_resolver registered
Feb 13 19:29:23.152913 kernel: IPI shorthand broadcast: enabled
Feb 13 19:29:23.152933 kernel: sched_clock: Marking stable (859004418, 136673573)->(1019655918, -23977927)
Feb 13 19:29:23.152952 kernel: registered taskstats version 1
Feb 13 19:29:23.152975 kernel: Loading compiled-in X.509 certificates
Feb 13 19:29:23.152994 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 19:29:23.153013 kernel: Key type .fscrypt registered
Feb 13 19:29:23.153032 kernel: Key type fscrypt-provisioning registered
Feb 13 19:29:23.153051 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:29:23.153070 kernel: ima: No architecture policies found
Feb 13 19:29:23.153088 kernel: clk: Disabling unused clocks
Feb 13 19:29:23.153108 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 19:29:23.153127 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:29:23.153156 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 19:29:23.153175 kernel: Run /init as init process
Feb 13 19:29:23.153194 kernel: with arguments:
Feb 13 19:29:23.153210 kernel: /init
Feb 13 19:29:23.153226 kernel: with environment:
Feb 13 19:29:23.153243 kernel: HOME=/
Feb 13 19:29:23.153259 kernel: TERM=linux
Feb 13 19:29:23.153276 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:29:23.153295 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:29:23.153321 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:29:23.153346 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:29:23.153366 systemd[1]: Detected virtualization google.
Feb 13 19:29:23.153386 systemd[1]: Detected architecture x86-64.
Feb 13 19:29:23.153404 systemd[1]: Running in initrd.
Feb 13 19:29:23.153423 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:29:23.153443 systemd[1]: Hostname set to .
Feb 13 19:29:23.153466 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:29:23.153484 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:29:23.154536 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:29:23.154562 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:29:23.154583 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:29:23.154601 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:29:23.154620 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:29:23.154647 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:29:23.154689 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:29:23.154713 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:29:23.154732 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:29:23.154752 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:29:23.154771 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:29:23.154795 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:29:23.154814 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:29:23.154833 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:29:23.154851 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:29:23.154871 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:29:23.154890 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:29:23.154910 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:29:23.154929 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:29:23.154954 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:29:23.154973 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:29:23.154993 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:29:23.155013 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:29:23.155034 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:29:23.155053 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:29:23.155072 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:29:23.155091 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:29:23.155111 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:29:23.155135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:29:23.155170 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:29:23.155191 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:29:23.155213 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:29:23.155236 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:29:23.155298 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 19:29:23.155347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:29:23.155370 systemd-journald[184]: Journal started
Feb 13 19:29:23.155417 systemd-journald[184]: Runtime Journal (/run/log/journal/1d18e71d553c4ea888cb0ffd4041c925) is 8M, max 148.6M, 140.6M free.
Feb 13 19:29:23.163397 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:29:23.163447 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:29:23.163479 kernel: Bridge firewalling registered
Feb 13 19:29:23.098445 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 19:29:23.164190 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 19:29:23.165275 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:29:23.166860 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:29:23.167842 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:29:23.182716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:29:23.185368 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:29:23.197745 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:29:23.203148 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:29:23.215797 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:29:23.220404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:29:23.228933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:29:23.232953 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:29:23.243720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:29:23.264335 dracut-cmdline[213]: dracut-dracut-053
Feb 13 19:29:23.269551 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:29:23.302853 systemd-resolved[219]: Positive Trust Anchors:
Feb 13 19:29:23.303393 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:29:23.303469 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:29:23.310117 systemd-resolved[219]: Defaulting to hostname 'linux'.
Feb 13 19:29:23.314282 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:29:23.325732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:29:23.378551 kernel: SCSI subsystem initialized
Feb 13 19:29:23.389545 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:29:23.402535 kernel: iscsi: registered transport (tcp)
Feb 13 19:29:23.425850 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:29:23.425938 kernel: QLogic iSCSI HBA Driver
Feb 13 19:29:23.479133 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:29:23.483765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:29:23.526600 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:29:23.526688 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:29:23.526717 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:29:23.571536 kernel: raid6: avx2x4 gen() 17686 MB/s
Feb 13 19:29:23.588529 kernel: raid6: avx2x2 gen() 17748 MB/s
Feb 13 19:29:23.605904 kernel: raid6: avx2x1 gen() 13820 MB/s
Feb 13 19:29:23.605967 kernel: raid6: using algorithm avx2x2 gen() 17748 MB/s
Feb 13 19:29:23.623928 kernel: raid6: .... xor() 18524 MB/s, rmw enabled
Feb 13 19:29:23.624021 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:29:23.646530 kernel: xor: automatically using best checksumming function avx
Feb 13 19:29:23.813547 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:29:23.826613 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:29:23.833731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:29:23.866069 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Feb 13 19:29:23.874054 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:29:23.884689 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:29:23.918525 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Feb 13 19:29:23.956277 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:29:23.961751 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:29:24.053998 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:29:24.073454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:29:24.128817 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:29:24.150424 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:29:24.181533 kernel: scsi host0: Virtio SCSI HBA
Feb 13 19:29:24.181290 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:29:24.252673 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:29:24.252717 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Feb 13 19:29:24.198645 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:29:24.253221 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:29:24.298638 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:29:24.298718 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:29:24.302413 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:29:24.303692 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:29:24.335653 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 19:29:24.396042 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 19:29:24.396331 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 19:29:24.396581 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 19:29:24.396809 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 19:29:24.397025 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:29:24.397051 kernel: GPT:17805311 != 25165823
Feb 13 19:29:24.397091 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:29:24.397114 kernel: GPT:17805311 != 25165823
Feb 13 19:29:24.397135 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:29:24.397158 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:29:24.397181 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 19:29:24.395869 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:29:24.406599 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:29:24.459282 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (446)
Feb 13 19:29:24.406850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:29:24.491690 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (461)
Feb 13 19:29:24.417943 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:29:24.447940 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:29:24.481279 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:29:24.482287 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:29:24.554207 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:29:24.588529 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 19:29:24.611491 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 19:29:24.624431 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 19:29:24.641450 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 19:29:24.649860 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 19:29:24.677774 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:29:24.717060 disk-uuid[543]: Primary Header is updated.
Feb 13 19:29:24.717060 disk-uuid[543]: Secondary Entries is updated.
Feb 13 19:29:24.717060 disk-uuid[543]: Secondary Header is updated.
Feb 13 19:29:24.725784 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:29:24.771681 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:29:24.771729 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:29:24.813152 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:29:25.779133 disk-uuid[545]: The operation has completed successfully.
Feb 13 19:29:25.787705 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:29:25.862624 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:29:25.862808 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:29:25.927818 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:29:25.962734 sh[567]: Success
Feb 13 19:29:25.987833 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:29:26.085478 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:29:26.112673 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:29:26.122321 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:29:26.168557 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b
Feb 13 19:29:26.168675 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:29:26.186095 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:29:26.186189 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:29:26.192929 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:29:26.220571 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:29:26.224807 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:29:26.225918 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:29:26.230870 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:29:26.245997 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:29:26.313057 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:29:26.313187 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:29:26.325356 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:29:26.345814 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:29:26.345943 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:29:26.366377 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:29:26.382794 kernel: BTRFS info (device sda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:29:26.393945 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:29:26.411897 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:29:26.442222 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:29:26.473900 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:29:26.583663 systemd-networkd[750]: lo: Link UP
Feb 13 19:29:26.583677 systemd-networkd[750]: lo: Gained carrier
Feb 13 19:29:26.590924 systemd-networkd[750]: Enumeration completed
Feb 13 19:29:26.622478 ignition[723]: Ignition 2.20.0
Feb 13 19:29:26.591560 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:29:26.622488 ignition[723]: Stage: fetch-offline
Feb 13 19:29:26.591569 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:29:26.622561 ignition[723]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:26.593095 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:29:26.622573 ignition[723]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:29:26.595943 systemd-networkd[750]: eth0: Link UP
Feb 13 19:29:26.622712 ignition[723]: parsed url from cmdline: ""
Feb 13 19:29:26.595951 systemd-networkd[750]: eth0: Gained carrier
Feb 13 19:29:26.622719 ignition[723]: no config URL provided
Feb 13 19:29:26.595967 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:29:26.622729 ignition[723]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:29:26.610630 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.10/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 19:29:26.622744 ignition[723]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:29:26.619979 systemd[1]: Reached target network.target - Network.
Feb 13 19:29:26.622753 ignition[723]: failed to fetch config: resource requires networking
Feb 13 19:29:26.629167 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:29:26.623194 ignition[723]: Ignition finished successfully
Feb 13 19:29:26.652837 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 19:29:26.704925 ignition[760]: Ignition 2.20.0
Feb 13 19:29:26.716117 unknown[760]: fetched base config from "system"
Feb 13 19:29:26.704938 ignition[760]: Stage: fetch
Feb 13 19:29:26.716131 unknown[760]: fetched base config from "system"
Feb 13 19:29:26.705151 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:26.716141 unknown[760]: fetched user config from "gcp"
Feb 13 19:29:26.705164 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:29:26.718604 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 19:29:26.705279 ignition[760]: parsed url from cmdline: ""
Feb 13 19:29:26.735837 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:29:26.705286 ignition[760]: no config URL provided
Feb 13 19:29:26.772670 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:29:26.705296 ignition[760]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:29:26.803757 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:29:26.705311 ignition[760]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:29:26.831214 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:29:26.705340 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 19:29:26.848900 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:29:26.709335 ignition[760]: GET result: OK
Feb 13 19:29:26.868702 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:29:26.709393 ignition[760]: parsing config with SHA512: 9f1ba8f578b0c9e82f5c03f2608b8135759c2a39edbae8bed1021d2160678449a633914ea04401d2b4bdd797279a9eba136d3f9c8cd15f201dc261f6551b208e
Feb 13 19:29:26.886706 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:29:26.716735 ignition[760]: fetch: fetch complete
Feb 13 19:29:26.904716 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:29:26.716742 ignition[760]: fetch: fetch passed
Feb 13 19:29:26.904850 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:29:26.716808 ignition[760]: Ignition finished successfully
Feb 13 19:29:26.933721 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:29:26.770160 ignition[766]: Ignition 2.20.0
Feb 13 19:29:26.770169 ignition[766]: Stage: kargs
Feb 13 19:29:26.770389 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:26.770407 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:29:26.771403 ignition[766]: kargs: kargs passed
Feb 13 19:29:26.771457 ignition[766]: Ignition finished successfully
Feb 13 19:29:26.828524 ignition[771]: Ignition 2.20.0
Feb 13 19:29:26.828539 ignition[771]: Stage: disks
Feb 13 19:29:26.828772 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:26.828785 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:29:26.829884 ignition[771]: disks: disks passed
Feb 13 19:29:26.829943 ignition[771]: Ignition finished successfully
Feb 13 19:29:26.995243 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 19:29:27.194692 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:29:27.227678 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:29:27.342531 kernel: EXT4-fs (sda9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none.
Feb 13 19:29:27.343631 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:29:27.344460 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:29:27.377633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:29:27.385630 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:29:27.401237 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:29:27.464859 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (788)
Feb 13 19:29:27.464910 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:29:27.464927 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:29:27.464943 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:29:27.401328 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:29:27.503851 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:29:27.503889 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:29:27.401371 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:29:27.456405 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:29:27.487380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:29:27.516717 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:29:27.650671 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:29:27.661585 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:29:27.671861 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:29:27.681649 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:29:27.812799 systemd-networkd[750]: eth0: Gained IPv6LL
Feb 13 19:29:27.821211 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:29:27.828795 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:29:27.864543 kernel: BTRFS info (device sda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:29:27.873815 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:29:27.914236 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:29:27.922810 ignition[900]: INFO : Ignition 2.20.0
Feb 13 19:29:27.922810 ignition[900]: INFO : Stage: mount
Feb 13 19:29:27.922810 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:27.922810 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:29:27.922810 ignition[900]: INFO : mount: mount passed
Feb 13 19:29:27.922810 ignition[900]: INFO : Ignition finished successfully
Feb 13 19:29:27.933078 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:29:27.956700 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:29:28.159771 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:29:28.165885 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:29:28.214522 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (912)
Feb 13 19:29:28.231989 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:29:28.232080 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:29:28.232106 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:29:28.253972 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:29:28.254059 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 19:29:28.257377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:29:28.293024 ignition[929]: INFO : Ignition 2.20.0
Feb 13 19:29:28.293024 ignition[929]: INFO : Stage: files
Feb 13 19:29:28.307652 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:28.307652 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:29:28.307652 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:29:28.307652 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:29:28.307652 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:29:28.307652 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:29:28.307652 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:29:28.307652 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:29:28.307652 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 19:29:28.307652 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Feb 13 19:29:28.302782 unknown[929]: wrote ssh authorized keys file for user: core
Feb 13 19:29:32.035086 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:29:33.222736 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 19:29:33.222736 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:29:33.222736 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 19:29:33.520571 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 19:29:33.688453 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:29:33.703651 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:29:33.945761 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 19:29:34.385618 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:29:34.385618 ignition[929]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 19:29:34.424688 ignition[929]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:29:34.424688 ignition[929]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:29:34.424688 ignition[929]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 19:29:34.424688 ignition[929]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:29:34.424688 ignition[929]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:29:34.424688 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:29:34.424688 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:29:34.424688 ignition[929]: INFO : files: files passed
Feb 13 19:29:34.424688 ignition[929]: INFO : Ignition finished successfully
Feb 13 19:29:34.389830 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:29:34.411806 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:29:34.458730 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:29:34.481488 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:29:34.638696 initrd-setup-root-after-ignition[956]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:29:34.638696 initrd-setup-root-after-ignition[956]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:29:34.487813 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:29:34.688691 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:29:34.518257 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:29:34.542153 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:29:34.572803 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:29:34.655898 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:29:34.656029 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:29:34.678421 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:29:34.698796 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:29:34.722905 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:29:34.729863 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:29:34.838098 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:29:34.843741 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:29:34.888038 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:29:34.888352 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:29:34.919932 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:29:34.937931 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:29:34.938150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:29:34.964902 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:29:34.985846 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:29:35.003885 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:29:35.022843 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:29:35.043873 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:29:35.054979 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:29:35.065027 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:29:35.100882 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:29:35.121919 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:29:35.141986 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:29:35.159908 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:29:35.160201 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:29:35.193949 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:29:35.203854 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:29:35.224883 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:29:35.225060 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:29:35.246801 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:29:35.247030 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:29:35.276915 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:29:35.277200 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:29:35.296000 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:29:35.296222 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:29:35.362674 ignition[981]: INFO : Ignition 2.20.0
Feb 13 19:29:35.362674 ignition[981]: INFO : Stage: umount
Feb 13 19:29:35.362674 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:29:35.362674 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 19:29:35.362674 ignition[981]: INFO : umount: umount passed
Feb 13 19:29:35.362674 ignition[981]: INFO : Ignition finished successfully
Feb 13 19:29:35.323787 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:29:35.351797 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:29:35.377775 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:29:35.378011 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:29:35.384994 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:29:35.385220 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:29:35.439576 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:29:35.440942 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:29:35.441068 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:29:35.456237 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:29:35.456372 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:29:35.477133 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:29:35.477261 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:29:35.498040 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:29:35.498109 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:29:35.505936 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:29:35.506007 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:29:35.522933 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 19:29:35.523008 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 19:29:35.539958 systemd[1]: Stopped target network.target - Network.
Feb 13 19:29:35.554888 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:29:35.554982 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:29:35.587779 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:29:35.603664 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:29:35.607626 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:29:35.622670 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:29:35.637660 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:29:35.655748 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:29:35.655832 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:29:35.673737 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:29:35.673811 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:29:35.691738 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:29:35.691847 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:29:35.709814 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:29:35.709913 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:29:35.727770 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:29:35.727877 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:29:35.746970 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:29:35.755999 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:29:35.782134 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:29:35.782311 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:29:35.805149 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:29:35.805410 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:29:35.805549 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:29:35.813414 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:29:35.814866 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:29:35.814923 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:29:35.833632 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:29:35.854632 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:29:35.854760 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:29:35.865756 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:29:35.865844 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:29:35.888935 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:29:35.889010 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:29:35.906766 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:29:36.358688 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:29:35.906874 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:29:35.927051 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:29:35.947064 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:29:35.947181 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:29:35.947720 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:29:35.947882 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:29:35.972822 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:29:35.972917 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:29:35.989789 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:29:35.989869 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:29:35.999706 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:29:35.999812 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:29:36.036859 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:29:36.037117 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:29:36.066914 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:29:36.067004 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:29:36.102684 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:29:36.124640 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:29:36.124781 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:29:36.144914 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:29:36.144986 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:29:36.166760 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:29:36.166855 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:29:36.185748 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:29:36.185849 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:29:36.208064 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 19:29:36.208157 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:29:36.208749 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:29:36.208861 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:29:36.228015 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:29:36.228136 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:29:36.247897 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:29:36.263784 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:29:36.304289 systemd[1]: Switching root.
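Unit names like run-credentials-systemd\x2dresolved.service.mount above use systemd's unit-name escaping, in which a literal "-" inside a path component is written as \x2d. A small decoder, assuming only \xNN escapes appear:

    import re

    def unescape_unit(name: str) -> str:
        # Reverse systemd's \xNN escaping (e.g. \x2d -> "-").
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"run-credentials-systemd\x2dresolved.service.mount"))
    # run-credentials-systemd-resolved.service.mount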
Feb 13 19:29:36.709708 systemd-journald[184]: Journal stopped
Feb 13 19:29:39.093165 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:29:39.093210 kernel: SELinux: policy capability open_perms=1
Feb 13 19:29:39.093225 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:29:39.093236 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:29:39.093249 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:29:39.093266 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:29:39.093285 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:29:39.093305 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:29:39.093328 kernel: audit: type=1403 audit(1739474976.878:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:29:39.093351 systemd[1]: Successfully loaded SELinux policy in 94.859ms.
Feb 13 19:29:39.093373 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.851ms.
Feb 13 19:29:39.093395 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:29:39.093416 systemd[1]: Detected virtualization google.
Feb 13 19:29:39.093435 systemd[1]: Detected architecture x86-64.
Feb 13 19:29:39.093459 systemd[1]: Detected first boot.
Feb 13 19:29:39.093479 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:29:39.093517 zram_generator::config[1024]: No configuration found.
Feb 13 19:29:39.093540 kernel: Guest personality initialized and is inactive
Feb 13 19:29:39.093558 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 19:29:39.093581 kernel: Initialized host personality
Feb 13 19:29:39.093598 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:29:39.093617 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:29:39.093636 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:29:39.093655 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:29:39.093673 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:29:39.093693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:29:39.093713 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:29:39.093733 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:29:39.093757 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:29:39.093778 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:29:39.093800 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:29:39.093821 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:29:39.093842 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:29:39.093864 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:29:39.093886 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
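The systemd 256.8 banner above encodes compile-time options as +FEATURE/-FEATURE tokens. A quick way to split them into enabled and disabled sets (the token list below is an excerpt of the one in the log):

    flags = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
             "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +TPM2").split()
    enabled  = {f[1:] for f in flags if f.startswith("+")}
    disabled = {f[1:] for f in flags if f.startswith("-")}
    print("SELINUX" in enabled, "APPARMOR" in disabled)   # True True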
Feb 13 19:29:39.093912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:29:39.093932 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:29:39.093954 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:29:39.093973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:29:39.093994 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:29:39.094021 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:29:39.094050 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:29:39.094074 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:29:39.094101 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:29:39.094124 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:29:39.094147 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:29:39.094170 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:29:39.094193 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:29:39.094216 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:29:39.094237 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:29:39.094257 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:29:39.094286 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:29:39.094309 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:29:39.094332 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:29:39.094357 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:29:39.094384 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:29:39.094404 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:29:39.094427 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:29:39.094451 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:29:39.094475 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:29:39.094585 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:29:39.094614 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:29:39.094638 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:29:39.094667 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:29:39.094693 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:29:39.094715 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:29:39.094739 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:29:39.094763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:29:39.094786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:29:39.094809 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:29:39.094832 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:29:39.094855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:29:39.094885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:29:39.094909 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:29:39.094932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:29:39.094957 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:29:39.094980 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:29:39.095002 kernel: fuse: init (API version 7.39)
Feb 13 19:29:39.095025 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:29:39.095063 kernel: loop: module loaded
Feb 13 19:29:39.095086 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:29:39.095110 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:29:39.095131 kernel: ACPI: bus type drm_connector registered
Feb 13 19:29:39.095156 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:29:39.095180 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:29:39.095202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:29:39.095269 systemd-journald[1113]: Collecting audit messages is disabled.
Feb 13 19:29:39.095322 systemd-journald[1113]: Journal started
Feb 13 19:29:39.095368 systemd-journald[1113]: Runtime Journal (/run/log/journal/4a3276910d1c4c79b7afa6f6528f875a) is 8M, max 148.6M, 140.6M free.
Feb 13 19:29:37.879680 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:29:37.893338 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 19:29:37.893950 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:29:39.114572 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:29:39.126565 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:29:39.160776 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:29:39.187532 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:29:39.212957 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:29:39.213059 systemd[1]: Stopped verity-setup.service.
Feb 13 19:29:39.250329 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:29:39.250443 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:29:39.263324 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:29:39.273948 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:29:39.284903 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:29:39.294934 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:29:39.304914 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:29:39.315920 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:29:39.326118 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:29:39.338190 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:29:39.350105 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:29:39.350407 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:29:39.362084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:29:39.362380 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:29:39.374028 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:29:39.374318 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:29:39.385053 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:29:39.385340 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:29:39.397033 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:29:39.397315 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:29:39.406962 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:29:39.407243 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:29:39.417050 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:29:39.427085 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:29:39.439072 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:29:39.451087 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 19:29:39.463061 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:29:39.487530 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:29:39.503648 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:29:39.528683 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:29:39.539673 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:29:39.539920 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:29:39.551947 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:29:39.568736 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:29:39.587539 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:29:39.598859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:29:39.605870 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:29:39.621376 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:29:39.632810 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:29:39.651751 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:29:39.661695 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:29:39.667327 systemd-journald[1113]: Time spent on flushing to /var/log/journal/4a3276910d1c4c79b7afa6f6528f875a is 58.431ms for 951 entries.
Feb 13 19:29:39.667327 systemd-journald[1113]: System Journal (/var/log/journal/4a3276910d1c4c79b7afa6f6528f875a) is 8M, max 584.8M, 576.8M free.
Feb 13 19:29:39.771856 systemd-journald[1113]: Received client request to flush runtime journal.
Feb 13 19:29:39.677324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:29:39.702783 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:29:39.725792 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:29:39.744902 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:29:39.761437 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:29:39.774099 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:29:39.784914 kernel: loop0: detected capacity change from 0 to 218376
Feb 13 19:29:39.795473 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:29:39.808318 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:29:39.821394 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:29:39.833295 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:29:39.860190 udevadm[1152]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:29:39.861925 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:29:39.861525 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:29:39.862780 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Feb 13 19:29:39.862813 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Feb 13 19:29:39.880729 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 19:29:39.892897 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:29:39.902542 kernel: loop1: detected capacity change from 0 to 52152
Feb 13 19:29:39.925446 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:29:39.939055 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:29:39.943891 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 19:29:39.986596 kernel: loop2: detected capacity change from 0 to 138176
Feb 13 19:29:40.020797 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:29:40.050893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
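The flush report above works out to roughly 61 µs per journal entry, a quick sanity check on journald's write throughput using the figures from the log:

    flush_ms, entries = 58.431, 951
    print(f"{flush_ms / entries * 1000:.1f} us per entry")   # 61.4 us per entry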
Feb 13 19:29:40.112524 kernel: loop3: detected capacity change from 0 to 147912
Feb 13 19:29:40.126105 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Feb 13 19:29:40.126138 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Feb 13 19:29:40.136988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:29:40.221571 kernel: loop4: detected capacity change from 0 to 218376
Feb 13 19:29:40.262558 kernel: loop5: detected capacity change from 0 to 52152
Feb 13 19:29:40.298976 kernel: loop6: detected capacity change from 0 to 138176
Feb 13 19:29:40.368571 kernel: loop7: detected capacity change from 0 to 147912
Feb 13 19:29:40.433120 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Feb 13 19:29:40.439412 (sd-merge)[1176]: Merged extensions into '/usr'.
Feb 13 19:29:40.451958 systemd[1]: Reload requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:29:40.451986 systemd[1]: Reloading...
Feb 13 19:29:40.577861 zram_generator::config[1200]: No configuration found.
Feb 13 19:29:40.873086 ldconfig[1144]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:29:40.909406 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:29:41.058452 systemd[1]: Reloading finished in 605 ms.
Feb 13 19:29:41.079867 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:29:41.090182 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:29:41.115840 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:29:41.135075 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:29:41.164739 systemd[1]: Reload requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:29:41.164760 systemd[1]: Reloading...
Feb 13 19:29:41.186324 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:29:41.187220 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:29:41.189045 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:29:41.189657 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Feb 13 19:29:41.189785 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Feb 13 19:29:41.196566 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:29:41.196620 systemd-tmpfiles[1245]: Skipping /boot
Feb 13 19:29:41.219046 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:29:41.219069 systemd-tmpfiles[1245]: Skipping /boot
Feb 13 19:29:41.290536 zram_generator::config[1274]: No configuration found.
Feb 13 19:29:41.432712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:29:41.526082 systemd[1]: Reloading finished in 360 ms.
Feb 13 19:29:41.542984 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
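loop0–loop3 and loop4–loop7 report pairwise identical capacities (the kernel counts in 512-byte sectors), which suggests the four sysext images named by sd-merge are each attached more than once during scanning and merging; that pairing is an inference from the numbers, not something the log states. Grouping them makes it visible:

    loops = {"loop0": 218376, "loop1": 52152, "loop2": 138176, "loop3": 147912,
             "loop4": 218376, "loop5": 52152, "loop6": 138176, "loop7": 147912}

    by_size: dict[int, list[str]] = {}
    for dev, sectors in loops.items():
        by_size.setdefault(sectors, []).append(dev)

    for sectors, devs in sorted(by_size.items()):
        # convert 512-byte sectors to MiB for readability
        print(f"{sectors * 512 / 2**20:6.1f} MiB  {', '.join(devs)}")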
Feb 13 19:29:41.575203 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:29:41.600889 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:29:41.621647 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:29:41.635906 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:29:41.657689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:29:41.677619 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:29:41.699539 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:29:41.719297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:29:41.720486 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:29:41.730013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:29:41.736123 augenrules[1342]: No rules
Feb 13 19:29:41.745915 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:29:41.764394 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:29:41.773794 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:29:41.774050 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:29:41.782284 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Feb 13 19:29:41.784322 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:29:41.793640 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:29:41.800839 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:29:41.801178 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:29:41.812545 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:29:41.824541 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:29:41.837391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:29:41.837700 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:29:41.849114 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:29:41.861675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:29:41.862084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:29:41.874168 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:29:41.885696 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:29:41.886001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:29:41.917132 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:29:41.956421 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:29:41.967072 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:29:41.975768 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:29:41.984905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:29:41.995781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:29:42.011718 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:29:42.029768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:29:42.050738 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:29:42.085805 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 19:29:42.094778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:29:42.094866 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:29:42.105715 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:29:42.107697 augenrules[1384]: /sbin/augenrules: No change
Feb 13 19:29:42.116681 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:29:42.133709 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:29:42.141664 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:29:42.141727 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:29:42.143248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:29:42.145186 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:29:42.157127 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:29:42.157603 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:29:42.165005 augenrules[1410]: No rules
Feb 13 19:29:42.168120 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:29:42.168538 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:29:42.179101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:29:42.180259 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:29:42.181257 systemd-resolved[1327]: Positive Trust Anchors:
Feb 13 19:29:42.181281 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:29:42.181351 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:29:42.192230 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:29:42.193593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:29:42.196355 systemd-resolved[1327]: Defaulting to hostname 'linux'.
Feb 13 19:29:42.203917 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:29:42.221167 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:29:42.254760 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Feb 13 19:29:42.265134 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Feb 13 19:29:42.266606 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:29:42.273155 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Feb 13 19:29:42.295785 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:29:42.272921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:29:42.292668 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Feb 13 19:29:42.302705 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:29:42.302814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:29:42.305827 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 19:29:42.325129 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Feb 13 19:29:42.371528 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1362)
Feb 13 19:29:42.379772 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:29:42.421640 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Feb 13 19:29:42.417122 systemd-networkd[1402]: lo: Link UP
Feb 13 19:29:42.417139 systemd-networkd[1402]: lo: Gained carrier
Feb 13 19:29:42.422218 systemd-networkd[1402]: Enumeration completed
Feb 13 19:29:42.422384 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:29:42.428401 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:29:42.428417 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:29:42.431841 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
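The positive trust anchor a few lines up is the DNS root zone's KSK-2017 DS record. Its four data fields are key tag, algorithm (8 = RSASHA256), digest type (2 = SHA-256), and the digest itself:

    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, cls, rtype, key_tag, alg, digest_type, digest = record.split()
    assert (cls, rtype) == ("IN", "DS")
    print(f"key tag {key_tag}, algorithm {alg} (RSASHA256), "
          f"digest type {digest_type} (SHA-256), digest {digest[:12]}...")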
Feb 13 19:29:42.431891 systemd-networkd[1402]: eth0: Link UP
Feb 13 19:29:42.431898 systemd-networkd[1402]: eth0: Gained carrier
Feb 13 19:29:42.431917 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:29:42.434570 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 19:29:42.436798 systemd[1]: Reached target network.target - Network.
Feb 13 19:29:42.442755 systemd-networkd[1402]: eth0: DHCPv4 address 10.128.0.10/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 19:29:42.455783 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 19:29:42.477399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:29:42.489179 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Feb 13 19:29:42.537524 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:29:42.548057 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:29:42.565153 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 19:29:42.584957 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:29:42.606550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:29:42.620766 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 19:29:42.635418 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:29:42.647155 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:29:42.655754 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:29:42.674847 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:29:42.708917 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:29:42.709992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:29:42.715760 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:29:42.732140 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:29:42.755756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:29:42.767141 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:29:42.779518 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:29:42.789871 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:29:42.800789 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:29:42.811931 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:29:42.821857 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:29:42.832695 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:29:42.843691 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
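Note the /32 prefix on the DHCPv4 lease above: GCE hands the address out as a single-host network, so the gateway is off-link and reachable only via an explicit host route. The standard library confirms the relationship:

    import ipaddress

    iface = ipaddress.ip_interface("10.128.0.10/32")
    gateway = ipaddress.ip_address("10.128.0.1")
    print(iface.network)             # 10.128.0.10/32 -- a one-address network
    print(gateway in iface.network)  # False: the gateway is not on-link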
Feb 13 19:29:42.843758 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:29:42.852653 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:29:42.863289 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:29:42.875422 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:29:42.886304 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 19:29:42.897908 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 19:29:42.909692 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 19:29:42.930485 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:29:42.941366 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 19:29:42.953678 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:29:42.963836 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:29:42.973694 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:29:42.982743 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:29:42.982794 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:29:42.994689 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:29:43.009755 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 19:29:43.031784 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:29:43.048170 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:29:43.076989 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:29:43.086000 jq[1469]: false
Feb 13 19:29:43.086672 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:29:43.095748 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:29:43.114728 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 19:29:43.130547 coreos-metadata[1467]: Feb 13 19:29:43.129 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Feb 13 19:29:43.132652 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:29:43.139824 coreos-metadata[1467]: Feb 13 19:29:43.138 INFO Fetch successful
Feb 13 19:29:43.139824 coreos-metadata[1467]: Feb 13 19:29:43.138 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Feb 13 19:29:43.143008 coreos-metadata[1467]: Feb 13 19:29:43.142 INFO Fetch successful
Feb 13 19:29:43.143008 coreos-metadata[1467]: Feb 13 19:29:43.142 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Feb 13 19:29:43.145806 coreos-metadata[1467]: Feb 13 19:29:43.145 INFO Fetch successful
Feb 13 19:29:43.145806 coreos-metadata[1467]: Feb 13 19:29:43.145 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Feb 13 19:29:43.150519 coreos-metadata[1467]: Feb 13 19:29:43.146 INFO Fetch successful
Feb 13 19:29:43.152761 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
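coreos-metadata is walking the GCE metadata server at the link-local address 169.254.169.254; such queries must carry the documented Metadata-Flavor: Google header. A minimal stand-in for the agent's fetch (illustrative sketch, not the agent's actual code):

    import urllib.request

    BASE = "http://169.254.169.254/computeMetadata/v1"

    def gce_metadata(path: str) -> str:
        # GCE rejects metadata requests that lack this header.
        req = urllib.request.Request(f"{BASE}/{path}",
                                     headers={"Metadata-Flavor": "Google"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    # Only resolvable from inside a GCE instance:
    # print(gce_metadata("instance/hostname"))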
Feb 13 19:29:43.159195 extend-filesystems[1470]: Found loop4
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found loop5
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found loop6
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found loop7
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found sda
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found sda1
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found sda2
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found sda3
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found usr
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found sda4
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found sda6
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found sda7
Feb 13 19:29:43.172721 extend-filesystems[1470]: Found sda9
Feb 13 19:29:43.172721 extend-filesystems[1470]: Checking size of /dev/sda9
Feb 13 19:29:43.325815 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Feb 13 19:29:43.325889 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Feb 13 19:29:43.325926 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1380)
Feb 13 19:29:43.170732 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: ----------------------------------------------------
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: ntp-4 is maintained by Network Time Foundation,
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: corporation. Support and training for ntp-4 are
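The ext4 resize above grows the root filesystem from 1617920 to 2538491 blocks; the resize2fs output further down confirms 4k blocks, so the growth in bytes works out to:

    BLOCK = 4096                        # "(4k) blocks" per the resize2fs output
    before, after = 1_617_920, 2_538_491
    gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{gib(before):.2f} GiB -> {gib(after):.2f} GiB "
          f"(+{gib(after - before):.2f} GiB)")   # 6.17 GiB -> 9.68 GiB (+3.51 GiB)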
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: available at https://www.nwtime.org/support
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: ----------------------------------------------------
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: proto: precision = 0.123 usec (-23)
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: basedate set to 2025-02-01
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: gps base set to 2025-02-02 (week 2352)
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: Listen normally on 3 eth0 10.128.0.10:123
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: Listen normally on 4 lo [::1]:123
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: bind(21) AF_INET6 fe80::4001:aff:fe80:a%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:a%2#123
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: failed to init interface for address fe80::4001:aff:fe80:a%2
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: Listening on routing socket on fd #21 for interface updates
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:29:43.326421 ntpd[1474]: 13 Feb 19:29:43 ntpd[1474]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 19:29:43.327810 extend-filesystems[1470]: Resized partition /dev/sda9
Feb 13 19:29:43.206207 dbus-daemon[1468]: [system] SELinux support is enabled
Feb 13 19:29:43.193793 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:29:43.351398 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:29:43.351398 extend-filesystems[1492]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Feb 13 19:29:43.351398 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 2
Feb 13 19:29:43.351398 extend-filesystems[1492]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Feb 13 19:29:43.211996 dbus-daemon[1468]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1402 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 19:29:43.232492 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Feb 13 19:29:43.414202 extend-filesystems[1470]: Resized filesystem in /dev/sda9
Feb 13 19:29:43.213789 ntpd[1474]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting
Feb 13 19:29:43.234609 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:29:43.213820 ntpd[1474]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 19:29:43.240384 systemd[1]: Starting update-engine.service - Update Engine...
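ntpd's `precision = 0.123 usec (-23)` line reports the measured clock precision together with its nearest power-of-two exponent: 2^-23 s is about 0.119 µs, so the measured 0.123 µs rounds to -23. Verifying the rounding:

    import math

    measured_s = 0.123e-6
    exponent = round(math.log2(measured_s))
    print(exponent, f"{2**exponent * 1e6:.3f} usec")   # -23 0.119 usec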
Feb 13 19:29:43.443291 update_engine[1495]: I20250213 19:29:43.340924 1495 main.cc:92] Flatcar Update Engine starting Feb 13 19:29:43.443291 update_engine[1495]: I20250213 19:29:43.359342 1495 update_check_scheduler.cc:74] Next update check in 2m38s Feb 13 19:29:43.213834 ntpd[1474]: ---------------------------------------------------- Feb 13 19:29:43.285708 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:29:43.443959 jq[1497]: true Feb 13 19:29:43.213848 ntpd[1474]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:29:43.312230 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:29:43.213863 ntpd[1474]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:29:43.349088 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:29:43.213877 ntpd[1474]: corporation. Support and training for ntp-4 are Feb 13 19:29:43.350617 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:29:43.213890 ntpd[1474]: available at https://www.nwtime.org/support Feb 13 19:29:43.351114 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:29:43.213904 ntpd[1474]: ---------------------------------------------------- Feb 13 19:29:43.351425 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:29:43.216879 ntpd[1474]: proto: precision = 0.123 usec (-23) Feb 13 19:29:43.362386 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:29:43.218212 ntpd[1474]: basedate set to 2025-02-01 Feb 13 19:29:43.363799 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:29:43.218234 ntpd[1474]: gps base set to 2025-02-02 (week 2352) Feb 13 19:29:43.403103 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:29:43.226791 ntpd[1474]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:29:43.403421 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:29:43.226853 ntpd[1474]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:29:43.227080 ntpd[1474]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:29:43.227131 ntpd[1474]: Listen normally on 3 eth0 10.128.0.10:123 Feb 13 19:29:43.227187 ntpd[1474]: Listen normally on 4 lo [::1]:123 Feb 13 19:29:43.227243 ntpd[1474]: bind(21) AF_INET6 fe80::4001:aff:fe80:a%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:29:43.227273 ntpd[1474]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:a%2#123 Feb 13 19:29:43.227295 ntpd[1474]: failed to init interface for address fe80::4001:aff:fe80:a%2 Feb 13 19:29:43.227336 ntpd[1474]: Listening on routing socket on fd #21 for interface updates Feb 13 19:29:43.233798 ntpd[1474]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:29:43.233840 ntpd[1474]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:29:43.453621 jq[1505]: true Feb 13 19:29:43.490399 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:29:43.512703 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
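
[Annotation] ntpd's repeated "bind(21) AF_INET6 fe80::4001:aff:fe80:a%2#123 ... Cannot assign requested address" above is the usual race with IPv6 duplicate-address detection: the link-local address on eth0 is still tentative when ntpd starts, so the bind fails until DAD completes (it succeeds later in this log, at 19:29:47, "Listen normally on 7 eth0"). A minimal sketch of the failure mode, assuming an eth0 interface and using an unprivileged port instead of 123:

```python
# Sketch of the EADDRNOTAVAIL error ntpd hits while fe80::... is tentative.
# Binding an IPv6 link-local address needs the interface scope id (the %2
# suffix in the log) and fails until the address is actually usable.
import errno
import socket

addr, port = "fe80::4001:aff:fe80:a", 12345   # high port: no root needed
scope = socket.if_nametoindex("eth0")

s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
try:
    s.bind((addr, port, 0, scope))
    print("bound (DAD finished, address is usable)")
except OSError as e:
    if e.errno == errno.EADDRNOTAVAIL:
        print("Cannot assign requested address -- same error ntpd logs")
    else:
        raise
finally:
    s.close()
```
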
Feb 13 19:29:43.514220 dbus-daemon[1468]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:29:43.536788 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 19:29:43.536831 systemd-logind[1487]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 13 19:29:43.536862 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:29:43.545387 systemd-logind[1487]: New seat seat0. Feb 13 19:29:43.574121 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:29:43.604487 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:29:43.620991 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:29:43.622827 tar[1504]: linux-amd64/LICENSE Feb 13 19:29:43.622827 tar[1504]: linux-amd64/helm Feb 13 19:29:43.630550 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:29:43.630827 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:29:43.631083 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:29:43.652873 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:29:43.663664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:29:43.663941 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:29:43.668588 bash[1536]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:29:43.686895 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:29:43.712231 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:29:43.726876 systemd[1]: Starting sshkeys.service... Feb 13 19:29:43.819953 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:29:43.843785 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:29:43.844099 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:29:43.999073 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Feb 13 19:29:44.000378 coreos-metadata[1542]: Feb 13 19:29:43.999 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 13 19:29:44.005318 coreos-metadata[1542]: Feb 13 19:29:44.005 INFO Fetch failed with 404: resource not found Feb 13 19:29:44.005318 coreos-metadata[1542]: Feb 13 19:29:44.005 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 13 19:29:44.006062 dbus-daemon[1468]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:29:44.008042 dbus-daemon[1468]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1537 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:29:44.008467 coreos-metadata[1542]: Feb 13 19:29:44.008 INFO Fetch successful Feb 13 19:29:44.008467 coreos-metadata[1542]: Feb 13 19:29:44.008 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 13 19:29:44.022941 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 19:29:44.023709 coreos-metadata[1542]: Feb 13 19:29:44.023 INFO Fetch failed with 404: resource not found Feb 13 19:29:44.023709 coreos-metadata[1542]: Feb 13 19:29:44.023 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 13 19:29:44.026239 coreos-metadata[1542]: Feb 13 19:29:44.026 INFO Fetch failed with 404: resource not found Feb 13 19:29:44.026239 coreos-metadata[1542]: Feb 13 19:29:44.026 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 13 19:29:44.027342 coreos-metadata[1542]: Feb 13 19:29:44.026 INFO Fetch successful Feb 13 19:29:44.036682 unknown[1542]: wrote ssh authorized keys file for user: core Feb 13 19:29:44.058842 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:29:44.066934 locksmithd[1538]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:29:44.083991 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:29:44.100904 systemd[1]: Started sshd@0-10.128.0.10:22-147.75.109.163:34858.service - OpenSSH per-connection server daemon (147.75.109.163:34858). Feb 13 19:29:44.121998 polkitd[1559]: Started polkitd version 121 Feb 13 19:29:44.125189 update-ssh-keys[1561]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:29:44.124435 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:29:44.124795 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:29:44.137586 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:29:44.145814 polkitd[1559]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:29:44.145907 polkitd[1559]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:29:44.148979 polkitd[1559]: Finished loading, compiling and executing 2 rules Feb 13 19:29:44.158909 dbus-daemon[1468]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:29:44.160594 polkitd[1559]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:29:44.163054 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:29:44.174653 systemd[1]: Finished sshkeys.service. Feb 13 19:29:44.175224 systemd[1]: Started polkit.service - Authorization Manager. 
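
[Annotation] The coreos-metadata fetches above walk the GCE metadata endpoints in order (instance sshKeys, instance ssh-keys, the block-project-ssh-keys gate, project sshKeys, project ssh-keys), treating HTTP 404 as "attribute not set" rather than a failure. A hedged sketch of that fallback pattern; the base URL and the required Metadata-Flavor header are GCE's documented API, the block-project gate is elided, and error handling is simplified:

```python
# Query the GCE metadata server the way the ssh-key fetch above does:
# try each attribute in turn and treat HTTP 404 as "not set".
import urllib.error
import urllib.request

BASE = "http://169.254.169.254/computeMetadata/v1"
PATHS = [
    "/instance/attributes/sshKeys",
    "/instance/attributes/ssh-keys",
    "/project/attributes/sshKeys",
    "/project/attributes/ssh-keys",
]

def fetch(path: str) -> str | None:
    req = urllib.request.Request(BASE + path,
                                 headers={"Metadata-Flavor": "Google"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()
    except urllib.error.HTTPError as e:
        if e.code == 404:          # attribute not present; keep going
            return None
        raise

keys = [v for p in PATHS if (v := fetch(p)) is not None]
print(f"collected {len(keys)} ssh-key attribute(s)")
```
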
Feb 13 19:29:44.215174 ntpd[1474]: bind(24) AF_INET6 fe80::4001:aff:fe80:a%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:29:44.215233 ntpd[1474]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:a%2#123 Feb 13 19:29:44.216628 ntpd[1474]: 13 Feb 19:29:44 ntpd[1474]: bind(24) AF_INET6 fe80::4001:aff:fe80:a%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:29:44.216628 ntpd[1474]: 13 Feb 19:29:44 ntpd[1474]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:a%2#123 Feb 13 19:29:44.216628 ntpd[1474]: 13 Feb 19:29:44 ntpd[1474]: failed to init interface for address fe80::4001:aff:fe80:a%2 Feb 13 19:29:44.215256 ntpd[1474]: failed to init interface for address fe80::4001:aff:fe80:a%2 Feb 13 19:29:44.225521 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:29:44.229920 systemd-hostnamed[1537]: Hostname set to (transient) Feb 13 19:29:44.232234 systemd-resolved[1327]: System hostname changed to 'ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal'. Feb 13 19:29:44.249020 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:29:44.268940 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:29:44.278066 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:29:44.284876 containerd[1511]: time="2025-02-13T19:29:44.284773357Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:29:44.323726 systemd-networkd[1402]: eth0: Gained IPv6LL Feb 13 19:29:44.331078 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:29:44.343470 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:29:44.353622 containerd[1511]: time="2025-02-13T19:29:44.353567453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:44.362941 containerd[1511]: time="2025-02-13T19:29:44.362734579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:44.362941 containerd[1511]: time="2025-02-13T19:29:44.362804554Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:29:44.362941 containerd[1511]: time="2025-02-13T19:29:44.362832676Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:29:44.363464 containerd[1511]: time="2025-02-13T19:29:44.363106554Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:29:44.363464 containerd[1511]: time="2025-02-13T19:29:44.363160545Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:44.363464 containerd[1511]: time="2025-02-13T19:29:44.363313848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:44.363464 containerd[1511]: time="2025-02-13T19:29:44.363337416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:29:44.365645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:29:44.366074 containerd[1511]: time="2025-02-13T19:29:44.366015548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:44.366074 containerd[1511]: time="2025-02-13T19:29:44.366068787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:44.366203 containerd[1511]: time="2025-02-13T19:29:44.366095558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:44.366203 containerd[1511]: time="2025-02-13T19:29:44.366113875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:44.366645 containerd[1511]: time="2025-02-13T19:29:44.366321624Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:44.367077 containerd[1511]: time="2025-02-13T19:29:44.366766722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:29:44.367441 containerd[1511]: time="2025-02-13T19:29:44.367294663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:29:44.367545 containerd[1511]: time="2025-02-13T19:29:44.367444392Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:29:44.367831 containerd[1511]: time="2025-02-13T19:29:44.367667344Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:29:44.367831 containerd[1511]: time="2025-02-13T19:29:44.367761472Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:29:44.374029 containerd[1511]: time="2025-02-13T19:29:44.373999101Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:29:44.374528 containerd[1511]: time="2025-02-13T19:29:44.374171446Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:29:44.374528 containerd[1511]: time="2025-02-13T19:29:44.374206643Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:29:44.374528 containerd[1511]: time="2025-02-13T19:29:44.374233044Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:29:44.374528 containerd[1511]: time="2025-02-13T19:29:44.374257237Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:29:44.374528 containerd[1511]: time="2025-02-13T19:29:44.374455598Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:29:44.375276 containerd[1511]: time="2025-02-13T19:29:44.375247207Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 19:29:44.375585 containerd[1511]: time="2025-02-13T19:29:44.375546624Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:29:44.375705 containerd[1511]: time="2025-02-13T19:29:44.375685791Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375785766Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375815119Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375842050Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375865887Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375890864Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375915475Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375937225Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375959091Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.375979818Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.376012655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.376035555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.376055446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.376077285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.376453 containerd[1511]: time="2025-02-13T19:29:44.376095443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376116829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376135704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376157674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376183512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376208316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376229178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376268353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376291591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376316500Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376347281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376367785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.379438 containerd[1511]: time="2025-02-13T19:29:44.376384387Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:29:44.383240 containerd[1511]: time="2025-02-13T19:29:44.379974811Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:29:44.383240 containerd[1511]: time="2025-02-13T19:29:44.380024590Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:29:44.383240 containerd[1511]: time="2025-02-13T19:29:44.380049835Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:29:44.383240 containerd[1511]: time="2025-02-13T19:29:44.380074904Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:29:44.383240 containerd[1511]: time="2025-02-13T19:29:44.380095392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:29:44.383240 containerd[1511]: time="2025-02-13T19:29:44.380118174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:29:44.383240 containerd[1511]: time="2025-02-13T19:29:44.380136158Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:29:44.383240 containerd[1511]: time="2025-02-13T19:29:44.380154227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:29:44.383701 containerd[1511]: time="2025-02-13T19:29:44.380666892Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:29:44.383701 containerd[1511]: time="2025-02-13T19:29:44.380747917Z" level=info msg="Connect containerd service" Feb 13 19:29:44.383701 containerd[1511]: time="2025-02-13T19:29:44.380811753Z" level=info msg="using legacy CRI server" Feb 13 19:29:44.383701 containerd[1511]: time="2025-02-13T19:29:44.380824671Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:29:44.383701 containerd[1511]: time="2025-02-13T19:29:44.381085866Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:29:44.386843 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
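
[Annotation] The "Start cri plugin with config" dump above shows the runc runtime registered with Options:map[SystemdCgroup:true], the overlayfs snapshotter, and runc as the default runtime. For orientation, the containerd 1.x config.toml stanza that produces those settings looks roughly like the sketch below; the TOML keys follow containerd's documented CRI plugin layout, and the output path is illustrative only:

```python
# Illustrative: the config.toml fragment behind "SystemdCgroup:true"
# in the CRI config dump above (containerd 1.x CRI plugin layout).
CONFIG = """\
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
"""

with open("/tmp/containerd-config-example.toml", "w") as f:  # illustrative path
    f.write(CONFIG)
print(CONFIG)
```
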
Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.394414546Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395015984Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395096537Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395145473Z" level=info msg="Start subscribing containerd event" Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395217351Z" level=info msg="Start recovering state" Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395330752Z" level=info msg="Start event monitor" Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395348285Z" level=info msg="Start snapshots syncer" Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395372675Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395395167Z" level=info msg="Start streaming server" Feb 13 19:29:44.396855 containerd[1511]: time="2025-02-13T19:29:44.395476154Z" level=info msg="containerd successfully booted in 0.112535s" Feb 13 19:29:44.400852 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Feb 13 19:29:44.408193 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:29:44.457094 init.sh[1591]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 13 19:29:44.457525 init.sh[1591]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 13 19:29:44.457525 init.sh[1591]: + /usr/bin/google_instance_setup Feb 13 19:29:44.474363 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:29:44.548889 sshd[1566]: Accepted publickey for core from 147.75.109.163 port 34858 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:29:44.552410 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:44.583366 systemd-logind[1487]: New session 1 of user core. Feb 13 19:29:44.585840 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:29:44.605827 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:29:44.650128 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:29:44.673106 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:29:44.711343 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:29:44.721716 systemd-logind[1487]: New session c1 of user core. Feb 13 19:29:44.982763 tar[1504]: linux-amd64/README.md Feb 13 19:29:45.010100 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:29:45.077162 systemd[1602]: Queued start job for default target default.target. Feb 13 19:29:45.082946 systemd[1602]: Created slice app.slice - User Application Slice. Feb 13 19:29:45.082995 systemd[1602]: Reached target paths.target - Paths. Feb 13 19:29:45.083066 systemd[1602]: Reached target timers.target - Timers. Feb 13 19:29:45.087667 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket... 
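
[Annotation] The "failed to load cni during init ... no network config found in /etc/cni/net.d" error above is expected at this stage: no CNI conflist has been installed yet (a CNI add-on normally provides one after cluster bootstrap). A minimal bridge/host-local conflist of the shape the CRI plugin looks for, as a hedged sketch; the network name, subnet, and output path are made up for illustration:

```python
# Illustrative: write a minimal CNI conflist of the kind the CRI plugin
# expects to find in /etc/cni/net.d (names and subnet are placeholders).
import json

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

with open("/tmp/10-example.conflist", "w") as f:   # normally /etc/cni/net.d/
    json.dump(conflist, f, indent=2)
```
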
Feb 13 19:29:45.106293 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:29:45.106514 systemd[1602]: Reached target sockets.target - Sockets. Feb 13 19:29:45.106599 systemd[1602]: Reached target basic.target - Basic System. Feb 13 19:29:45.106674 systemd[1602]: Reached target default.target - Main User Target. Feb 13 19:29:45.106728 systemd[1602]: Startup finished in 365ms. Feb 13 19:29:45.107046 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:29:45.122813 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:29:45.377995 systemd[1]: Started sshd@1-10.128.0.10:22-147.75.109.163:34862.service - OpenSSH per-connection server daemon (147.75.109.163:34862). Feb 13 19:29:45.407195 instance-setup[1598]: INFO Running google_set_multiqueue. Feb 13 19:29:45.431124 instance-setup[1598]: INFO Set channels for eth0 to 2. Feb 13 19:29:45.435893 instance-setup[1598]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 13 19:29:45.438175 instance-setup[1598]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 13 19:29:45.438239 instance-setup[1598]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 13 19:29:45.440583 instance-setup[1598]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 13 19:29:45.440654 instance-setup[1598]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 13 19:29:45.442991 instance-setup[1598]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 13 19:29:45.443047 instance-setup[1598]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 13 19:29:45.445230 instance-setup[1598]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 13 19:29:45.455054 instance-setup[1598]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 19:29:45.459731 instance-setup[1598]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 19:29:45.461783 instance-setup[1598]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 19:29:45.461946 instance-setup[1598]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 19:29:45.486001 init.sh[1591]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 19:29:45.658429 startup-script[1648]: INFO Starting startup scripts. Feb 13 19:29:45.665645 startup-script[1648]: INFO No startup scripts found in metadata. Feb 13 19:29:45.665727 startup-script[1648]: INFO Finished running startup scripts. Feb 13 19:29:45.686537 init.sh[1591]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 19:29:45.686537 init.sh[1591]: + daemon_pids=() Feb 13 19:29:45.686537 init.sh[1591]: + for d in accounts clock_skew network Feb 13 19:29:45.686783 init.sh[1651]: + /usr/bin/google_accounts_daemon Feb 13 19:29:45.687642 init.sh[1591]: + daemon_pids+=($!) Feb 13 19:29:45.687642 init.sh[1591]: + for d in accounts clock_skew network Feb 13 19:29:45.687642 init.sh[1591]: + daemon_pids+=($!) Feb 13 19:29:45.687811 init.sh[1591]: + for d in accounts clock_skew network Feb 13 19:29:45.688458 init.sh[1591]: + daemon_pids+=($!) 
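
[Annotation] google_set_multiqueue above pins each virtio-net queue IRQ to one vCPU and sets XPS so each TX queue is transmitted from one CPU: IRQs 31/32 go to cpu0, 33/34 to cpu1, and "Queue 0 XPS=1" / "Queue 1 XPS=2" are hex CPU bitmasks. A sketch of the same sysfs writes; the /proc/irq and xps_cpus paths are the standard kernel interfaces the log itself names, the IRQ numbers are specific to this boot, and root is required:

```python
# Sketch of what google_set_multiqueue does: pin IRQs and set XPS bitmasks.
# /proc/irq/<n>/smp_affinity_list and .../queues/tx-<q>/xps_cpus are the
# standard kernel interfaces; the IRQ numbers below come from this log only.
IRQ_TO_CPU = {31: 0, 32: 0, 33: 1, 34: 1}   # virtio1 queues, per the log

for irq, cpu in IRQ_TO_CPU.items():
    with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
        f.write(str(cpu))

for queue, cpu in enumerate([0, 1]):        # tx-0 -> cpu0, tx-1 -> cpu1
    mask = 1 << cpu                         # XPS takes a CPU bitmask in hex
    with open(f"/sys/class/net/eth0/queues/tx-{queue}/xps_cpus", "w") as f:
        f.write(f"{mask:x}")                # "1" for cpu0, "2" for cpu1
```
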
Feb 13 19:29:45.688458 init.sh[1591]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 19:29:45.688458 init.sh[1591]: + /usr/bin/systemd-notify --ready Feb 13 19:29:45.689894 init.sh[1653]: + /usr/bin/google_network_daemon Feb 13 19:29:45.690229 init.sh[1652]: + /usr/bin/google_clock_skew_daemon Feb 13 19:29:45.704421 systemd[1]: Started oem-gce.service - GCE Linux Agent. Feb 13 19:29:45.714451 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 34862 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:29:45.717417 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:45.720479 init.sh[1591]: + wait -n 1651 1652 1653 Feb 13 19:29:45.735783 systemd-logind[1487]: New session 2 of user core. Feb 13 19:29:45.741719 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:29:45.959629 sshd[1655]: Connection closed by 147.75.109.163 port 34862 Feb 13 19:29:45.958043 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:45.968624 systemd[1]: sshd@1-10.128.0.10:22-147.75.109.163:34862.service: Deactivated successfully. Feb 13 19:29:45.974134 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:29:45.978249 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:29:45.980204 systemd-logind[1487]: Removed session 2. Feb 13 19:29:46.022304 systemd[1]: Started sshd@2-10.128.0.10:22-147.75.109.163:34876.service - OpenSSH per-connection server daemon (147.75.109.163:34876). Feb 13 19:29:46.109198 google-networking[1653]: INFO Starting Google Networking daemon. Feb 13 19:29:46.173538 groupadd[1670]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 19:29:46.184142 groupadd[1670]: group added to /etc/gshadow: name=google-sudoers Feb 13 19:29:46.187409 google-clock-skew[1652]: INFO Starting Google Clock Skew daemon. Feb 13 19:29:46.193960 google-clock-skew[1652]: INFO Clock drift token has changed: 0. Feb 13 19:29:46.238726 groupadd[1670]: new group: name=google-sudoers, GID=1000 Feb 13 19:29:46.277685 google-accounts[1651]: INFO Starting Google Accounts daemon. Feb 13 19:29:46.292275 google-accounts[1651]: WARNING OS Login not installed. Feb 13 19:29:46.294318 google-accounts[1651]: INFO Creating a new user account for 0. Feb 13 19:29:46.299699 init.sh[1679]: useradd: invalid user name '0': use --badname to ignore Feb 13 19:29:46.300041 google-accounts[1651]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 13 19:29:46.367441 sshd[1666]: Accepted publickey for core from 147.75.109.163 port 34876 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:29:46.369235 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:46.379425 systemd-logind[1487]: New session 3 of user core. Feb 13 19:29:46.382803 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:29:46.424277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:29:46.437477 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:29:46.441060 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:29:46.448827 systemd[1]: Startup finished in 1.028s (kernel) + 14.102s (initrd) + 9.654s (userspace) = 24.785s. 
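
[Annotation] In the init.sh trace above, the script forks the three guest-agent daemons, then signals readiness with NOTIFY_SOCKET=/run/systemd/notify and systemd-notify --ready, so oem-gce.service (presumably a Type=notify unit) is marked started only once the daemons are launched. The protocol is just a datagram on a unix socket; a minimal sketch, including systemd's abstract-namespace form where the socket path starts with "@":

```python
# Minimal sd_notify: send "READY=1" to the socket systemd passes in
# NOTIFY_SOCKET, as systemd-notify --ready does in the init.sh trace above.
import os
import socket

def sd_notify(state: str = "READY=1") -> None:
    path = os.environ.get("NOTIFY_SOCKET")
    if not path:
        return                      # not running under a Type=notify unit
    if path.startswith("@"):        # abstract-namespace socket
        path = "\0" + path[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
        s.sendto(state.encode(), path)

sd_notify()
```
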
Feb 13 19:29:46.585652 sshd[1683]: Connection closed by 147.75.109.163 port 34876 Feb 13 19:29:46.587468 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:46.593685 systemd[1]: sshd@2-10.128.0.10:22-147.75.109.163:34876.service: Deactivated successfully. Feb 13 19:29:46.596935 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:29:46.598017 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:29:46.599820 systemd-logind[1487]: Removed session 3. Feb 13 19:29:47.000633 systemd-resolved[1327]: Clock change detected. Flushing caches. Feb 13 19:29:47.001046 google-clock-skew[1652]: INFO Synced system time with hardware clock. Feb 13 19:29:47.510752 ntpd[1474]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:a%2]:123 Feb 13 19:29:47.511408 ntpd[1474]: 13 Feb 19:29:47 ntpd[1474]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:a%2]:123 Feb 13 19:29:47.599446 kubelet[1687]: E0213 19:29:47.599271 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:29:47.602070 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:29:47.602306 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:29:47.603346 systemd[1]: kubelet.service: Consumed 1.199s CPU time, 254.4M memory peak. Feb 13 19:29:56.941563 systemd[1]: Started sshd@3-10.128.0.10:22-147.75.109.163:43012.service - OpenSSH per-connection server daemon (147.75.109.163:43012). Feb 13 19:29:57.227960 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 43012 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:29:57.229790 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:57.236799 systemd-logind[1487]: New session 4 of user core. Feb 13 19:29:57.243099 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:29:57.439355 sshd[1705]: Connection closed by 147.75.109.163 port 43012 Feb 13 19:29:57.440292 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:57.444598 systemd[1]: sshd@3-10.128.0.10:22-147.75.109.163:43012.service: Deactivated successfully. Feb 13 19:29:57.446958 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:29:57.449093 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:29:57.450584 systemd-logind[1487]: Removed session 4. Feb 13 19:29:57.502277 systemd[1]: Started sshd@4-10.128.0.10:22-147.75.109.163:43028.service - OpenSSH per-connection server daemon (147.75.109.163:43028). Feb 13 19:29:57.735620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:29:57.745288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:29:57.798281 sshd[1711]: Accepted publickey for core from 147.75.109.163 port 43028 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:29:57.799353 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:57.806339 systemd-logind[1487]: New session 5 of user core. Feb 13 19:29:57.817704 systemd[1]: Started session-5.scope - Session 5 of User core. 
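
[Annotation] The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the normal pre-bootstrap state: the unit starts kubelet with a --config path that kubeadm only writes during init/join, so the service crash-loops until then (see the "Scheduled restart job" records that follow). For orientation only, the file kubeadm eventually drops there is a KubeletConfiguration; its skeleton looks like the sketch below, where everything beyond apiVersion/kind is illustrative:

```python
# Illustrative skeleton of /var/lib/kubelet/config.yaml -- the file whose
# absence makes kubelet exit above. kubeadm generates the real one.
SKELETON = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # matches the SystemdCgroup=true runc option
"""
from pathlib import Path

path = Path("/var/lib/kubelet/config.yaml")
print(f"{path} exists: {path.exists()}")
print(SKELETON)
```
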
Feb 13 19:29:58.009909 sshd[1716]: Connection closed by 147.75.109.163 port 43028 Feb 13 19:29:58.011192 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:58.026782 systemd[1]: sshd@4-10.128.0.10:22-147.75.109.163:43028.service: Deactivated successfully. Feb 13 19:29:58.034905 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:29:58.040125 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:29:58.044413 systemd-logind[1487]: Removed session 5. Feb 13 19:29:58.061099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:29:58.065380 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:29:58.074390 systemd[1]: Started sshd@5-10.128.0.10:22-147.75.109.163:43044.service - OpenSSH per-connection server daemon (147.75.109.163:43044). Feb 13 19:29:58.152753 kubelet[1726]: E0213 19:29:58.152668 1726 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:29:58.157667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:29:58.157979 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:29:58.158586 systemd[1]: kubelet.service: Consumed 234ms CPU time, 102.6M memory peak. Feb 13 19:29:58.381219 sshd[1728]: Accepted publickey for core from 147.75.109.163 port 43044 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:29:58.383272 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:58.390778 systemd-logind[1487]: New session 6 of user core. Feb 13 19:29:58.402095 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:29:58.597698 sshd[1736]: Connection closed by 147.75.109.163 port 43044 Feb 13 19:29:58.598883 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:58.604944 systemd[1]: sshd@5-10.128.0.10:22-147.75.109.163:43044.service: Deactivated successfully. Feb 13 19:29:58.607776 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:29:58.608980 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:29:58.610443 systemd-logind[1487]: Removed session 6. Feb 13 19:29:58.654263 systemd[1]: Started sshd@6-10.128.0.10:22-147.75.109.163:43054.service - OpenSSH per-connection server daemon (147.75.109.163:43054). Feb 13 19:29:58.956259 sshd[1742]: Accepted publickey for core from 147.75.109.163 port 43054 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:29:58.958058 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:58.964401 systemd-logind[1487]: New session 7 of user core. Feb 13 19:29:58.978076 systemd[1]: Started session-7.scope - Session 7 of User core. 
Feb 13 19:29:59.152330 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:29:59.152887 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:59.167771 sudo[1745]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:59.210765 sshd[1744]: Connection closed by 147.75.109.163 port 43054 Feb 13 19:29:59.212235 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:59.218040 systemd[1]: sshd@6-10.128.0.10:22-147.75.109.163:43054.service: Deactivated successfully. Feb 13 19:29:59.220489 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:29:59.221706 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:29:59.223197 systemd-logind[1487]: Removed session 7. Feb 13 19:29:59.268244 systemd[1]: Started sshd@7-10.128.0.10:22-147.75.109.163:43064.service - OpenSSH per-connection server daemon (147.75.109.163:43064). Feb 13 19:29:59.564489 sshd[1751]: Accepted publickey for core from 147.75.109.163 port 43064 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:29:59.566484 sshd-session[1751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:29:59.572547 systemd-logind[1487]: New session 8 of user core. Feb 13 19:29:59.580091 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:29:59.743494 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:29:59.744033 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:59.748947 sudo[1755]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:59.762696 sudo[1754]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:29:59.763203 sudo[1754]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:29:59.779319 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:29:59.817743 augenrules[1777]: No rules Feb 13 19:29:59.818254 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:29:59.818573 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:29:59.820822 sudo[1754]: pam_unix(sudo:session): session closed for user root Feb 13 19:29:59.863725 sshd[1753]: Connection closed by 147.75.109.163 port 43064 Feb 13 19:29:59.864565 sshd-session[1751]: pam_unix(sshd:session): session closed for user core Feb 13 19:29:59.870016 systemd[1]: sshd@7-10.128.0.10:22-147.75.109.163:43064.service: Deactivated successfully. Feb 13 19:29:59.872406 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:29:59.873440 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:29:59.874991 systemd-logind[1487]: Removed session 8. Feb 13 19:29:59.923258 systemd[1]: Started sshd@8-10.128.0.10:22-147.75.109.163:57014.service - OpenSSH per-connection server daemon (147.75.109.163:57014). Feb 13 19:30:00.206798 sshd[1786]: Accepted publickey for core from 147.75.109.163 port 57014 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:30:00.208558 sshd-session[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:30:00.215429 systemd-logind[1487]: New session 9 of user core. Feb 13 19:30:00.222103 systemd[1]: Started session-9.scope - Session 9 of User core. 
Feb 13 19:30:00.383959 sudo[1789]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:30:00.384436 sudo[1789]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:30:00.830280 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:30:00.833168 (dockerd)[1806]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:30:01.274025 dockerd[1806]: time="2025-02-13T19:30:01.273932223Z" level=info msg="Starting up" Feb 13 19:30:01.430934 dockerd[1806]: time="2025-02-13T19:30:01.430580014Z" level=info msg="Loading containers: start." Feb 13 19:30:01.640880 kernel: Initializing XFRM netlink socket Feb 13 19:30:01.756390 systemd-networkd[1402]: docker0: Link UP Feb 13 19:30:01.794659 dockerd[1806]: time="2025-02-13T19:30:01.794599737Z" level=info msg="Loading containers: done." Feb 13 19:30:01.813074 dockerd[1806]: time="2025-02-13T19:30:01.813004175Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:30:01.813296 dockerd[1806]: time="2025-02-13T19:30:01.813140307Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:30:01.813372 dockerd[1806]: time="2025-02-13T19:30:01.813293786Z" level=info msg="Daemon has completed initialization" Feb 13 19:30:01.855069 dockerd[1806]: time="2025-02-13T19:30:01.854929776Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:30:01.855381 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:30:02.708262 containerd[1511]: time="2025-02-13T19:30:02.708206939Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:30:03.171298 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount877430924.mount: Deactivated successfully. 
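
[Annotation] dockerd's warning above ("Not using native diff for overlay2 ... kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled") means the daemon falls back to a slower diff path, since redirect_dir-enabled overlayfs can produce layers that native diff cannot read correctly. One way to check the running kernel for that option, assuming it exposes /proc/config.gz (otherwise look at /boot/config-$(uname -r)):

```python
# Check whether the running kernel was built with
# CONFIG_OVERLAY_FS_REDIRECT_DIR, the option dockerd warns about above.
import gzip

with gzip.open("/proc/config.gz", "rt") as f:     # may not exist on all kernels
    for line in f:
        if line.startswith("CONFIG_OVERLAY_FS_REDIRECT_DIR"):
            print(line.strip())                   # e.g. ...=y
            break
    else:
        print("CONFIG_OVERLAY_FS_REDIRECT_DIR is not set")
```
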
Feb 13 19:30:04.680472 containerd[1511]: time="2025-02-13T19:30:04.680393063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:04.682105 containerd[1511]: time="2025-02-13T19:30:04.682043720Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28680559" Feb 13 19:30:04.683440 containerd[1511]: time="2025-02-13T19:30:04.683358869Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:04.687047 containerd[1511]: time="2025-02-13T19:30:04.686985957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:04.689010 containerd[1511]: time="2025-02-13T19:30:04.688459591Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 1.980194772s" Feb 13 19:30:04.689010 containerd[1511]: time="2025-02-13T19:30:04.688510837Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:30:04.689610 containerd[1511]: time="2025-02-13T19:30:04.689562505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:30:06.101332 containerd[1511]: time="2025-02-13T19:30:06.101259000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:06.102900 containerd[1511]: time="2025-02-13T19:30:06.102824993Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24773718" Feb 13 19:30:06.104089 containerd[1511]: time="2025-02-13T19:30:06.104015640Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:06.108309 containerd[1511]: time="2025-02-13T19:30:06.107678610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:06.109221 containerd[1511]: time="2025-02-13T19:30:06.109176869Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.419572823s" Feb 13 19:30:06.109326 containerd[1511]: time="2025-02-13T19:30:06.109225590Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:30:06.109887 
containerd[1511]: time="2025-02-13T19:30:06.109814195Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:30:07.309386 containerd[1511]: time="2025-02-13T19:30:07.309306323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:07.311028 containerd[1511]: time="2025-02-13T19:30:07.310930133Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19172192" Feb 13 19:30:07.312204 containerd[1511]: time="2025-02-13T19:30:07.312137145Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:07.316170 containerd[1511]: time="2025-02-13T19:30:07.315619468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:07.317073 containerd[1511]: time="2025-02-13T19:30:07.317030088Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.206260582s" Feb 13 19:30:07.317073 containerd[1511]: time="2025-02-13T19:30:07.317078652Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:30:07.317648 containerd[1511]: time="2025-02-13T19:30:07.317616586Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:30:08.349898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:30:08.360106 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:08.395095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907304191.mount: Deactivated successfully. Feb 13 19:30:08.679134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:08.689590 (kubelet)[2072]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:30:08.777915 kubelet[2072]: E0213 19:30:08.777235 2072 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:30:08.781686 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:30:08.782772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:30:08.783309 systemd[1]: kubelet.service: Consumed 215ms CPU time, 103M memory peak. 
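
[Annotation] Each "Pulled image" record above pairs an image size with a wall-clock duration (e.g. kube-apiserver: 28670731 bytes in 1.980194772s), so effective registry throughput falls straight out. A small computation over the three pulls completed so far, with the numbers copied verbatim from the log:

```python
# Effective pull throughput for the image pulls logged above
# (size in bytes and duration in seconds, copied from the log records).
PULLS = {
    "kube-apiserver:v1.32.2":          (28_670_731, 1.980194772),
    "kube-controller-manager:v1.32.2": (26_259_392, 1.419572823),
    "kube-scheduler:v1.32.2":          (20_657_902, 1.206260582),
}

for image, (size, secs) in PULLS.items():
    mib_s = size / secs / 2**20
    print(f"{image:35s} {mib_s:6.1f} MiB/s")
```
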
Feb 13 19:30:09.313854 containerd[1511]: time="2025-02-13T19:30:09.313771733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:09.315336 containerd[1511]: time="2025-02-13T19:30:09.315109914Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30910734" Feb 13 19:30:09.317902 containerd[1511]: time="2025-02-13T19:30:09.316588483Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:09.319754 containerd[1511]: time="2025-02-13T19:30:09.319711346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:09.320865 containerd[1511]: time="2025-02-13T19:30:09.320612630Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.002948646s" Feb 13 19:30:09.320865 containerd[1511]: time="2025-02-13T19:30:09.320659215Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:30:09.321267 containerd[1511]: time="2025-02-13T19:30:09.321229866Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:30:09.747804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount671122229.mount: Deactivated successfully. 
Feb 13 19:30:10.886130 containerd[1511]: time="2025-02-13T19:30:10.886056792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:10.887849 containerd[1511]: time="2025-02-13T19:30:10.887760403Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18571883" Feb 13 19:30:10.889237 containerd[1511]: time="2025-02-13T19:30:10.889170893Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:10.894745 containerd[1511]: time="2025-02-13T19:30:10.893032505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:10.894745 containerd[1511]: time="2025-02-13T19:30:10.894575623Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.573301494s" Feb 13 19:30:10.894745 containerd[1511]: time="2025-02-13T19:30:10.894615802Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:30:10.895530 containerd[1511]: time="2025-02-13T19:30:10.895450917Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:30:11.307794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249265109.mount: Deactivated successfully. 
Feb 13 19:30:11.314267 containerd[1511]: time="2025-02-13T19:30:11.314198190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.315502 containerd[1511]: time="2025-02-13T19:30:11.315443059Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Feb 13 19:30:11.316673 containerd[1511]: time="2025-02-13T19:30:11.316600359Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.319510 containerd[1511]: time="2025-02-13T19:30:11.319447241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:11.320869 containerd[1511]: time="2025-02-13T19:30:11.320492557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 424.809126ms" Feb 13 19:30:11.320869 containerd[1511]: time="2025-02-13T19:30:11.320539751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:30:11.321791 containerd[1511]: time="2025-02-13T19:30:11.321744143Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:30:11.742109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527099418.mount: Deactivated successfully. Feb 13 19:30:13.946125 containerd[1511]: time="2025-02-13T19:30:13.946053738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:13.947875 containerd[1511]: time="2025-02-13T19:30:13.947797847Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57557903" Feb 13 19:30:13.948975 containerd[1511]: time="2025-02-13T19:30:13.948933951Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:13.952864 containerd[1511]: time="2025-02-13T19:30:13.952786010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:13.954589 containerd[1511]: time="2025-02-13T19:30:13.954406442Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.632621045s" Feb 13 19:30:13.954589 containerd[1511]: time="2025-02-13T19:30:13.954448989Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:30:14.538188 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
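As a rough cross-check of these pull timings: the etcd pull read 57,557,903 bytes in about 2.63 s, roughly 22 MB/s of effective registry throughput, while the tiny pause image (322,072 bytes in ~425 ms) is dominated by round-trip latency rather than bandwidth, which is why its per-byte rate looks far worse.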
Feb 13 19:30:18.078612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:18.079064 systemd[1]: kubelet.service: Consumed 215ms CPU time, 103M memory peak. Feb 13 19:30:18.089238 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:18.129551 systemd[1]: Reload requested from client PID 2220 ('systemctl') (unit session-9.scope)... Feb 13 19:30:18.129790 systemd[1]: Reloading... Feb 13 19:30:18.310312 zram_generator::config[2268]: No configuration found. Feb 13 19:30:18.467766 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:30:18.611459 systemd[1]: Reloading finished in 480 ms. Feb 13 19:30:18.671166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:18.681559 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:18.684474 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:30:18.684807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:18.684893 systemd[1]: kubelet.service: Consumed 146ms CPU time, 91.7M memory peak. Feb 13 19:30:18.687133 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:19.129082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:19.139528 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:30:19.212878 kubelet[2318]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:19.212878 kubelet[2318]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:30:19.212878 kubelet[2318]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
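Each deprecation warning above has a config-file counterpart. A sketch of the config.yaml equivalents follows; the field names are taken from the KubeletConfiguration reference as I recall it (containerRuntimeEndpoint exists on recent kubelets, and this node runs v1.32.0), so treat them as assumptions to verify against your kubelet version. The third flag, --pod-infra-container-image, has no config equivalent: per the warning it is being removed, with the sandbox image coming from the CRI runtime's own config instead.

    # config.yaml equivalents for the deprecated flags warned about above (a sketch)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock       # replaces --container-runtime-endpoint
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/  # replaces --volume-plugin-dir; path as logged below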
Feb 13 19:30:19.212878 kubelet[2318]: I0213 19:30:19.212768 2318 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:30:19.655761 kubelet[2318]: I0213 19:30:19.655711 2318 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:30:19.655761 kubelet[2318]: I0213 19:30:19.655755 2318 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:30:19.656868 kubelet[2318]: I0213 19:30:19.656661 2318 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:30:19.698355 kubelet[2318]: E0213 19:30:19.698297 2318 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:19.700576 kubelet[2318]: I0213 19:30:19.700378 2318 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:30:19.713746 kubelet[2318]: E0213 19:30:19.713687 2318 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:30:19.713746 kubelet[2318]: I0213 19:30:19.713738 2318 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:30:19.717737 kubelet[2318]: I0213 19:30:19.717696 2318 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:30:19.719141 kubelet[2318]: I0213 19:30:19.719075 2318 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:30:19.719401 kubelet[2318]: I0213 19:30:19.719129 2318 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:30:19.719401 kubelet[2318]: I0213 19:30:19.719392 2318 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:30:19.719648 kubelet[2318]: I0213 19:30:19.719410 2318 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:30:19.719648 kubelet[2318]: I0213 19:30:19.719602 2318 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:19.726355 kubelet[2318]: I0213 19:30:19.726223 2318 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:30:19.726355 kubelet[2318]: I0213 19:30:19.726327 2318 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:30:19.726355 kubelet[2318]: I0213 19:30:19.726359 2318 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:30:19.726568 kubelet[2318]: I0213 19:30:19.726376 2318 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:30:19.734255 kubelet[2318]: W0213 19:30:19.734150 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.10:6443: connect: connection refused Feb 13 19:30:19.734396 kubelet[2318]: E0213 19:30:19.734265 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 
10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:19.736770 kubelet[2318]: W0213 19:30:19.735961 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.10:6443: connect: connection refused Feb 13 19:30:19.736770 kubelet[2318]: E0213 19:30:19.736032 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:19.736770 kubelet[2318]: I0213 19:30:19.736167 2318 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:30:19.737011 kubelet[2318]: I0213 19:30:19.736789 2318 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:30:19.737011 kubelet[2318]: W0213 19:30:19.736929 2318 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:30:19.741003 kubelet[2318]: I0213 19:30:19.740561 2318 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:30:19.741003 kubelet[2318]: I0213 19:30:19.740619 2318 server.go:1287] "Started kubelet" Feb 13 19:30:19.751456 kubelet[2318]: I0213 19:30:19.751088 2318 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:30:19.754689 kubelet[2318]: E0213 19:30:19.751274 2318 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal.1823db50774d8c9c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,UID:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 19:30:19.74058102 +0000 UTC m=+0.595458921,LastTimestamp:2025-02-13 19:30:19.74058102 +0000 UTC m=+0.595458921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,}" Feb 13 19:30:19.757248 kubelet[2318]: E0213 19:30:19.757208 2318 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:30:19.761249 kubelet[2318]: I0213 19:30:19.760848 2318 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:30:19.761733 kubelet[2318]: E0213 19:30:19.761707 2318 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" Feb 13 19:30:19.763864 kubelet[2318]: I0213 19:30:19.761724 2318 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:30:19.763864 kubelet[2318]: I0213 19:30:19.762192 2318 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:30:19.763864 kubelet[2318]: I0213 19:30:19.763786 2318 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:30:19.766232 kubelet[2318]: I0213 19:30:19.766207 2318 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:30:19.766443 kubelet[2318]: I0213 19:30:19.766425 2318 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:30:19.767170 kubelet[2318]: I0213 19:30:19.761791 2318 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:30:19.767407 kubelet[2318]: I0213 19:30:19.767382 2318 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:30:19.769107 kubelet[2318]: W0213 19:30:19.769042 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.10:6443: connect: connection refused Feb 13 19:30:19.769273 kubelet[2318]: E0213 19:30:19.769124 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:19.771552 kubelet[2318]: I0213 19:30:19.771518 2318 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:30:19.771552 kubelet[2318]: I0213 19:30:19.771548 2318 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:30:19.771708 kubelet[2318]: I0213 19:30:19.771631 2318 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:30:19.773971 kubelet[2318]: E0213 19:30:19.773763 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.10:6443: connect: connection refused" interval="200ms" Feb 13 19:30:19.786532 kubelet[2318]: I0213 19:30:19.786337 2318 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:30:19.789206 kubelet[2318]: I0213 19:30:19.788751 2318 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:30:19.789206 kubelet[2318]: I0213 19:30:19.788785 2318 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:30:19.789206 kubelet[2318]: I0213 19:30:19.788816 2318 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:30:19.789206 kubelet[2318]: I0213 19:30:19.788876 2318 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:30:19.789206 kubelet[2318]: E0213 19:30:19.788949 2318 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:30:19.799776 kubelet[2318]: W0213 19:30:19.799700 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.10:6443: connect: connection refused Feb 13 19:30:19.800191 kubelet[2318]: E0213 19:30:19.800145 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:19.813304 kubelet[2318]: I0213 19:30:19.813214 2318 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:30:19.813304 kubelet[2318]: I0213 19:30:19.813252 2318 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:30:19.813304 kubelet[2318]: I0213 19:30:19.813299 2318 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:19.862591 kubelet[2318]: E0213 19:30:19.862540 2318 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" Feb 13 19:30:19.904269 kubelet[2318]: E0213 19:30:19.890002 2318 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:30:19.963435 kubelet[2318]: E0213 19:30:19.963279 2318 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" Feb 13 19:30:19.966264 kubelet[2318]: I0213 19:30:19.965939 2318 policy_none.go:49] "None policy: Start" Feb 13 19:30:19.966264 kubelet[2318]: I0213 19:30:19.965986 2318 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:30:19.966264 kubelet[2318]: I0213 19:30:19.966011 2318 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:30:19.975346 kubelet[2318]: E0213 19:30:19.975300 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.10:6443: connect: connection refused" interval="400ms" Feb 13 19:30:20.063512 kubelet[2318]: E0213 19:30:20.063436 2318 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" Feb 13 19:30:20.105561 kubelet[2318]: E0213 19:30:20.091030 2318 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:30:20.114573 systemd[1]: Created slice 
kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:30:20.127319 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:30:20.132551 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:30:20.141326 kubelet[2318]: I0213 19:30:20.141034 2318 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:30:20.142023 kubelet[2318]: I0213 19:30:20.141372 2318 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:30:20.142023 kubelet[2318]: I0213 19:30:20.141389 2318 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:30:20.142023 kubelet[2318]: I0213 19:30:20.141705 2318 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:30:20.143911 kubelet[2318]: E0213 19:30:20.143880 2318 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:30:20.144047 kubelet[2318]: E0213 19:30:20.143947 2318 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" Feb 13 19:30:20.253761 kubelet[2318]: I0213 19:30:20.253613 2318 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.255305 kubelet[2318]: E0213 19:30:20.254770 2318 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.10:6443/api/v1/nodes\": dial tcp 10.128.0.10:6443: connect: connection refused" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.376438 kubelet[2318]: E0213 19:30:20.376362 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.10:6443: connect: connection refused" interval="800ms" Feb 13 19:30:20.460315 kubelet[2318]: I0213 19:30:20.460269 2318 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.460755 kubelet[2318]: E0213 19:30:20.460692 2318 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.10:6443/api/v1/nodes\": dial tcp 10.128.0.10:6443: connect: connection refused" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.511597 systemd[1]: Created slice kubepods-burstable-podfba431dc4f4f8fd6f79fccd6e65fa1cb.slice - libcontainer container kubepods-burstable-podfba431dc4f4f8fd6f79fccd6e65fa1cb.slice. Feb 13 19:30:20.528272 kubelet[2318]: E0213 19:30:20.527968 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.533337 systemd[1]: Created slice kubepods-burstable-podbfd46dccffd1ee8894d539f39ad98cd4.slice - libcontainer container kubepods-burstable-podbfd46dccffd1ee8894d539f39ad98cd4.slice. 
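The slice names created above are systematic: kubepods.slice holds all pods, one child slice per QoS class, and one slice per pod whose name embeds the pod UID. From the entries so far (the UID-to-component mapping is confirmed by the volume entries that follow):

    kubepods.slice
    ├─kubepods-besteffort.slice
    └─kubepods-burstable.slice
      ├─kubepods-burstable-podfba431dc4f4f8fd6f79fccd6e65fa1cb.slice   (kube-apiserver)
      └─kubepods-burstable-podbfd46dccffd1ee8894d539f39ad98cd4.slice   (kube-controller-manager)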
Feb 13 19:30:20.536578 kubelet[2318]: E0213 19:30:20.536545 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.539937 systemd[1]: Created slice kubepods-burstable-pod01befce2951f1d33e2009b135ecbe279.slice - libcontainer container kubepods-burstable-pod01befce2951f1d33e2009b135ecbe279.slice. Feb 13 19:30:20.542447 kubelet[2318]: E0213 19:30:20.542409 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.571819 kubelet[2318]: I0213 19:30:20.571743 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fba431dc4f4f8fd6f79fccd6e65fa1cb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"fba431dc4f4f8fd6f79fccd6e65fa1cb\") " pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.571819 kubelet[2318]: I0213 19:30:20.571809 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.571819 kubelet[2318]: I0213 19:30:20.571875 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.572300 kubelet[2318]: I0213 19:30:20.571906 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.572300 kubelet[2318]: I0213 19:30:20.571938 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/01befce2951f1d33e2009b135ecbe279-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"01befce2951f1d33e2009b135ecbe279\") " pod="kube-system/kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.572300 kubelet[2318]: I0213 19:30:20.571967 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fba431dc4f4f8fd6f79fccd6e65fa1cb-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"fba431dc4f4f8fd6f79fccd6e65fa1cb\") " pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.572300 kubelet[2318]: I0213 19:30:20.571997 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fba431dc4f4f8fd6f79fccd6e65fa1cb-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"fba431dc4f4f8fd6f79fccd6e65fa1cb\") " pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.572419 kubelet[2318]: I0213 19:30:20.572023 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.572419 kubelet[2318]: I0213 19:30:20.572052 2318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:20.637578 kubelet[2318]: W0213 19:30:20.637476 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.10:6443: connect: connection refused Feb 13 19:30:20.637578 kubelet[2318]: E0213 19:30:20.637539 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:20.829672 containerd[1511]: time="2025-02-13T19:30:20.829494361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,Uid:fba431dc4f4f8fd6f79fccd6e65fa1cb,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:20.838442 containerd[1511]: time="2025-02-13T19:30:20.838383271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,Uid:bfd46dccffd1ee8894d539f39ad98cd4,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:20.844289 containerd[1511]: time="2025-02-13T19:30:20.844246076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,Uid:01befce2951f1d33e2009b135ecbe279,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:20.871135 kubelet[2318]: I0213 19:30:20.871091 2318 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 
19:30:20.871570 kubelet[2318]: E0213 19:30:20.871518 2318 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.10:6443/api/v1/nodes\": dial tcp 10.128.0.10:6443: connect: connection refused" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:21.018012 kubelet[2318]: W0213 19:30:21.017920 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.10:6443: connect: connection refused Feb 13 19:30:21.018171 kubelet[2318]: E0213 19:30:21.018019 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:21.108392 kubelet[2318]: W0213 19:30:21.108327 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.10:6443: connect: connection refused Feb 13 19:30:21.108573 kubelet[2318]: E0213 19:30:21.108413 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:21.177867 kubelet[2318]: E0213 19:30:21.177792 2318 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.10:6443: connect: connection refused" interval="1.6s" Feb 13 19:30:21.248226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018562422.mount: Deactivated successfully. 
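Note the lease controller's retry cadence while the API server at 10.128.0.10:6443 is still refusing connections: interval="200ms", then "400ms", "800ms", and now "1.6s" in the entries above, a doubling exponential backoff. This is the normal pattern until the kube-apiserver container started later in this log begins serving.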
Feb 13 19:30:21.257460 containerd[1511]: time="2025-02-13T19:30:21.257399353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:21.262406 containerd[1511]: time="2025-02-13T19:30:21.262331508Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 19:30:21.263502 containerd[1511]: time="2025-02-13T19:30:21.263432288Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:21.265211 containerd[1511]: time="2025-02-13T19:30:21.265041404Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:21.268452 containerd[1511]: time="2025-02-13T19:30:21.268379314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:30:21.273857 containerd[1511]: time="2025-02-13T19:30:21.272548306Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:21.273857 containerd[1511]: time="2025-02-13T19:30:21.272611514Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:30:21.274036 kubelet[2318]: W0213 19:30:21.273541 2318 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.10:6443: connect: connection refused Feb 13 19:30:21.274036 kubelet[2318]: E0213 19:30:21.273645 2318 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:21.279663 containerd[1511]: time="2025-02-13T19:30:21.279617434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:30:21.281554 containerd[1511]: time="2025-02-13T19:30:21.280922824Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 442.414486ms" Feb 13 19:30:21.283611 containerd[1511]: time="2025-02-13T19:30:21.283555130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 453.263046ms" Feb 13 19:30:21.288792 
containerd[1511]: time="2025-02-13T19:30:21.288736786Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 444.371681ms" Feb 13 19:30:21.492929 containerd[1511]: time="2025-02-13T19:30:21.491880640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:21.492929 containerd[1511]: time="2025-02-13T19:30:21.491964860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:21.492929 containerd[1511]: time="2025-02-13T19:30:21.491993257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:21.492929 containerd[1511]: time="2025-02-13T19:30:21.492130910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:21.494557 containerd[1511]: time="2025-02-13T19:30:21.494022682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:21.494557 containerd[1511]: time="2025-02-13T19:30:21.494172718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:21.496068 containerd[1511]: time="2025-02-13T19:30:21.494956583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:21.498199 containerd[1511]: time="2025-02-13T19:30:21.496618371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:21.507543 containerd[1511]: time="2025-02-13T19:30:21.507041919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:21.507543 containerd[1511]: time="2025-02-13T19:30:21.507148204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:21.507801 containerd[1511]: time="2025-02-13T19:30:21.507192463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:21.507801 containerd[1511]: time="2025-02-13T19:30:21.507450796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:21.547063 systemd[1]: Started cri-containerd-0a8243a71e99507f59abb87393f1414a600cc6467a491f050dc92ba1b351bf8b.scope - libcontainer container 0a8243a71e99507f59abb87393f1414a600cc6467a491f050dc92ba1b351bf8b. Feb 13 19:30:21.549562 systemd[1]: Started cri-containerd-32232a652dc167ca6f7653744e9ec7a30a3f9f44b61fc99e103ef7eafa0cc3b6.scope - libcontainer container 32232a652dc167ca6f7653744e9ec7a30a3f9f44b61fc99e103ef7eafa0cc3b6. Feb 13 19:30:21.556166 systemd[1]: Started cri-containerd-d7a00b34b42b04a086bfc7630b56c15b3919524370364b3bb01fba1c6b364029.scope - libcontainer container d7a00b34b42b04a086bfc7630b56c15b3919524370364b3bb01fba1c6b364029. 
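The three near-identical blocks of "loading plugin" lines above are one runc v2 shim starting per sandbox, each loading its event publisher, shutdown, task, and pause plugins; hence the repetition. With the systemd cgroup driver, each shim-managed container then gets a transient scope unit under its pod's slice, which is what the "Started cri-containerd-....scope" lines record:

    cri-containerd-<64-hex-container-id>.scope   # one transient unit per container, parented under the pod slice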
Feb 13 19:30:21.667566 containerd[1511]: time="2025-02-13T19:30:21.667053077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,Uid:01befce2951f1d33e2009b135ecbe279,Namespace:kube-system,Attempt:0,} returns sandbox id \"32232a652dc167ca6f7653744e9ec7a30a3f9f44b61fc99e103ef7eafa0cc3b6\"" Feb 13 19:30:21.671383 kubelet[2318]: E0213 19:30:21.670545 2318 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-21291" Feb 13 19:30:21.671536 containerd[1511]: time="2025-02-13T19:30:21.670544410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,Uid:fba431dc4f4f8fd6f79fccd6e65fa1cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a8243a71e99507f59abb87393f1414a600cc6467a491f050dc92ba1b351bf8b\"" Feb 13 19:30:21.674391 containerd[1511]: time="2025-02-13T19:30:21.674069096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,Uid:bfd46dccffd1ee8894d539f39ad98cd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7a00b34b42b04a086bfc7630b56c15b3919524370364b3bb01fba1c6b364029\"" Feb 13 19:30:21.676741 kubelet[2318]: E0213 19:30:21.676380 2318 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-21291" Feb 13 19:30:21.679422 containerd[1511]: time="2025-02-13T19:30:21.679388186Z" level=info msg="CreateContainer within sandbox \"32232a652dc167ca6f7653744e9ec7a30a3f9f44b61fc99e103ef7eafa0cc3b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:30:21.680932 kubelet[2318]: E0213 19:30:21.680893 2318 kubelet_pods.go:555] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flat" Feb 13 19:30:21.683818 kubelet[2318]: I0213 19:30:21.683773 2318 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:21.684232 containerd[1511]: time="2025-02-13T19:30:21.684092554Z" level=info msg="CreateContainer within sandbox \"0a8243a71e99507f59abb87393f1414a600cc6467a491f050dc92ba1b351bf8b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:30:21.685859 kubelet[2318]: E0213 19:30:21.685512 2318 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.128.0.10:6443/api/v1/nodes\": dial tcp 10.128.0.10:6443: connect: connection refused" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:21.688094 containerd[1511]: time="2025-02-13T19:30:21.688060430Z" level=info msg="CreateContainer within sandbox \"d7a00b34b42b04a086bfc7630b56c15b3919524370364b3bb01fba1c6b364029\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:30:21.706681 containerd[1511]: time="2025-02-13T19:30:21.706634433Z" level=info msg="CreateContainer within sandbox 
\"0a8243a71e99507f59abb87393f1414a600cc6467a491f050dc92ba1b351bf8b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"02d021158427dc6bdda0b628987164735fb19066c30b4f743cc33d7e2d2442ee\"" Feb 13 19:30:21.708132 containerd[1511]: time="2025-02-13T19:30:21.708059750Z" level=info msg="StartContainer for \"02d021158427dc6bdda0b628987164735fb19066c30b4f743cc33d7e2d2442ee\"" Feb 13 19:30:21.714034 containerd[1511]: time="2025-02-13T19:30:21.713911697Z" level=info msg="CreateContainer within sandbox \"32232a652dc167ca6f7653744e9ec7a30a3f9f44b61fc99e103ef7eafa0cc3b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4548bc0b70ce362c76ccf6d180d6070875af8b2e7dda797d2f9bc946dd02260e\"" Feb 13 19:30:21.715775 containerd[1511]: time="2025-02-13T19:30:21.714500884Z" level=info msg="StartContainer for \"4548bc0b70ce362c76ccf6d180d6070875af8b2e7dda797d2f9bc946dd02260e\"" Feb 13 19:30:21.720034 containerd[1511]: time="2025-02-13T19:30:21.719989333Z" level=info msg="CreateContainer within sandbox \"d7a00b34b42b04a086bfc7630b56c15b3919524370364b3bb01fba1c6b364029\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4dea0e05afc3bed5b6ad4d9db57004692e835e8996663c9419ae73e0087d964f\"" Feb 13 19:30:21.720949 containerd[1511]: time="2025-02-13T19:30:21.720797445Z" level=info msg="StartContainer for \"4dea0e05afc3bed5b6ad4d9db57004692e835e8996663c9419ae73e0087d964f\"" Feb 13 19:30:21.768742 systemd[1]: Started cri-containerd-4548bc0b70ce362c76ccf6d180d6070875af8b2e7dda797d2f9bc946dd02260e.scope - libcontainer container 4548bc0b70ce362c76ccf6d180d6070875af8b2e7dda797d2f9bc946dd02260e. Feb 13 19:30:21.771171 kubelet[2318]: E0213 19:30:21.771128 2318 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.10:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:30:21.786104 systemd[1]: Started cri-containerd-02d021158427dc6bdda0b628987164735fb19066c30b4f743cc33d7e2d2442ee.scope - libcontainer container 02d021158427dc6bdda0b628987164735fb19066c30b4f743cc33d7e2d2442ee. Feb 13 19:30:21.802075 systemd[1]: Started cri-containerd-4dea0e05afc3bed5b6ad4d9db57004692e835e8996663c9419ae73e0087d964f.scope - libcontainer container 4dea0e05afc3bed5b6ad4d9db57004692e835e8996663c9419ae73e0087d964f. 
Feb 13 19:30:21.906277 containerd[1511]: time="2025-02-13T19:30:21.906225232Z" level=info msg="StartContainer for \"02d021158427dc6bdda0b628987164735fb19066c30b4f743cc33d7e2d2442ee\" returns successfully" Feb 13 19:30:21.927774 containerd[1511]: time="2025-02-13T19:30:21.927139276Z" level=info msg="StartContainer for \"4dea0e05afc3bed5b6ad4d9db57004692e835e8996663c9419ae73e0087d964f\" returns successfully" Feb 13 19:30:21.942237 containerd[1511]: time="2025-02-13T19:30:21.942124213Z" level=info msg="StartContainer for \"4548bc0b70ce362c76ccf6d180d6070875af8b2e7dda797d2f9bc946dd02260e\" returns successfully" Feb 13 19:30:22.841868 kubelet[2318]: E0213 19:30:22.839807 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:22.844051 kubelet[2318]: E0213 19:30:22.843117 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:22.844051 kubelet[2318]: E0213 19:30:22.843311 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:23.293589 kubelet[2318]: I0213 19:30:23.293546 2318 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:23.846340 kubelet[2318]: E0213 19:30:23.846295 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:23.848595 kubelet[2318]: E0213 19:30:23.848558 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:23.849232 kubelet[2318]: E0213 19:30:23.849202 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:24.846687 kubelet[2318]: E0213 19:30:24.846643 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:24.849027 kubelet[2318]: E0213 19:30:24.848990 2318 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.011852 kubelet[2318]: E0213 19:30:25.011778 2318 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" 
node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.152956 kubelet[2318]: I0213 19:30:25.152910 2318 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.166770 kubelet[2318]: I0213 19:30:25.166596 2318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.172873 kubelet[2318]: E0213 19:30:25.172722 2318 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal.1823db50774d8c9c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,UID:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 19:30:19.74058102 +0000 UTC m=+0.595458921,LastTimestamp:2025-02-13 19:30:19.74058102 +0000 UTC m=+0.595458921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,}" Feb 13 19:30:25.191246 kubelet[2318]: E0213 19:30:25.191194 2318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.191246 kubelet[2318]: I0213 19:30:25.191245 2318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.196554 kubelet[2318]: E0213 19:30:25.196510 2318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.196554 kubelet[2318]: I0213 19:30:25.196551 2318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.200886 kubelet[2318]: E0213 19:30:25.200842 2318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.237219 kubelet[2318]: E0213 19:30:25.236603 2318 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal.1823db50784af897 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,UID:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 19:30:19.757189271 +0000 UTC m=+0.612067174,LastTimestamp:2025-02-13 19:30:19.757189271 +0000 UTC m=+0.612067174,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,}" Feb 13 19:30:25.296811 kubelet[2318]: E0213 19:30:25.296448 2318 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal.1823db507b963824 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,UID:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 19:30:19.812452388 +0000 UTC m=+0.667330292,LastTimestamp:2025-02-13 19:30:19.812452388 +0000 UTC m=+0.667330292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal,}" Feb 13 19:30:25.738042 kubelet[2318]: I0213 19:30:25.737983 2318 apiserver.go:52] "Watching apiserver" Feb 13 19:30:25.767526 kubelet[2318]: I0213 19:30:25.767463 2318 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:30:25.842382 kubelet[2318]: I0213 19:30:25.842333 2318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.844797 kubelet[2318]: E0213 19:30:25.844759 2318 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.914467 kubelet[2318]: I0213 19:30:25.912728 2318 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:25.918691 kubelet[2318]: W0213 19:30:25.918643 2318 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:30:27.138446 systemd[1]: Reload requested from client PID 2588 ('systemctl') (unit session-9.scope)... Feb 13 19:30:27.138471 systemd[1]: Reloading... Feb 13 19:30:27.295948 zram_generator::config[2642]: No configuration found. 
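The repeated "no PriorityClass with name system-node-critical was found" failures above are a startup-ordering artifact: the kubelet tries to create mirror pods for the static control-plane pods before the freshly started API server has finished bootstrapping its built-in priority classes, and it simply retries until the class exists. For reference, the built-in class is equivalent to the following; the value is the upstream default and is shown for illustration.

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: system-node-critical
    value: 2000001000   # upstream built-in; higher than system-cluster-critical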
Feb 13 19:30:27.427505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:30:27.596417 systemd[1]: Reloading finished in 457 ms. Feb 13 19:30:27.630731 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:27.646632 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:30:27.647204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:27.647421 systemd[1]: kubelet.service: Consumed 1.108s CPU time, 125.3M memory peak. Feb 13 19:30:27.654305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:30:27.963126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:30:27.973448 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:30:28.046888 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:28.046888 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:30:28.046888 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:30:28.046888 kubelet[2681]: I0213 19:30:28.045676 2681 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:30:28.053879 kubelet[2681]: I0213 19:30:28.053816 2681 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:30:28.053879 kubelet[2681]: I0213 19:30:28.053862 2681 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:30:28.054233 kubelet[2681]: I0213 19:30:28.054193 2681 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:30:28.060866 kubelet[2681]: I0213 19:30:28.058942 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:30:28.061783 kubelet[2681]: I0213 19:30:28.061749 2681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:30:28.065689 kubelet[2681]: E0213 19:30:28.065638 2681 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:30:28.065689 kubelet[2681]: I0213 19:30:28.065675 2681 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:30:28.069067 kubelet[2681]: I0213 19:30:28.069040 2681 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:30:28.069437 kubelet[2681]: I0213 19:30:28.069380 2681 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:30:28.069678 kubelet[2681]: I0213 19:30:28.069424 2681 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:30:28.069890 kubelet[2681]: I0213 19:30:28.069679 2681 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:30:28.069890 kubelet[2681]: I0213 19:30:28.069696 2681 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:30:28.069890 kubelet[2681]: I0213 19:30:28.069771 2681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:28.070072 kubelet[2681]: I0213 19:30:28.070040 2681 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:30:28.071536 kubelet[2681]: I0213 19:30:28.070523 2681 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:30:28.071536 kubelet[2681]: I0213 19:30:28.070560 2681 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:30:28.071536 kubelet[2681]: I0213 19:30:28.070574 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:30:28.072200 kubelet[2681]: I0213 19:30:28.072180 2681 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:30:28.073189 kubelet[2681]: I0213 19:30:28.073166 2681 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:30:28.073910 kubelet[2681]: I0213 19:30:28.073889 2681 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:30:28.074051 kubelet[2681]: I0213 19:30:28.074038 2681 server.go:1287] "Started kubelet" Feb 13 19:30:28.087119 kubelet[2681]: I0213 19:30:28.087093 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:30:28.094083 kubelet[2681]: I0213 
19:30:28.094015 2681 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:30:28.096512 kubelet[2681]: I0213 19:30:28.096442 2681 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:30:28.099542 kubelet[2681]: I0213 19:30:28.099520 2681 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:30:28.100196 kubelet[2681]: E0213 19:30:28.100156 2681 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" not found" Feb 13 19:30:28.101924 kubelet[2681]: I0213 19:30:28.101905 2681 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:30:28.102204 kubelet[2681]: I0213 19:30:28.102191 2681 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:30:28.107231 kubelet[2681]: I0213 19:30:28.107109 2681 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:30:28.107231 kubelet[2681]: I0213 19:30:28.107224 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:30:28.114268 kubelet[2681]: I0213 19:30:28.111777 2681 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:30:28.117479 kubelet[2681]: I0213 19:30:28.116107 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:30:28.117479 kubelet[2681]: I0213 19:30:28.116403 2681 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:30:28.118971 kubelet[2681]: I0213 19:30:28.118945 2681 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:30:28.126861 kubelet[2681]: I0213 19:30:28.123527 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:30:28.126861 kubelet[2681]: I0213 19:30:28.125441 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:30:28.126861 kubelet[2681]: I0213 19:30:28.125475 2681 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:30:28.126861 kubelet[2681]: I0213 19:30:28.125513 2681 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
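The deprecation warnings at startup (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) all point at the same fix, which the log itself links: move the flags into the kubelet config file. A sketch of a KubeletConfiguration mirroring the nodeConfig logged above; the file path and runtime socket are assumptions, since Flatcar's kubelet unit may point elsewhere:

    # Hypothetical location; adjust to wherever the unit passes --config
    cat > /etc/kubernetes/kubelet-config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                 # matches "CgroupDriver":"systemd" above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    evictionHard:                         # mirrors HardEvictionThresholds above
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"
    EOF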
Feb 13 19:30:28.126861 kubelet[2681]: I0213 19:30:28.125524 2681 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:30:28.126861 kubelet[2681]: E0213 19:30:28.125588 2681 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:30:28.171851 sudo[2709]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:30:28.173332 sudo[2709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:30:28.226147 kubelet[2681]: E0213 19:30:28.226022 2681 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297132 2681 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297158 2681 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297184 2681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297458 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297477 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297516 2681 policy_none.go:49] "None policy: Start" Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297531 2681 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297549 2681 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:30:28.297907 kubelet[2681]: I0213 19:30:28.297717 2681 state_mem.go:75] "Updated machine memory state" Feb 13 19:30:28.310516 kubelet[2681]: I0213 19:30:28.310470 2681 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:30:28.311872 kubelet[2681]: I0213 19:30:28.310717 2681 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:30:28.311872 kubelet[2681]: I0213 19:30:28.310743 2681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:30:28.314693 kubelet[2681]: I0213 19:30:28.314658 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:30:28.316542 kubelet[2681]: E0213 19:30:28.316277 2681 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:30:28.427502 kubelet[2681]: I0213 19:30:28.427447 2681 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.429522 kubelet[2681]: I0213 19:30:28.429336 2681 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.430027 kubelet[2681]: I0213 19:30:28.429800 2681 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.435413 kubelet[2681]: I0213 19:30:28.435377 2681 kubelet_node_status.go:76] "Attempting to register node" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.443057 kubelet[2681]: W0213 19:30:28.442901 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:30:28.443057 kubelet[2681]: E0213 19:30:28.442970 2681 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.456293 kubelet[2681]: W0213 19:30:28.456215 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:30:28.456771 kubelet[2681]: W0213 19:30:28.456554 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:30:28.468609 kubelet[2681]: I0213 19:30:28.468083 2681 kubelet_node_status.go:125] "Node was previously registered" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.468609 kubelet[2681]: I0213 19:30:28.468182 2681 kubelet_node_status.go:79] "Successfully registered node" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.504494 kubelet[2681]: I0213 19:30:28.503767 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.504494 kubelet[2681]: I0213 19:30:28.503964 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.504494 kubelet[2681]: I0213 19:30:28.504005 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/01befce2951f1d33e2009b135ecbe279-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"01befce2951f1d33e2009b135ecbe279\") " pod="kube-system/kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.504494 kubelet[2681]: I0213 19:30:28.504052 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fba431dc4f4f8fd6f79fccd6e65fa1cb-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"fba431dc4f4f8fd6f79fccd6e65fa1cb\") " pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.504806 kubelet[2681]: I0213 19:30:28.504093 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fba431dc4f4f8fd6f79fccd6e65fa1cb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"fba431dc4f4f8fd6f79fccd6e65fa1cb\") " pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.504806 kubelet[2681]: I0213 19:30:28.504129 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.504953 kubelet[2681]: I0213 19:30:28.504175 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.505013 kubelet[2681]: I0213 19:30:28.504944 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfd46dccffd1ee8894d539f39ad98cd4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"bfd46dccffd1ee8894d539f39ad98cd4\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.505013 kubelet[2681]: I0213 19:30:28.504984 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fba431dc4f4f8fd6f79fccd6e65fa1cb-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" (UID: \"fba431dc4f4f8fd6f79fccd6e65fa1cb\") " pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:28.961045 update_engine[1495]: I20250213 19:30:28.960878 1495 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:30:29.041824 sudo[2709]: pam_unix(sudo:session): session closed for user root Feb 13 19:30:29.072646 kubelet[2681]: I0213 19:30:29.072543 2681 apiserver.go:52] "Watching apiserver" Feb 13 19:30:29.091690 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2731) Feb 13 19:30:29.106347 kubelet[2681]: I0213 19:30:29.104396 2681 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:30:29.191594 kubelet[2681]: I0213 19:30:29.191547 2681 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:29.222021 kubelet[2681]: W0213 19:30:29.217977 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:30:29.222021 kubelet[2681]: E0213 19:30:29.218058 2681 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" Feb 13 19:30:29.301895 kubelet[2681]: I0213 19:30:29.300181 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" podStartSLOduration=1.300157034 podStartE2EDuration="1.300157034s" podCreationTimestamp="2025-02-13 19:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:29.269799967 +0000 UTC m=+1.290291965" watchObservedRunningTime="2025-02-13 19:30:29.300157034 +0000 UTC m=+1.320649034" Feb 13 19:30:29.346607 kubelet[2681]: I0213 19:30:29.346541 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" podStartSLOduration=4.346498889 podStartE2EDuration="4.346498889s" podCreationTimestamp="2025-02-13 19:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:29.307124376 +0000 UTC m=+1.327616364" watchObservedRunningTime="2025-02-13 19:30:29.346498889 +0000 UTC m=+1.366990884" Feb 13 19:30:29.364862 kubelet[2681]: I0213 19:30:29.364399 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" podStartSLOduration=1.364373112 podStartE2EDuration="1.364373112s" podCreationTimestamp="2025-02-13 19:30:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:29.349560774 +0000 UTC m=+1.370052753" watchObservedRunningTime="2025-02-13 19:30:29.364373112 +0000 UTC m=+1.384865109" Feb 13 19:30:29.379971 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2727) Feb 13 19:30:31.183597 sudo[1789]: pam_unix(sudo:session): session closed for user root Feb 13 19:30:31.225810 sshd[1788]: Connection closed by 147.75.109.163 port 57014 Feb 13 19:30:31.226715 sshd-session[1786]: pam_unix(sshd:session): session closed for user core Feb 13 19:30:31.231962 systemd[1]: 
sshd@8-10.128.0.10:22-147.75.109.163:57014.service: Deactivated successfully. Feb 13 19:30:31.235932 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:30:31.236464 systemd[1]: session-9.scope: Consumed 6.989s CPU time, 263.8M memory peak. Feb 13 19:30:31.239499 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:30:31.241378 systemd-logind[1487]: Removed session 9. Feb 13 19:30:32.842699 kubelet[2681]: I0213 19:30:32.842637 2681 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:30:32.843311 containerd[1511]: time="2025-02-13T19:30:32.843108508Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:30:32.843743 kubelet[2681]: I0213 19:30:32.843338 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:30:33.597348 kubelet[2681]: I0213 19:30:33.597289 2681 status_manager.go:890] "Failed to get status for pod" podUID="c905d67a-23b0-441c-9f70-16c9b5de1dfe" pod="kube-system/kube-proxy-nx77j" err="pods \"kube-proxy-nx77j\" is forbidden: User \"system:node:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal' and this object" Feb 13 19:30:33.599920 systemd[1]: Created slice kubepods-besteffort-podc905d67a_23b0_441c_9f70_16c9b5de1dfe.slice - libcontainer container kubepods-besteffort-podc905d67a_23b0_441c_9f70_16c9b5de1dfe.slice. Feb 13 19:30:33.628178 systemd[1]: Created slice kubepods-burstable-podd6032467_493d_4afd_947c_afe055c81d94.slice - libcontainer container kubepods-burstable-podd6032467_493d_4afd_947c_afe055c81d94.slice. 
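The containerd message "No cni config template is specified, wait for other system components to drop the config" explains why pod networking is still pending: containerd watches /etc/cni/net.d, and nothing lands there until the Cilium agent (unpacked from cilium.tar.gz by the sudo session above) writes its config. The "forbidden ... no relationship found between node ... and this object" status errors are NodeRestriction admission working as intended, since a kubelet may not read a pod until the scheduler binds it to that node; these also clear on their own. Roughly what Cilium drops once ready (file name and fields vary by version; this is a sketch, not the node's actual file):

    cat /etc/cni/net.d/05-cilium.conf
    {
      "cniVersion": "0.3.1",
      "name": "cilium",
      "type": "cilium-cni"
    }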
Feb 13 19:30:33.647697 kubelet[2681]: I0213 19:30:33.647031 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6032467-493d-4afd-947c-afe055c81d94-clustermesh-secrets\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.647697 kubelet[2681]: I0213 19:30:33.647095 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-xtables-lock\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.647697 kubelet[2681]: I0213 19:30:33.647123 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-host-proc-sys-kernel\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.647697 kubelet[2681]: I0213 19:30:33.647148 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-etc-cni-netd\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.647697 kubelet[2681]: I0213 19:30:33.647173 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c905d67a-23b0-441c-9f70-16c9b5de1dfe-kube-proxy\") pod \"kube-proxy-nx77j\" (UID: \"c905d67a-23b0-441c-9f70-16c9b5de1dfe\") " pod="kube-system/kube-proxy-nx77j" Feb 13 19:30:33.647697 kubelet[2681]: I0213 19:30:33.647198 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cilium-cgroup\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648166 kubelet[2681]: I0213 19:30:33.647221 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6032467-493d-4afd-947c-afe055c81d94-hubble-tls\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648166 kubelet[2681]: I0213 19:30:33.647248 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6rgn\" (UniqueName: \"kubernetes.io/projected/c905d67a-23b0-441c-9f70-16c9b5de1dfe-kube-api-access-q6rgn\") pod \"kube-proxy-nx77j\" (UID: \"c905d67a-23b0-441c-9f70-16c9b5de1dfe\") " pod="kube-system/kube-proxy-nx77j" Feb 13 19:30:33.648166 kubelet[2681]: I0213 19:30:33.647280 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cilium-run\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648166 kubelet[2681]: I0213 19:30:33.647306 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c905d67a-23b0-441c-9f70-16c9b5de1dfe-xtables-lock\") pod \"kube-proxy-nx77j\" (UID: \"c905d67a-23b0-441c-9f70-16c9b5de1dfe\") " pod="kube-system/kube-proxy-nx77j" Feb 13 19:30:33.648166 kubelet[2681]: I0213 19:30:33.647330 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-hostproc\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648166 kubelet[2681]: I0213 19:30:33.647356 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9ww5\" (UniqueName: \"kubernetes.io/projected/d6032467-493d-4afd-947c-afe055c81d94-kube-api-access-k9ww5\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648464 kubelet[2681]: I0213 19:30:33.647381 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6032467-493d-4afd-947c-afe055c81d94-cilium-config-path\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648464 kubelet[2681]: I0213 19:30:33.647405 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c905d67a-23b0-441c-9f70-16c9b5de1dfe-lib-modules\") pod \"kube-proxy-nx77j\" (UID: \"c905d67a-23b0-441c-9f70-16c9b5de1dfe\") " pod="kube-system/kube-proxy-nx77j" Feb 13 19:30:33.648464 kubelet[2681]: I0213 19:30:33.647430 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-bpf-maps\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648464 kubelet[2681]: I0213 19:30:33.647471 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cni-path\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648464 kubelet[2681]: I0213 19:30:33.647499 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-lib-modules\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.648464 kubelet[2681]: I0213 19:30:33.647528 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-host-proc-sys-net\") pod \"cilium-nnfnw\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") " pod="kube-system/cilium-nnfnw" Feb 13 19:30:33.913371 containerd[1511]: time="2025-02-13T19:30:33.913125586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nx77j,Uid:c905d67a-23b0-441c-9f70-16c9b5de1dfe,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:33.940184 containerd[1511]: time="2025-02-13T19:30:33.936711114Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-nnfnw,Uid:d6032467-493d-4afd-947c-afe055c81d94,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:33.978001 containerd[1511]: time="2025-02-13T19:30:33.976574597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:33.978001 containerd[1511]: time="2025-02-13T19:30:33.976657156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:33.978001 containerd[1511]: time="2025-02-13T19:30:33.976683807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:33.978001 containerd[1511]: time="2025-02-13T19:30:33.976810880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:34.017928 systemd[1]: Created slice kubepods-besteffort-podb1eddee4_cfa9_4516_a034_aee85d14f917.slice - libcontainer container kubepods-besteffort-podb1eddee4_cfa9_4516_a034_aee85d14f917.slice. Feb 13 19:30:34.043225 systemd[1]: Started cri-containerd-3a7ca3714f028b351a4aae330de8852a0d6603f7a541ecd66480402f07e71aa0.scope - libcontainer container 3a7ca3714f028b351a4aae330de8852a0d6603f7a541ecd66480402f07e71aa0. Feb 13 19:30:34.048383 containerd[1511]: time="2025-02-13T19:30:34.048037843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:34.048854 containerd[1511]: time="2025-02-13T19:30:34.048302500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:34.051716 kubelet[2681]: I0213 19:30:34.051497 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1eddee4-cfa9-4516-a034-aee85d14f917-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s8b7l\" (UID: \"b1eddee4-cfa9-4516-a034-aee85d14f917\") " pod="kube-system/cilium-operator-6c4d7847fc-s8b7l" Feb 13 19:30:34.051716 kubelet[2681]: I0213 19:30:34.051598 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6ms2\" (UniqueName: \"kubernetes.io/projected/b1eddee4-cfa9-4516-a034-aee85d14f917-kube-api-access-p6ms2\") pod \"cilium-operator-6c4d7847fc-s8b7l\" (UID: \"b1eddee4-cfa9-4516-a034-aee85d14f917\") " pod="kube-system/cilium-operator-6c4d7847fc-s8b7l" Feb 13 19:30:34.053177 containerd[1511]: time="2025-02-13T19:30:34.048813039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:34.053177 containerd[1511]: time="2025-02-13T19:30:34.051182314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:34.082952 systemd[1]: Started cri-containerd-5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d.scope - libcontainer container 5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d. 
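Each "Started cri-containerd-<id>.scope" entry above is a containerd shim running either a pod sandbox (the pause container) or an application container in its own systemd scope. Assuming crictl on the node is configured against this containerd socket, the objects behind those long IDs can be inspected directly:

    # Sandbox created by the RunPodSandbox call above; -q prints just the ID
    SANDBOX_ID=$(crictl pods --name kube-proxy-nx77j -q)
    crictl ps -a --pod "$SANDBOX_ID"    # containers running inside that sandbox
    crictl inspectp "$SANDBOX_ID"       # full sandbox config and state as JSON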
Feb 13 19:30:34.099112 containerd[1511]: time="2025-02-13T19:30:34.099051353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nx77j,Uid:c905d67a-23b0-441c-9f70-16c9b5de1dfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a7ca3714f028b351a4aae330de8852a0d6603f7a541ecd66480402f07e71aa0\"" Feb 13 19:30:34.105130 containerd[1511]: time="2025-02-13T19:30:34.105070612Z" level=info msg="CreateContainer within sandbox \"3a7ca3714f028b351a4aae330de8852a0d6603f7a541ecd66480402f07e71aa0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:30:34.129613 containerd[1511]: time="2025-02-13T19:30:34.129398182Z" level=info msg="CreateContainer within sandbox \"3a7ca3714f028b351a4aae330de8852a0d6603f7a541ecd66480402f07e71aa0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"38b2e06a39f9d92c27c9713e9553881541553fceb2f2f7e73f327dd1aa0173da\"" Feb 13 19:30:34.132720 containerd[1511]: time="2025-02-13T19:30:34.132624627Z" level=info msg="StartContainer for \"38b2e06a39f9d92c27c9713e9553881541553fceb2f2f7e73f327dd1aa0173da\"" Feb 13 19:30:34.143309 containerd[1511]: time="2025-02-13T19:30:34.143267802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nnfnw,Uid:d6032467-493d-4afd-947c-afe055c81d94,Namespace:kube-system,Attempt:0,} returns sandbox id \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\"" Feb 13 19:30:34.146767 containerd[1511]: time="2025-02-13T19:30:34.146674555Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:30:34.192095 systemd[1]: Started cri-containerd-38b2e06a39f9d92c27c9713e9553881541553fceb2f2f7e73f327dd1aa0173da.scope - libcontainer container 38b2e06a39f9d92c27c9713e9553881541553fceb2f2f7e73f327dd1aa0173da. Feb 13 19:30:34.244669 containerd[1511]: time="2025-02-13T19:30:34.244327121Z" level=info msg="StartContainer for \"38b2e06a39f9d92c27c9713e9553881541553fceb2f2f7e73f327dd1aa0173da\" returns successfully" Feb 13 19:30:34.325061 containerd[1511]: time="2025-02-13T19:30:34.325005950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s8b7l,Uid:b1eddee4-cfa9-4516-a034-aee85d14f917,Namespace:kube-system,Attempt:0,}" Feb 13 19:30:34.364516 containerd[1511]: time="2025-02-13T19:30:34.364154155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:30:34.364516 containerd[1511]: time="2025-02-13T19:30:34.364394958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:30:34.365611 containerd[1511]: time="2025-02-13T19:30:34.364489413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:34.366301 containerd[1511]: time="2025-02-13T19:30:34.365992905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:30:34.403139 systemd[1]: Started cri-containerd-ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537.scope - libcontainer container ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537. 
Feb 13 19:30:34.495695 containerd[1511]: time="2025-02-13T19:30:34.494702155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s8b7l,Uid:b1eddee4-cfa9-4516-a034-aee85d14f917,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537\"" Feb 13 19:30:35.229119 kubelet[2681]: I0213 19:30:35.229043 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nx77j" podStartSLOduration=2.228999197 podStartE2EDuration="2.228999197s" podCreationTimestamp="2025-02-13 19:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:30:35.228902122 +0000 UTC m=+7.249394118" watchObservedRunningTime="2025-02-13 19:30:35.228999197 +0000 UTC m=+7.249491197" Feb 13 19:30:41.564819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount910059092.mount: Deactivated successfully. Feb 13 19:30:44.321449 containerd[1511]: time="2025-02-13T19:30:44.321359079Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:44.322908 containerd[1511]: time="2025-02-13T19:30:44.322820157Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:30:44.324325 containerd[1511]: time="2025-02-13T19:30:44.324257281Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:44.326414 containerd[1511]: time="2025-02-13T19:30:44.326374745Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.179652178s" Feb 13 19:30:44.326679 containerd[1511]: time="2025-02-13T19:30:44.326552158Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:30:44.328801 containerd[1511]: time="2025-02-13T19:30:44.328203203Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:30:44.330128 containerd[1511]: time="2025-02-13T19:30:44.330094996Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:30:44.352333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447363040.mount: Deactivated successfully. 
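From here the log steps through Cilium's init containers in order: mount-cgroup (created above), then apply-sysctl-overwrites, mount-bpf-fs, and clean-cilium-state, before the long-running cilium-agent starts. For each short-lived one, the pattern of "StartContainer ... returns successfully", "scope: Deactivated successfully", and a later "shim disconnected" cleanup is the normal run-to-completion lifecycle, not a crash. Progress can also be watched from the API side, assuming kubectl access:

    kubectl -n kube-system get pod cilium-nnfnw \
      -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'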
Feb 13 19:30:44.354204 containerd[1511]: time="2025-02-13T19:30:44.354158577Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\"" Feb 13 19:30:44.355932 containerd[1511]: time="2025-02-13T19:30:44.355751351Z" level=info msg="StartContainer for \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\"" Feb 13 19:30:44.405039 systemd[1]: Started cri-containerd-cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa.scope - libcontainer container cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa. Feb 13 19:30:44.442591 containerd[1511]: time="2025-02-13T19:30:44.442541041Z" level=info msg="StartContainer for \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\" returns successfully" Feb 13 19:30:44.462013 systemd[1]: cri-containerd-cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa.scope: Deactivated successfully. Feb 13 19:30:45.346625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa-rootfs.mount: Deactivated successfully. Feb 13 19:30:46.507417 containerd[1511]: time="2025-02-13T19:30:46.507311789Z" level=info msg="shim disconnected" id=cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa namespace=k8s.io Feb 13 19:30:46.507417 containerd[1511]: time="2025-02-13T19:30:46.507414679Z" level=warning msg="cleaning up after shim disconnected" id=cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa namespace=k8s.io Feb 13 19:30:46.508497 containerd[1511]: time="2025-02-13T19:30:46.507450064Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:30:47.266606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3003553324.mount: Deactivated successfully. Feb 13 19:30:47.274873 containerd[1511]: time="2025-02-13T19:30:47.274733413Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:30:47.313261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733594205.mount: Deactivated successfully. Feb 13 19:30:47.320731 containerd[1511]: time="2025-02-13T19:30:47.320474590Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\"" Feb 13 19:30:47.321557 containerd[1511]: time="2025-02-13T19:30:47.321514618Z" level=info msg="StartContainer for \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\"" Feb 13 19:30:47.398074 systemd[1]: Started cri-containerd-d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357.scope - libcontainer container d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357. Feb 13 19:30:47.465416 containerd[1511]: time="2025-02-13T19:30:47.465332188Z" level=info msg="StartContainer for \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\" returns successfully" Feb 13 19:30:47.489076 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:30:47.489502 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
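apply-sysctl-overwrites adjusts kernel parameters the Cilium datapath depends on, which is why the host's systemd-sysctl.service is restarted immediately afterwards to keep its own settings in force. Exactly which parameters are touched varies by Cilium version; reverse-path filtering is the classic example, so a spot check might look like:

    sysctl net.ipv4.conf.all.rp_filter       # Cilium generally wants strict rp_filter relaxed
    systemctl status systemd-sysctl.service  # confirm the host re-applied its own config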
Feb 13 19:30:47.490310 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:30:47.496369 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:30:47.498343 systemd[1]: cri-containerd-d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357.scope: Deactivated successfully. Feb 13 19:30:47.539322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:30:47.560785 containerd[1511]: time="2025-02-13T19:30:47.560466206Z" level=info msg="shim disconnected" id=d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357 namespace=k8s.io Feb 13 19:30:47.560785 containerd[1511]: time="2025-02-13T19:30:47.560541708Z" level=warning msg="cleaning up after shim disconnected" id=d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357 namespace=k8s.io Feb 13 19:30:47.560785 containerd[1511]: time="2025-02-13T19:30:47.560557177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:30:48.249376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357-rootfs.mount: Deactivated successfully. Feb 13 19:30:48.275457 containerd[1511]: time="2025-02-13T19:30:48.274687394Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:30:48.326540 containerd[1511]: time="2025-02-13T19:30:48.326359843Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\"" Feb 13 19:30:48.332319 containerd[1511]: time="2025-02-13T19:30:48.332274694Z" level=info msg="StartContainer for \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\"" Feb 13 19:30:48.409110 systemd[1]: Started cri-containerd-ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e.scope - libcontainer container ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e. Feb 13 19:30:48.497724 containerd[1511]: time="2025-02-13T19:30:48.497345650Z" level=info msg="StartContainer for \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\" returns successfully" Feb 13 19:30:48.504046 systemd[1]: cri-containerd-ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e.scope: Deactivated successfully. 
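mount-bpf-fs exists to ensure the BPF filesystem is mounted at /sys/fs/bpf, so pinned eBPF maps survive agent restarts. Functionally it amounts to something like:

    # Idempotent: only mounts if nothing is mounted there yet
    mountpoint -q /sys/fs/bpf || mount bpffs /sys/fs/bpf -t bpf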
Feb 13 19:30:48.702966 containerd[1511]: time="2025-02-13T19:30:48.702793542Z" level=info msg="shim disconnected" id=ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e namespace=k8s.io Feb 13 19:30:48.702966 containerd[1511]: time="2025-02-13T19:30:48.702885263Z" level=warning msg="cleaning up after shim disconnected" id=ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e namespace=k8s.io Feb 13 19:30:48.702966 containerd[1511]: time="2025-02-13T19:30:48.702900882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:30:48.738994 containerd[1511]: time="2025-02-13T19:30:48.738934331Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:48.740099 containerd[1511]: time="2025-02-13T19:30:48.740034448Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:30:48.741118 containerd[1511]: time="2025-02-13T19:30:48.741019848Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:30:48.743420 containerd[1511]: time="2025-02-13T19:30:48.743222706Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.414973978s" Feb 13 19:30:48.743420 containerd[1511]: time="2025-02-13T19:30:48.743270234Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:30:48.746509 containerd[1511]: time="2025-02-13T19:30:48.746474171Z" level=info msg="CreateContainer within sandbox \"ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:30:48.766986 containerd[1511]: time="2025-02-13T19:30:48.764660034Z" level=info msg="CreateContainer within sandbox \"ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\"" Feb 13 19:30:48.766986 containerd[1511]: time="2025-02-13T19:30:48.765402748Z" level=info msg="StartContainer for \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\"" Feb 13 19:30:48.801055 systemd[1]: Started cri-containerd-12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612.scope - libcontainer container 12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612. 
Feb 13 19:30:48.837220 containerd[1511]: time="2025-02-13T19:30:48.837169387Z" level=info msg="StartContainer for \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\" returns successfully" Feb 13 19:30:49.255829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e-rootfs.mount: Deactivated successfully. Feb 13 19:30:49.281891 containerd[1511]: time="2025-02-13T19:30:49.281825588Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:30:49.302059 containerd[1511]: time="2025-02-13T19:30:49.302000279Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\"" Feb 13 19:30:49.303123 containerd[1511]: time="2025-02-13T19:30:49.303084544Z" level=info msg="StartContainer for \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\"" Feb 13 19:30:49.385067 systemd[1]: Started cri-containerd-e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12.scope - libcontainer container e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12. Feb 13 19:30:49.523141 containerd[1511]: time="2025-02-13T19:30:49.522975563Z" level=info msg="StartContainer for \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\" returns successfully" Feb 13 19:30:49.526462 systemd[1]: cri-containerd-e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12.scope: Deactivated successfully. Feb 13 19:30:49.592512 containerd[1511]: time="2025-02-13T19:30:49.592433358Z" level=info msg="shim disconnected" id=e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12 namespace=k8s.io Feb 13 19:30:49.592512 containerd[1511]: time="2025-02-13T19:30:49.592509601Z" level=warning msg="cleaning up after shim disconnected" id=e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12 namespace=k8s.io Feb 13 19:30:49.592512 containerd[1511]: time="2025-02-13T19:30:49.592522581Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:30:50.246860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12-rootfs.mount: Deactivated successfully. 
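clean-cilium-state, the last init container before the agent itself, clears stale datapath state left behind by any previous agent run. That it ran to completion, rather than failed, can be confirmed on the node through CRI:

    CID=$(crictl ps -a --name clean-cilium-state -q)   # should show state Exited
    crictl inspect -o go-template --template '{{.status.exitCode}}' "$CID"   # 0 on success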
Feb 13 19:30:50.290278 containerd[1511]: time="2025-02-13T19:30:50.290084018Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:30:50.322623 kubelet[2681]: I0213 19:30:50.321709 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s8b7l" podStartSLOduration=3.073741168 podStartE2EDuration="17.321679816s" podCreationTimestamp="2025-02-13 19:30:33 +0000 UTC" firstStartedPulling="2025-02-13 19:30:34.49635527 +0000 UTC m=+6.516847258" lastFinishedPulling="2025-02-13 19:30:48.744293914 +0000 UTC m=+20.764785906" observedRunningTime="2025-02-13 19:30:49.543105879 +0000 UTC m=+21.563597877" watchObservedRunningTime="2025-02-13 19:30:50.321679816 +0000 UTC m=+22.342171816" Feb 13 19:30:50.328118 containerd[1511]: time="2025-02-13T19:30:50.327952789Z" level=info msg="CreateContainer within sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\"" Feb 13 19:30:50.330309 containerd[1511]: time="2025-02-13T19:30:50.330244103Z" level=info msg="StartContainer for \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\"" Feb 13 19:30:50.391092 systemd[1]: Started cri-containerd-a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5.scope - libcontainer container a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5. Feb 13 19:30:50.448289 containerd[1511]: time="2025-02-13T19:30:50.448237770Z" level=info msg="StartContainer for \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\" returns successfully" Feb 13 19:30:50.630874 kubelet[2681]: I0213 19:30:50.629778 2681 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:30:50.669047 kubelet[2681]: I0213 19:30:50.668995 2681 status_manager.go:890] "Failed to get status for pod" podUID="b33e672b-f082-48f8-b8a7-2fd111590c1b" pod="kube-system/coredns-668d6bf9bc-c9fm7" err="pods \"coredns-668d6bf9bc-c9fm7\" is forbidden: User \"system:node:ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal' and this object" Feb 13 19:30:50.680436 systemd[1]: Created slice kubepods-burstable-podb33e672b_f082_48f8_b8a7_2fd111590c1b.slice - libcontainer container kubepods-burstable-podb33e672b_f082_48f8_b8a7_2fd111590c1b.slice. Feb 13 19:30:50.697526 systemd[1]: Created slice kubepods-burstable-poddd9ef12e_9153_40a6_ab6a_e9f610e096a1.slice - libcontainer container kubepods-burstable-poddd9ef12e_9153_40a6_ab6a_e9f610e096a1.slice. 
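With cilium-agent running, a CNI config finally exists, so the node flips Ready ("Fast updating node status as it just became ready") and the scheduler immediately places the two pending CoreDNS pods here; the "forbidden ... no relationship found" error for coredns-668d6bf9bc-c9fm7 is the same transient NodeRestriction race seen earlier with kube-proxy. A quick confirmation from an admin machine might be:

    kubectl get node ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # expect "True"
    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide       # both coredns pods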
Feb 13 19:30:50.781671 kubelet[2681]: I0213 19:30:50.781573 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd9ef12e-9153-40a6-ab6a-e9f610e096a1-config-volume\") pod \"coredns-668d6bf9bc-xjktt\" (UID: \"dd9ef12e-9153-40a6-ab6a-e9f610e096a1\") " pod="kube-system/coredns-668d6bf9bc-xjktt"
Feb 13 19:30:50.782244 kubelet[2681]: I0213 19:30:50.782046 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b33e672b-f082-48f8-b8a7-2fd111590c1b-config-volume\") pod \"coredns-668d6bf9bc-c9fm7\" (UID: \"b33e672b-f082-48f8-b8a7-2fd111590c1b\") " pod="kube-system/coredns-668d6bf9bc-c9fm7"
Feb 13 19:30:50.782244 kubelet[2681]: I0213 19:30:50.782104 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682cj\" (UniqueName: \"kubernetes.io/projected/dd9ef12e-9153-40a6-ab6a-e9f610e096a1-kube-api-access-682cj\") pod \"coredns-668d6bf9bc-xjktt\" (UID: \"dd9ef12e-9153-40a6-ab6a-e9f610e096a1\") " pod="kube-system/coredns-668d6bf9bc-xjktt"
Feb 13 19:30:50.782244 kubelet[2681]: I0213 19:30:50.782158 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dd78\" (UniqueName: \"kubernetes.io/projected/b33e672b-f082-48f8-b8a7-2fd111590c1b-kube-api-access-6dd78\") pod \"coredns-668d6bf9bc-c9fm7\" (UID: \"b33e672b-f082-48f8-b8a7-2fd111590c1b\") " pod="kube-system/coredns-668d6bf9bc-c9fm7"
Feb 13 19:30:50.989003 containerd[1511]: time="2025-02-13T19:30:50.988420359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c9fm7,Uid:b33e672b-f082-48f8-b8a7-2fd111590c1b,Namespace:kube-system,Attempt:0,}"
Feb 13 19:30:51.004966 containerd[1511]: time="2025-02-13T19:30:51.004728889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xjktt,Uid:dd9ef12e-9153-40a6-ab6a-e9f610e096a1,Namespace:kube-system,Attempt:0,}"
Feb 13 19:30:51.322903 kubelet[2681]: I0213 19:30:51.322685 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nnfnw" podStartSLOduration=8.139821921 podStartE2EDuration="18.322658939s" podCreationTimestamp="2025-02-13 19:30:33 +0000 UTC" firstStartedPulling="2025-02-13 19:30:34.145135516 +0000 UTC m=+6.165627505" lastFinishedPulling="2025-02-13 19:30:44.327972532 +0000 UTC m=+16.348464523" observedRunningTime="2025-02-13 19:30:51.320039416 +0000 UTC m=+23.340531414" watchObservedRunningTime="2025-02-13 19:30:51.322658939 +0000 UTC m=+23.343150937"
Feb 13 19:30:52.985053 systemd-networkd[1402]: cilium_host: Link UP
Feb 13 19:30:52.989623 systemd-networkd[1402]: cilium_net: Link UP
Feb 13 19:30:52.992392 systemd-networkd[1402]: cilium_net: Gained carrier
Feb 13 19:30:52.992969 systemd-networkd[1402]: cilium_host: Gained carrier
Feb 13 19:30:53.138958 systemd-networkd[1402]: cilium_vxlan: Link UP
Feb 13 19:30:53.138976 systemd-networkd[1402]: cilium_vxlan: Gained carrier
Feb 13 19:30:53.276309 systemd-networkd[1402]: cilium_net: Gained IPv6LL
Feb 13 19:30:53.420951 kernel: NET: Registered PF_ALG protocol family
Feb 13 19:30:53.613008 systemd-networkd[1402]: cilium_host: Gained IPv6LL
Feb 13 19:30:54.279702 systemd-networkd[1402]: lxc_health: Link UP
Feb 13 19:30:54.289607 systemd-networkd[1402]: lxc_health: Gained carrier
Feb 13 19:30:54.444624 systemd-networkd[1402]: cilium_vxlan: Gained IPv6LL
Feb 13 19:30:54.580215 kernel: eth0: renamed from tmpcee69
Feb 13 19:30:54.605509 systemd-networkd[1402]: lxcc039fe5868c5: Link UP
Feb 13 19:30:54.610446 systemd-networkd[1402]: lxcc039fe5868c5: Gained carrier
Feb 13 19:30:54.635970 kernel: eth0: renamed from tmpef829
Feb 13 19:30:54.632908 systemd-networkd[1402]: lxc1b52934a15fe: Link UP
Feb 13 19:30:54.642503 systemd-networkd[1402]: lxc1b52934a15fe: Gained carrier
Feb 13 19:30:56.043998 systemd-networkd[1402]: lxc_health: Gained IPv6LL
Feb 13 19:30:56.108214 systemd-networkd[1402]: lxcc039fe5868c5: Gained IPv6LL
Feb 13 19:30:56.365065 systemd-networkd[1402]: lxc1b52934a15fe: Gained IPv6LL
Feb 13 19:30:58.511041 ntpd[1474]: Listen normally on 8 cilium_host 192.168.0.146:123
Feb 13 19:30:58.512315 ntpd[1474]: 13 Feb 19:30:58 ntpd[1474]: Listen normally on 8 cilium_host 192.168.0.146:123
Feb 13 19:30:58.512315 ntpd[1474]: 13 Feb 19:30:58 ntpd[1474]: Listen normally on 9 cilium_net [fe80::58be:50ff:feaa:5298%4]:123
Feb 13 19:30:58.512315 ntpd[1474]: 13 Feb 19:30:58 ntpd[1474]: Listen normally on 10 cilium_host [fe80::e004:4dff:fe3b:6a57%5]:123
Feb 13 19:30:58.512315 ntpd[1474]: 13 Feb 19:30:58 ntpd[1474]: Listen normally on 11 cilium_vxlan [fe80::9048:1dff:fe51:c0fc%6]:123
Feb 13 19:30:58.512315 ntpd[1474]: 13 Feb 19:30:58 ntpd[1474]: Listen normally on 12 lxc_health [fe80::1c85:e2ff:fea6:dd36%8]:123
Feb 13 19:30:58.512315 ntpd[1474]: 13 Feb 19:30:58 ntpd[1474]: Listen normally on 13 lxcc039fe5868c5 [fe80::8461:f3ff:fe82:16bc%10]:123
Feb 13 19:30:58.512315 ntpd[1474]: 13 Feb 19:30:58 ntpd[1474]: Listen normally on 14 lxc1b52934a15fe [fe80::94a9:c2ff:feae:fd04%12]:123
Feb 13 19:30:58.511163 ntpd[1474]: Listen normally on 9 cilium_net [fe80::58be:50ff:feaa:5298%4]:123
Feb 13 19:30:58.511240 ntpd[1474]: Listen normally on 10 cilium_host [fe80::e004:4dff:fe3b:6a57%5]:123
Feb 13 19:30:58.511304 ntpd[1474]: Listen normally on 11 cilium_vxlan [fe80::9048:1dff:fe51:c0fc%6]:123
Feb 13 19:30:58.511358 ntpd[1474]: Listen normally on 12 lxc_health [fe80::1c85:e2ff:fea6:dd36%8]:123
Feb 13 19:30:58.511415 ntpd[1474]: Listen normally on 13 lxcc039fe5868c5 [fe80::8461:f3ff:fe82:16bc%10]:123
Feb 13 19:30:58.511469 ntpd[1474]: Listen normally on 14 lxc1b52934a15fe [fe80::94a9:c2ff:feae:fd04%12]:123
Feb 13 19:30:59.970617 containerd[1511]: time="2025-02-13T19:30:59.970470592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:30:59.971738 containerd[1511]: time="2025-02-13T19:30:59.971265448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:30:59.971738 containerd[1511]: time="2025-02-13T19:30:59.971406950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:30:59.971738 containerd[1511]: time="2025-02-13T19:30:59.971653956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:31:00.011251 containerd[1511]: time="2025-02-13T19:31:00.010236977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:31:00.011251 containerd[1511]: time="2025-02-13T19:31:00.010313527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:31:00.011251 containerd[1511]: time="2025-02-13T19:31:00.010363918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:31:00.011251 containerd[1511]: time="2025-02-13T19:31:00.010533325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:31:00.065186 systemd[1]: Started cri-containerd-cee69314e9cb6352ae9f76381b265c59985c76140f32610f6fc36e1cdc2b1763.scope - libcontainer container cee69314e9cb6352ae9f76381b265c59985c76140f32610f6fc36e1cdc2b1763.
Feb 13 19:31:00.070018 systemd[1]: Started cri-containerd-ef829d424b6c12dbf6e4736792afcce77eb8d1d9a39c06a75e7e2af4d9d509f1.scope - libcontainer container ef829d424b6c12dbf6e4736792afcce77eb8d1d9a39c06a75e7e2af4d9d509f1.
Feb 13 19:31:00.205625 containerd[1511]: time="2025-02-13T19:31:00.205460888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c9fm7,Uid:b33e672b-f082-48f8-b8a7-2fd111590c1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"cee69314e9cb6352ae9f76381b265c59985c76140f32610f6fc36e1cdc2b1763\""
Feb 13 19:31:00.218121 containerd[1511]: time="2025-02-13T19:31:00.217767212Z" level=info msg="CreateContainer within sandbox \"cee69314e9cb6352ae9f76381b265c59985c76140f32610f6fc36e1cdc2b1763\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:31:00.229192 containerd[1511]: time="2025-02-13T19:31:00.228944352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xjktt,Uid:dd9ef12e-9153-40a6-ab6a-e9f610e096a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef829d424b6c12dbf6e4736792afcce77eb8d1d9a39c06a75e7e2af4d9d509f1\""
Feb 13 19:31:00.237669 containerd[1511]: time="2025-02-13T19:31:00.237447645Z" level=info msg="CreateContainer within sandbox \"ef829d424b6c12dbf6e4736792afcce77eb8d1d9a39c06a75e7e2af4d9d509f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 13 19:31:00.265863 containerd[1511]: time="2025-02-13T19:31:00.265778013Z" level=info msg="CreateContainer within sandbox \"cee69314e9cb6352ae9f76381b265c59985c76140f32610f6fc36e1cdc2b1763\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"378911b63e64d50bb4df8ab9a5d75c2ec4fb9eeb4e0723a74ed855b622822368\""
Feb 13 19:31:00.268913 containerd[1511]: time="2025-02-13T19:31:00.267555015Z" level=info msg="StartContainer for \"378911b63e64d50bb4df8ab9a5d75c2ec4fb9eeb4e0723a74ed855b622822368\""
Feb 13 19:31:00.275224 containerd[1511]: time="2025-02-13T19:31:00.275157002Z" level=info msg="CreateContainer within sandbox \"ef829d424b6c12dbf6e4736792afcce77eb8d1d9a39c06a75e7e2af4d9d509f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"708aae9fef479c761af9e0ca34a66845944bca9b98ecd60a18353626ba070f0e\""
Feb 13 19:31:00.277794 containerd[1511]: time="2025-02-13T19:31:00.276226626Z" level=info msg="StartContainer for \"708aae9fef479c761af9e0ca34a66845944bca9b98ecd60a18353626ba070f0e\""
Feb 13 19:31:00.340487 systemd[1]: Started cri-containerd-708aae9fef479c761af9e0ca34a66845944bca9b98ecd60a18353626ba070f0e.scope - libcontainer container 708aae9fef479c761af9e0ca34a66845944bca9b98ecd60a18353626ba070f0e.
Feb 13 19:31:00.371633 systemd[1]: Started cri-containerd-378911b63e64d50bb4df8ab9a5d75c2ec4fb9eeb4e0723a74ed855b622822368.scope - libcontainer container 378911b63e64d50bb4df8ab9a5d75c2ec4fb9eeb4e0723a74ed855b622822368.
Feb 13 19:31:00.411997 containerd[1511]: time="2025-02-13T19:31:00.411931503Z" level=info msg="StartContainer for \"708aae9fef479c761af9e0ca34a66845944bca9b98ecd60a18353626ba070f0e\" returns successfully"
Feb 13 19:31:00.436577 containerd[1511]: time="2025-02-13T19:31:00.436522973Z" level=info msg="StartContainer for \"378911b63e64d50bb4df8ab9a5d75c2ec4fb9eeb4e0723a74ed855b622822368\" returns successfully"
Feb 13 19:31:01.387194 kubelet[2681]: I0213 19:31:01.385730 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c9fm7" podStartSLOduration=28.385706958 podStartE2EDuration="28.385706958s" podCreationTimestamp="2025-02-13 19:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:31:01.38307218 +0000 UTC m=+33.403564190" watchObservedRunningTime="2025-02-13 19:31:01.385706958 +0000 UTC m=+33.406198956"
Feb 13 19:31:01.387194 kubelet[2681]: I0213 19:31:01.385896 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xjktt" podStartSLOduration=28.385885895 podStartE2EDuration="28.385885895s" podCreationTimestamp="2025-02-13 19:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:31:01.363944634 +0000 UTC m=+33.384436633" watchObservedRunningTime="2025-02-13 19:31:01.385885895 +0000 UTC m=+33.406377894"
Feb 13 19:31:31.160293 systemd[1]: Started sshd@9-10.128.0.10:22-147.75.109.163:43494.service - OpenSSH per-connection server daemon (147.75.109.163:43494).
Feb 13 19:31:31.466389 sshd[4073]: Accepted publickey for core from 147.75.109.163 port 43494 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:31:31.470703 sshd-session[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:31.484313 systemd-logind[1487]: New session 10 of user core.
Feb 13 19:31:31.493166 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 19:31:31.771942 sshd[4075]: Connection closed by 147.75.109.163 port 43494
Feb 13 19:31:31.773243 sshd-session[4073]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:31.777764 systemd[1]: sshd@9-10.128.0.10:22-147.75.109.163:43494.service: Deactivated successfully.
Feb 13 19:31:31.781614 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 19:31:31.784009 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Feb 13 19:31:31.785623 systemd-logind[1487]: Removed session 10.
Feb 13 19:31:36.830345 systemd[1]: Started sshd@10-10.128.0.10:22-147.75.109.163:43496.service - OpenSSH per-connection server daemon (147.75.109.163:43496).
Feb 13 19:31:37.133132 sshd[4090]: Accepted publickey for core from 147.75.109.163 port 43496 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:31:37.135571 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:37.143013 systemd-logind[1487]: New session 11 of user core.
Feb 13 19:31:37.149232 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:31:37.433028 sshd[4092]: Connection closed by 147.75.109.163 port 43496
Feb 13 19:31:37.434185 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:37.438815 systemd[1]: sshd@10-10.128.0.10:22-147.75.109.163:43496.service: Deactivated successfully.
Feb 13 19:31:37.442138 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:31:37.445543 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:31:37.447366 systemd-logind[1487]: Removed session 11.
Feb 13 19:31:42.493353 systemd[1]: Started sshd@11-10.128.0.10:22-147.75.109.163:48732.service - OpenSSH per-connection server daemon (147.75.109.163:48732).
Feb 13 19:31:42.791056 sshd[4105]: Accepted publickey for core from 147.75.109.163 port 48732 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:31:42.793225 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:42.800019 systemd-logind[1487]: New session 12 of user core.
Feb 13 19:31:42.806111 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:31:43.088087 sshd[4107]: Connection closed by 147.75.109.163 port 48732
Feb 13 19:31:43.089602 sshd-session[4105]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:43.095957 systemd[1]: sshd@11-10.128.0.10:22-147.75.109.163:48732.service: Deactivated successfully.
Feb 13 19:31:43.099930 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:31:43.101182 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:31:43.102756 systemd-logind[1487]: Removed session 12.
Feb 13 19:31:48.152356 systemd[1]: Started sshd@12-10.128.0.10:22-147.75.109.163:48746.service - OpenSSH per-connection server daemon (147.75.109.163:48746).
Feb 13 19:31:48.462891 sshd[4120]: Accepted publickey for core from 147.75.109.163 port 48746 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:31:48.464646 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:48.474564 systemd-logind[1487]: New session 13 of user core.
Feb 13 19:31:48.480277 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:31:48.789604 sshd[4122]: Connection closed by 147.75.109.163 port 48746
Feb 13 19:31:48.790535 sshd-session[4120]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:48.795511 systemd[1]: sshd@12-10.128.0.10:22-147.75.109.163:48746.service: Deactivated successfully.
Feb 13 19:31:48.799261 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:31:48.801583 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:31:48.803721 systemd-logind[1487]: Removed session 13.
Feb 13 19:31:53.847877 systemd[1]: Started sshd@13-10.128.0.10:22-147.75.109.163:42570.service - OpenSSH per-connection server daemon (147.75.109.163:42570).
Feb 13 19:31:54.139462 sshd[4135]: Accepted publickey for core from 147.75.109.163 port 42570 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:31:54.141352 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:54.148880 systemd-logind[1487]: New session 14 of user core.
Feb 13 19:31:54.155129 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:31:54.437552 sshd[4137]: Connection closed by 147.75.109.163 port 42570
Feb 13 19:31:54.439180 sshd-session[4135]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:54.444972 systemd[1]: sshd@13-10.128.0.10:22-147.75.109.163:42570.service: Deactivated successfully.
Feb 13 19:31:54.448731 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:31:54.452094 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:31:54.453664 systemd-logind[1487]: Removed session 14.
Feb 13 19:31:54.498320 systemd[1]: Started sshd@14-10.128.0.10:22-147.75.109.163:42586.service - OpenSSH per-connection server daemon (147.75.109.163:42586).
Feb 13 19:31:54.801640 sshd[4151]: Accepted publickey for core from 147.75.109.163 port 42586 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:31:54.803576 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:54.810439 systemd-logind[1487]: New session 15 of user core.
Feb 13 19:31:54.823153 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:31:55.151404 sshd[4153]: Connection closed by 147.75.109.163 port 42586
Feb 13 19:31:55.152408 sshd-session[4151]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:55.157283 systemd[1]: sshd@14-10.128.0.10:22-147.75.109.163:42586.service: Deactivated successfully.
Feb 13 19:31:55.160430 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:31:55.163196 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:31:55.165194 systemd-logind[1487]: Removed session 15.
Feb 13 19:31:55.211326 systemd[1]: Started sshd@15-10.128.0.10:22-147.75.109.163:42598.service - OpenSSH per-connection server daemon (147.75.109.163:42598).
Feb 13 19:31:55.514062 sshd[4163]: Accepted publickey for core from 147.75.109.163 port 42598 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:31:55.516118 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:31:55.523716 systemd-logind[1487]: New session 16 of user core.
Feb 13 19:31:55.531168 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:31:55.816822 sshd[4165]: Connection closed by 147.75.109.163 port 42598
Feb 13 19:31:55.818170 sshd-session[4163]: pam_unix(sshd:session): session closed for user core
Feb 13 19:31:55.825106 systemd[1]: sshd@15-10.128.0.10:22-147.75.109.163:42598.service: Deactivated successfully.
Feb 13 19:31:55.828574 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:31:55.830332 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:31:55.832489 systemd-logind[1487]: Removed session 16.
Feb 13 19:32:00.878333 systemd[1]: Started sshd@16-10.128.0.10:22-147.75.109.163:34348.service - OpenSSH per-connection server daemon (147.75.109.163:34348).
Feb 13 19:32:01.181661 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 34348 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:01.183591 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:01.191280 systemd-logind[1487]: New session 17 of user core.
Feb 13 19:32:01.197198 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:32:01.490368 sshd[4179]: Connection closed by 147.75.109.163 port 34348
Feb 13 19:32:01.491228 sshd-session[4177]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:01.496010 systemd[1]: sshd@16-10.128.0.10:22-147.75.109.163:34348.service: Deactivated successfully.
Feb 13 19:32:01.500002 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:32:01.503110 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:32:01.505171 systemd-logind[1487]: Removed session 17.
Feb 13 19:32:06.553751 systemd[1]: Started sshd@17-10.128.0.10:22-147.75.109.163:34362.service - OpenSSH per-connection server daemon (147.75.109.163:34362).
Feb 13 19:32:06.857217 sshd[4195]: Accepted publickey for core from 147.75.109.163 port 34362 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:06.859139 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:06.865312 systemd-logind[1487]: New session 18 of user core.
Feb 13 19:32:06.875102 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:32:07.149943 sshd[4197]: Connection closed by 147.75.109.163 port 34362
Feb 13 19:32:07.150362 sshd-session[4195]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:07.155820 systemd[1]: sshd@17-10.128.0.10:22-147.75.109.163:34362.service: Deactivated successfully.
Feb 13 19:32:07.159057 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:32:07.161490 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:32:07.163311 systemd-logind[1487]: Removed session 18.
Feb 13 19:32:07.205285 systemd[1]: Started sshd@18-10.128.0.10:22-147.75.109.163:34370.service - OpenSSH per-connection server daemon (147.75.109.163:34370).
Feb 13 19:32:07.501761 sshd[4208]: Accepted publickey for core from 147.75.109.163 port 34370 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:07.503739 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:07.511351 systemd-logind[1487]: New session 19 of user core.
Feb 13 19:32:07.516095 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:32:07.861387 sshd[4210]: Connection closed by 147.75.109.163 port 34370
Feb 13 19:32:07.862336 sshd-session[4208]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:07.867357 systemd[1]: sshd@18-10.128.0.10:22-147.75.109.163:34370.service: Deactivated successfully.
Feb 13 19:32:07.870589 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:32:07.872881 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:32:07.874681 systemd-logind[1487]: Removed session 19.
Feb 13 19:32:07.918320 systemd[1]: Started sshd@19-10.128.0.10:22-147.75.109.163:34384.service - OpenSSH per-connection server daemon (147.75.109.163:34384).
Feb 13 19:32:08.217167 sshd[4220]: Accepted publickey for core from 147.75.109.163 port 34384 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:08.218809 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:08.225127 systemd-logind[1487]: New session 20 of user core.
Feb 13 19:32:08.232085 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:32:09.161005 sshd[4222]: Connection closed by 147.75.109.163 port 34384
Feb 13 19:32:09.161712 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:09.171435 systemd[1]: sshd@19-10.128.0.10:22-147.75.109.163:34384.service: Deactivated successfully.
Feb 13 19:32:09.177429 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:32:09.179547 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:32:09.184456 systemd-logind[1487]: Removed session 20.
Feb 13 19:32:09.227765 systemd[1]: Started sshd@20-10.128.0.10:22-147.75.109.163:34396.service - OpenSSH per-connection server daemon (147.75.109.163:34396).
Feb 13 19:32:09.543972 sshd[4235]: Accepted publickey for core from 147.75.109.163 port 34396 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:09.548481 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:09.559569 systemd-logind[1487]: New session 21 of user core.
Feb 13 19:32:09.565062 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:32:09.975291 sshd[4240]: Connection closed by 147.75.109.163 port 34396
Feb 13 19:32:09.976201 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:09.981672 systemd[1]: sshd@20-10.128.0.10:22-147.75.109.163:34396.service: Deactivated successfully.
Feb 13 19:32:09.985045 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:32:09.986560 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:32:09.988149 systemd-logind[1487]: Removed session 21.
Feb 13 19:32:10.038287 systemd[1]: Started sshd@21-10.128.0.10:22-147.75.109.163:58232.service - OpenSSH per-connection server daemon (147.75.109.163:58232).
Feb 13 19:32:10.326063 sshd[4250]: Accepted publickey for core from 147.75.109.163 port 58232 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:10.327980 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:10.334344 systemd-logind[1487]: New session 22 of user core.
Feb 13 19:32:10.345099 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:32:10.623734 sshd[4252]: Connection closed by 147.75.109.163 port 58232
Feb 13 19:32:10.624818 sshd-session[4250]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:10.630780 systemd[1]: sshd@21-10.128.0.10:22-147.75.109.163:58232.service: Deactivated successfully.
Feb 13 19:32:10.633665 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:32:10.635228 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:32:10.636943 systemd-logind[1487]: Removed session 22.
Feb 13 19:32:15.688380 systemd[1]: Started sshd@22-10.128.0.10:22-147.75.109.163:58240.service - OpenSSH per-connection server daemon (147.75.109.163:58240).
Feb 13 19:32:15.982500 sshd[4263]: Accepted publickey for core from 147.75.109.163 port 58240 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:15.984441 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:15.991220 systemd-logind[1487]: New session 23 of user core.
Feb 13 19:32:16.001195 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:32:16.269511 sshd[4267]: Connection closed by 147.75.109.163 port 58240
Feb 13 19:32:16.270747 sshd-session[4263]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:16.275407 systemd[1]: sshd@22-10.128.0.10:22-147.75.109.163:58240.service: Deactivated successfully.
Feb 13 19:32:16.279008 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:32:16.281659 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:32:16.283387 systemd-logind[1487]: Removed session 23.
Feb 13 19:32:21.334312 systemd[1]: Started sshd@23-10.128.0.10:22-147.75.109.163:46824.service - OpenSSH per-connection server daemon (147.75.109.163:46824).
Feb 13 19:32:21.643172 sshd[4278]: Accepted publickey for core from 147.75.109.163 port 46824 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:21.645318 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:21.651918 systemd-logind[1487]: New session 24 of user core.
Feb 13 19:32:21.663150 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:32:21.938679 sshd[4280]: Connection closed by 147.75.109.163 port 46824
Feb 13 19:32:21.939148 sshd-session[4278]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:21.944504 systemd[1]: sshd@23-10.128.0.10:22-147.75.109.163:46824.service: Deactivated successfully.
Feb 13 19:32:21.947961 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:32:21.950901 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:32:21.952666 systemd-logind[1487]: Removed session 24.
Feb 13 19:32:21.960950 update_engine[1495]: I20250213 19:32:21.960878 1495 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Feb 13 19:32:21.960950 update_engine[1495]: I20250213 19:32:21.960947 1495 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Feb 13 19:32:21.961514 update_engine[1495]: I20250213 19:32:21.961205 1495 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Feb 13 19:32:21.961980 update_engine[1495]: I20250213 19:32:21.961934 1495 omaha_request_params.cc:62] Current group set to alpha
Feb 13 19:32:21.962574 update_engine[1495]: I20250213 19:32:21.962103 1495 update_attempter.cc:499] Already updated boot flags. Skipping.
Feb 13 19:32:21.962574 update_engine[1495]: I20250213 19:32:21.962127 1495 update_attempter.cc:643] Scheduling an action processor start.
Feb 13 19:32:21.962574 update_engine[1495]: I20250213 19:32:21.962153 1495 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 19:32:21.962574 update_engine[1495]: I20250213 19:32:21.962210 1495 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Feb 13 19:32:21.962574 update_engine[1495]: I20250213 19:32:21.962307 1495 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 19:32:21.962574 update_engine[1495]: I20250213 19:32:21.962322 1495 omaha_request_action.cc:272] Request:
Feb 13 19:32:21.962574 update_engine[1495]:
Feb 13 19:32:21.962574 update_engine[1495]:
Feb 13 19:32:21.962574 update_engine[1495]:
Feb 13 19:32:21.962574 update_engine[1495]:
Feb 13 19:32:21.962574 update_engine[1495]:
Feb 13 19:32:21.962574 update_engine[1495]:
Feb 13 19:32:21.962574 update_engine[1495]:
Feb 13 19:32:21.962574 update_engine[1495]:
Feb 13 19:32:21.962574 update_engine[1495]: I20250213 19:32:21.962335 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:32:21.963292 locksmithd[1538]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Feb 13 19:32:21.964466 update_engine[1495]: I20250213 19:32:21.964398 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:32:21.964892 update_engine[1495]: I20250213 19:32:21.964823 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:32:22.018564 update_engine[1495]: E20250213 19:32:22.018473 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:32:22.018756 update_engine[1495]: I20250213 19:32:22.018640 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Feb 13 19:32:26.994351 systemd[1]: Started sshd@24-10.128.0.10:22-147.75.109.163:46828.service - OpenSSH per-connection server daemon (147.75.109.163:46828).
Feb 13 19:32:27.295934 sshd[4292]: Accepted publickey for core from 147.75.109.163 port 46828 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:27.297793 sshd-session[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:27.306296 systemd-logind[1487]: New session 25 of user core.
Feb 13 19:32:27.310184 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:32:27.588484 sshd[4294]: Connection closed by 147.75.109.163 port 46828
Feb 13 19:32:27.589466 sshd-session[4292]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:27.595401 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:32:27.599394 systemd[1]: sshd@24-10.128.0.10:22-147.75.109.163:46828.service: Deactivated successfully.
Feb 13 19:32:27.603719 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:32:27.605953 systemd-logind[1487]: Removed session 25.
Feb 13 19:32:31.965139 update_engine[1495]: I20250213 19:32:31.965026 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:32:31.965801 update_engine[1495]: I20250213 19:32:31.965425 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:32:31.965919 update_engine[1495]: I20250213 19:32:31.965793 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:32:31.976415 update_engine[1495]: E20250213 19:32:31.976320 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:32:31.976599 update_engine[1495]: I20250213 19:32:31.976449 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Feb 13 19:32:32.652438 systemd[1]: Started sshd@25-10.128.0.10:22-147.75.109.163:47788.service - OpenSSH per-connection server daemon (147.75.109.163:47788).
Feb 13 19:32:32.951000 sshd[4309]: Accepted publickey for core from 147.75.109.163 port 47788 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:32.953381 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:32.965211 systemd-logind[1487]: New session 26 of user core.
Feb 13 19:32:32.973149 systemd[1]: Started session-26.scope - Session 26 of User core.
Feb 13 19:32:33.257412 sshd[4314]: Connection closed by 147.75.109.163 port 47788
Feb 13 19:32:33.259128 sshd-session[4309]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:33.264944 systemd[1]: sshd@25-10.128.0.10:22-147.75.109.163:47788.service: Deactivated successfully.
Feb 13 19:32:33.269131 systemd[1]: session-26.scope: Deactivated successfully.
Feb 13 19:32:33.271781 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit.
Feb 13 19:32:33.273767 systemd-logind[1487]: Removed session 26.
Feb 13 19:32:33.316300 systemd[1]: Started sshd@26-10.128.0.10:22-147.75.109.163:47796.service - OpenSSH per-connection server daemon (147.75.109.163:47796).
Feb 13 19:32:33.617668 sshd[4326]: Accepted publickey for core from 147.75.109.163 port 47796 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:33.619912 sshd-session[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:33.627234 systemd-logind[1487]: New session 27 of user core.
Feb 13 19:32:33.637137 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 19:32:35.822035 containerd[1511]: time="2025-02-13T19:32:35.821965904Z" level=info msg="StopContainer for \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\" with timeout 30 (s)"
Feb 13 19:32:35.826241 containerd[1511]: time="2025-02-13T19:32:35.825289721Z" level=info msg="Stop container \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\" with signal terminated"
Feb 13 19:32:35.856624 containerd[1511]: time="2025-02-13T19:32:35.856558870Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:32:35.858682 systemd[1]: cri-containerd-12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612.scope: Deactivated successfully.
Feb 13 19:32:35.871164 containerd[1511]: time="2025-02-13T19:32:35.871089376Z" level=info msg="StopContainer for \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\" with timeout 2 (s)"
Feb 13 19:32:35.872070 containerd[1511]: time="2025-02-13T19:32:35.871910291Z" level=info msg="Stop container \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\" with signal terminated"
Feb 13 19:32:35.891641 systemd-networkd[1402]: lxc_health: Link DOWN
Feb 13 19:32:35.891655 systemd-networkd[1402]: lxc_health: Lost carrier
Feb 13 19:32:35.917633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612-rootfs.mount: Deactivated successfully.
Feb 13 19:32:35.919377 systemd[1]: cri-containerd-a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5.scope: Deactivated successfully.
Feb 13 19:32:35.922089 systemd[1]: cri-containerd-a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5.scope: Consumed 9.895s CPU time, 125.6M memory peak, 136K read from disk, 13.3M written to disk.
Feb 13 19:32:35.940944 containerd[1511]: time="2025-02-13T19:32:35.940669214Z" level=info msg="shim disconnected" id=12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612 namespace=k8s.io
Feb 13 19:32:35.940944 containerd[1511]: time="2025-02-13T19:32:35.940802919Z" level=warning msg="cleaning up after shim disconnected" id=12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612 namespace=k8s.io
Feb 13 19:32:35.940944 containerd[1511]: time="2025-02-13T19:32:35.940823217Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:35.969407 containerd[1511]: time="2025-02-13T19:32:35.969228018Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:32:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:32:35.976068 containerd[1511]: time="2025-02-13T19:32:35.976010459Z" level=info msg="StopContainer for \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\" returns successfully"
Feb 13 19:32:35.980056 containerd[1511]: time="2025-02-13T19:32:35.980010986Z" level=info msg="StopPodSandbox for \"ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537\""
Feb 13 19:32:35.985326 containerd[1511]: time="2025-02-13T19:32:35.980073675Z" level=info msg="Container to stop \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:35.980675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5-rootfs.mount: Deactivated successfully.
Feb 13 19:32:35.988311 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537-shm.mount: Deactivated successfully.
Feb 13 19:32:35.995443 containerd[1511]: time="2025-02-13T19:32:35.995235881Z" level=info msg="shim disconnected" id=a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5 namespace=k8s.io
Feb 13 19:32:35.995443 containerd[1511]: time="2025-02-13T19:32:35.995324239Z" level=warning msg="cleaning up after shim disconnected" id=a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5 namespace=k8s.io
Feb 13 19:32:35.995443 containerd[1511]: time="2025-02-13T19:32:35.995339076Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:35.998039 systemd[1]: cri-containerd-ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537.scope: Deactivated successfully.
Feb 13 19:32:36.027733 containerd[1511]: time="2025-02-13T19:32:36.027335030Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:32:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:32:36.034046 containerd[1511]: time="2025-02-13T19:32:36.033974300Z" level=info msg="StopContainer for \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\" returns successfully"
Feb 13 19:32:36.036982 containerd[1511]: time="2025-02-13T19:32:36.036621450Z" level=info msg="StopPodSandbox for \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\""
Feb 13 19:32:36.036982 containerd[1511]: time="2025-02-13T19:32:36.036680467Z" level=info msg="Container to stop \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:36.036982 containerd[1511]: time="2025-02-13T19:32:36.036735434Z" level=info msg="Container to stop \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:36.036982 containerd[1511]: time="2025-02-13T19:32:36.036753613Z" level=info msg="Container to stop \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:36.036982 containerd[1511]: time="2025-02-13T19:32:36.036769307Z" level=info msg="Container to stop \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:36.036982 containerd[1511]: time="2025-02-13T19:32:36.036785673Z" level=info msg="Container to stop \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:32:36.041250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537-rootfs.mount: Deactivated successfully.
Feb 13 19:32:36.042974 containerd[1511]: time="2025-02-13T19:32:36.042275308Z" level=info msg="shim disconnected" id=ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537 namespace=k8s.io
Feb 13 19:32:36.042974 containerd[1511]: time="2025-02-13T19:32:36.042346974Z" level=warning msg="cleaning up after shim disconnected" id=ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537 namespace=k8s.io
Feb 13 19:32:36.042974 containerd[1511]: time="2025-02-13T19:32:36.042362370Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:36.054154 systemd[1]: cri-containerd-5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d.scope: Deactivated successfully.
Feb 13 19:32:36.073251 containerd[1511]: time="2025-02-13T19:32:36.073120556Z" level=info msg="TearDown network for sandbox \"ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537\" successfully"
Feb 13 19:32:36.075244 containerd[1511]: time="2025-02-13T19:32:36.073436595Z" level=info msg="StopPodSandbox for \"ecbdec2619b84d2e0b2563a165f44d251e92138410ee8e94bc7ef9a60ed9e537\" returns successfully"
Feb 13 19:32:36.099236 containerd[1511]: time="2025-02-13T19:32:36.099096159Z" level=info msg="shim disconnected" id=5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d namespace=k8s.io
Feb 13 19:32:36.099539 containerd[1511]: time="2025-02-13T19:32:36.099184528Z" level=warning msg="cleaning up after shim disconnected" id=5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d namespace=k8s.io
Feb 13 19:32:36.099539 containerd[1511]: time="2025-02-13T19:32:36.099325998Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:36.123996 containerd[1511]: time="2025-02-13T19:32:36.123774369Z" level=info msg="TearDown network for sandbox \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" successfully"
Feb 13 19:32:36.123996 containerd[1511]: time="2025-02-13T19:32:36.123965078Z" level=info msg="StopPodSandbox for \"5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d\" returns successfully"
Feb 13 19:32:36.183989 kubelet[2681]: I0213 19:32:36.183923 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6ms2\" (UniqueName: \"kubernetes.io/projected/b1eddee4-cfa9-4516-a034-aee85d14f917-kube-api-access-p6ms2\") pod \"b1eddee4-cfa9-4516-a034-aee85d14f917\" (UID: \"b1eddee4-cfa9-4516-a034-aee85d14f917\") "
Feb 13 19:32:36.186060 kubelet[2681]: I0213 19:32:36.184725 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1eddee4-cfa9-4516-a034-aee85d14f917-cilium-config-path\") pod \"b1eddee4-cfa9-4516-a034-aee85d14f917\" (UID: \"b1eddee4-cfa9-4516-a034-aee85d14f917\") "
Feb 13 19:32:36.188443 kubelet[2681]: I0213 19:32:36.188388 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1eddee4-cfa9-4516-a034-aee85d14f917-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b1eddee4-cfa9-4516-a034-aee85d14f917" (UID: "b1eddee4-cfa9-4516-a034-aee85d14f917"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 13 19:32:36.189324 kubelet[2681]: I0213 19:32:36.189265 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1eddee4-cfa9-4516-a034-aee85d14f917-kube-api-access-p6ms2" (OuterVolumeSpecName: "kube-api-access-p6ms2") pod "b1eddee4-cfa9-4516-a034-aee85d14f917" (UID: "b1eddee4-cfa9-4516-a034-aee85d14f917"). InnerVolumeSpecName "kube-api-access-p6ms2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:32:36.285943 kubelet[2681]: I0213 19:32:36.285862 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cilium-run\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286179 kubelet[2681]: I0213 19:32:36.286026 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6032467-493d-4afd-947c-afe055c81d94-cilium-config-path\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286179 kubelet[2681]: I0213 19:32:36.286062 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cilium-cgroup\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286179 kubelet[2681]: I0213 19:32:36.286089 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-host-proc-sys-kernel\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286179 kubelet[2681]: I0213 19:32:36.286117 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-host-proc-sys-net\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286179 kubelet[2681]: I0213 19:32:36.286141 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-bpf-maps\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286179 kubelet[2681]: I0213 19:32:36.286169 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6032467-493d-4afd-947c-afe055c81d94-clustermesh-secrets\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286530 kubelet[2681]: I0213 19:32:36.286240 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-xtables-lock\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286530 kubelet[2681]: I0213 19:32:36.286268 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-etc-cni-netd\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286530 kubelet[2681]: I0213 19:32:36.286312 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9ww5\" (UniqueName: \"kubernetes.io/projected/d6032467-493d-4afd-947c-afe055c81d94-kube-api-access-k9ww5\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286530 kubelet[2681]: I0213 19:32:36.286343 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6032467-493d-4afd-947c-afe055c81d94-hubble-tls\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286530 kubelet[2681]: I0213 19:32:36.286369 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-lib-modules\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286530 kubelet[2681]: I0213 19:32:36.286397 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cni-path\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286929 kubelet[2681]: I0213 19:32:36.286423 2681 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-hostproc\") pod \"d6032467-493d-4afd-947c-afe055c81d94\" (UID: \"d6032467-493d-4afd-947c-afe055c81d94\") "
Feb 13 19:32:36.286929 kubelet[2681]: I0213 19:32:36.286486 2681 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p6ms2\" (UniqueName: \"kubernetes.io/projected/b1eddee4-cfa9-4516-a034-aee85d14f917-kube-api-access-p6ms2\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.286929 kubelet[2681]: I0213 19:32:36.286506 2681 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1eddee4-cfa9-4516-a034-aee85d14f917-cilium-config-path\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.286929 kubelet[2681]: I0213 19:32:36.286575 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-hostproc" (OuterVolumeSpecName: "hostproc") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.286929 kubelet[2681]: I0213 19:32:36.286637 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.288727 kubelet[2681]: I0213 19:32:36.287267 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.288727 kubelet[2681]: I0213 19:32:36.287320 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.288727 kubelet[2681]: I0213 19:32:36.287366 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.288727 kubelet[2681]: I0213 19:32:36.287397 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.288727 kubelet[2681]: I0213 19:32:36.287423 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.290864 kubelet[2681]: I0213 19:32:36.290110 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6032467-493d-4afd-947c-afe055c81d94-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 13 19:32:36.290864 kubelet[2681]: I0213 19:32:36.290255 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.290864 kubelet[2681]: I0213 19:32:36.290323 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cni-path" (OuterVolumeSpecName: "cni-path") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.290864 kubelet[2681]: I0213 19:32:36.290356 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:32:36.295135 kubelet[2681]: I0213 19:32:36.295087 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6032467-493d-4afd-947c-afe055c81d94-kube-api-access-k9ww5" (OuterVolumeSpecName: "kube-api-access-k9ww5") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "kube-api-access-k9ww5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:32:36.296599 kubelet[2681]: I0213 19:32:36.296511 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6032467-493d-4afd-947c-afe055c81d94-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 13 19:32:36.297418 kubelet[2681]: I0213 19:32:36.297379 2681 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6032467-493d-4afd-947c-afe055c81d94-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d6032467-493d-4afd-947c-afe055c81d94" (UID: "d6032467-493d-4afd-947c-afe055c81d94"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:32:36.387062 kubelet[2681]: I0213 19:32:36.386978 2681 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-hostproc\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387062 kubelet[2681]: I0213 19:32:36.387032 2681 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cni-path\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387062 kubelet[2681]: I0213 19:32:36.387052 2681 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cilium-run\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387062 kubelet[2681]: I0213 19:32:36.387070 2681 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d6032467-493d-4afd-947c-afe055c81d94-cilium-config-path\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387476 kubelet[2681]: I0213 19:32:36.387089 2681 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-cilium-cgroup\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387476 kubelet[2681]: I0213 19:32:36.387103 2681 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-host-proc-sys-kernel\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387476 kubelet[2681]: I0213 19:32:36.387119 2681 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-bpf-maps\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387476 kubelet[2681]: I0213 19:32:36.387135 2681 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-host-proc-sys-net\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387476 kubelet[2681]: I0213 19:32:36.387149 2681 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-xtables-lock\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387476 kubelet[2681]: I0213 19:32:36.387163 2681 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-etc-cni-netd\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387476 kubelet[2681]: I0213 19:32:36.387178 2681 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k9ww5\" (UniqueName: \"kubernetes.io/projected/d6032467-493d-4afd-947c-afe055c81d94-kube-api-access-k9ww5\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387717 kubelet[2681]: I0213 19:32:36.387194 2681 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d6032467-493d-4afd-947c-afe055c81d94-clustermesh-secrets\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387717 kubelet[2681]: I0213 19:32:36.387209 2681 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6032467-493d-4afd-947c-afe055c81d94-lib-modules\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.387717 kubelet[2681]: I0213 19:32:36.387224 2681 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d6032467-493d-4afd-947c-afe055c81d94-hubble-tls\") on node \"ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal\" DevicePath \"\""
Feb 13 19:32:36.601726 kubelet[2681]: I0213 19:32:36.601365 2681 scope.go:117] "RemoveContainer" containerID="a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5"
Feb 13 19:32:36.604981 containerd[1511]: time="2025-02-13T19:32:36.604927587Z" level=info msg="RemoveContainer for \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\""
Feb 13 19:32:36.616945 systemd[1]: Removed slice kubepods-burstable-podd6032467_493d_4afd_947c_afe055c81d94.slice - libcontainer container kubepods-burstable-podd6032467_493d_4afd_947c_afe055c81d94.slice.
Feb 13 19:32:36.617241 systemd[1]: kubepods-burstable-podd6032467_493d_4afd_947c_afe055c81d94.slice: Consumed 10.029s CPU time, 126.1M memory peak, 136K read from disk, 13.3M written to disk.
Feb 13 19:32:36.619450 kubelet[2681]: I0213 19:32:36.617445 2681 scope.go:117] "RemoveContainer" containerID="e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12"
Feb 13 19:32:36.619543 containerd[1511]: time="2025-02-13T19:32:36.617011896Z" level=info msg="RemoveContainer for \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\" returns successfully"
Feb 13 19:32:36.619543 containerd[1511]: time="2025-02-13T19:32:36.619435500Z" level=info msg="RemoveContainer for \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\""
Feb 13 19:32:36.626043 containerd[1511]: time="2025-02-13T19:32:36.625377393Z" level=info msg="RemoveContainer for \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\" returns successfully"
Feb 13 19:32:36.626870 kubelet[2681]: I0213 19:32:36.626325 2681 scope.go:117] "RemoveContainer" containerID="ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e"
Feb 13 19:32:36.626924 systemd[1]: Removed slice kubepods-besteffort-podb1eddee4_cfa9_4516_a034_aee85d14f917.slice - libcontainer container kubepods-besteffort-podb1eddee4_cfa9_4516_a034_aee85d14f917.slice.
Feb 13 19:32:36.632876 containerd[1511]: time="2025-02-13T19:32:36.632134455Z" level=info msg="RemoveContainer for \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\""
Feb 13 19:32:36.637969 containerd[1511]: time="2025-02-13T19:32:36.637779845Z" level=info msg="RemoveContainer for \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\" returns successfully"
Feb 13 19:32:36.640304 kubelet[2681]: I0213 19:32:36.638693 2681 scope.go:117] "RemoveContainer" containerID="d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357"
Feb 13 19:32:36.642545 containerd[1511]: time="2025-02-13T19:32:36.642505658Z" level=info msg="RemoveContainer for \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\""
Feb 13 19:32:36.650748 containerd[1511]: time="2025-02-13T19:32:36.648873562Z" level=info msg="RemoveContainer for \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\" returns successfully"
Feb 13 19:32:36.650995 kubelet[2681]: I0213 19:32:36.649326 2681 scope.go:117] "RemoveContainer" containerID="cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa"
Feb 13 19:32:36.652690 containerd[1511]: time="2025-02-13T19:32:36.652652987Z" level=info msg="RemoveContainer for \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\""
Feb 13 19:32:36.657972 containerd[1511]: time="2025-02-13T19:32:36.657909968Z" level=info msg="RemoveContainer for \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\" returns successfully"
Feb 13 19:32:36.658449 kubelet[2681]: I0213 19:32:36.658417 2681 scope.go:117] "RemoveContainer" containerID="a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5"
Feb 13 19:32:36.658958 containerd[1511]: time="2025-02-13T19:32:36.658852185Z" level=error msg="ContainerStatus for \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\": not found"
Feb 13 19:32:36.660417 kubelet[2681]: E0213 19:32:36.659206 2681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\": not found" containerID="a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5"
Feb 13 19:32:36.660417 kubelet[2681]: I0213 19:32:36.660124 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5"} err="failed to get container status \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a69100cc6bb4dcc864bd49844869534df6bac727787dd636397b077ebec3aaa5\": not found"
Feb 13 19:32:36.660417 kubelet[2681]: I0213 19:32:36.660261 2681 scope.go:117] "RemoveContainer" containerID="e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12"
Feb 13 19:32:36.661513 containerd[1511]: time="2025-02-13T19:32:36.660807232Z" level=error msg="ContainerStatus for \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\": not found"
Feb 13 19:32:36.661765 kubelet[2681]: E0213 19:32:36.661735 2681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\": not found" containerID="e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12"
Feb 13 19:32:36.662335 kubelet[2681]: I0213 19:32:36.661936 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12"} err="failed to get container status \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4004b9b89339b865f71f43ab1ab1ce2c6c4fbbd2f2507c151a346e6fd035a12\": not found"
Feb 13 19:32:36.662335 kubelet[2681]: I0213 19:32:36.661979 2681 scope.go:117] "RemoveContainer" containerID="ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e"
Feb 13 19:32:36.662494 containerd[1511]: time="2025-02-13T19:32:36.662235152Z" level=error msg="ContainerStatus for \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\": not found"
Feb 13 19:32:36.662773 kubelet[2681]: E0213 19:32:36.662643 2681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\": not found" containerID="ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e"
Feb 13 19:32:36.662773 kubelet[2681]: I0213 19:32:36.662683 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e"} err="failed to get container status \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef93a4c87e234a663b415fa596a66b98610253fc858b989251bc48036cc1206e\": not found"
Feb 13 19:32:36.662773 kubelet[2681]: I0213 19:32:36.662708 2681 scope.go:117] "RemoveContainer" containerID="d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357"
Feb 13 19:32:36.663054 containerd[1511]: time="2025-02-13T19:32:36.662974371Z" level=error msg="ContainerStatus for \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\": not found"
Feb 13 19:32:36.663306 kubelet[2681]: E0213 19:32:36.663267 2681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\": not found" containerID="d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357"
Feb 13 19:32:36.663463 kubelet[2681]: I0213 19:32:36.663313 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357"} err="failed to get container status \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5634608cd10b53dd58556487b3d19450cce3f9a3a393209befde16b7fb4a357\": not found"
Feb 13 19:32:36.663463 kubelet[2681]: I0213 19:32:36.663343 2681 scope.go:117] "RemoveContainer" containerID="cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa"
Feb 13 19:32:36.663697 containerd[1511]: time="2025-02-13T19:32:36.663573601Z" level=error msg="ContainerStatus for \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\": not found"
Feb 13 19:32:36.663764 kubelet[2681]: E0213 19:32:36.663737 2681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\": not found" containerID="cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa"
Feb 13 19:32:36.663821 kubelet[2681]: I0213 19:32:36.663766 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa"} err="failed to get container status \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdd87efee172af91b89c7197cdf82f7a6cfcc10e5f86167c0740f8a6ac91eafa\": not found"
Feb 13 19:32:36.663821 kubelet[2681]: I0213 19:32:36.663790 2681 scope.go:117] "RemoveContainer" containerID="12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612"
Feb 13 19:32:36.666744 containerd[1511]: time="2025-02-13T19:32:36.666316748Z" level=info msg="RemoveContainer for \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\""
Feb 13 19:32:36.671240 containerd[1511]: time="2025-02-13T19:32:36.671196073Z" level=info msg="RemoveContainer for \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\" returns successfully"
Feb 13 19:32:36.671667 kubelet[2681]: I0213 19:32:36.671527 2681 scope.go:117] "RemoveContainer" containerID="12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612"
Feb 13 19:32:36.671917 containerd[1511]: time="2025-02-13T19:32:36.671808240Z" level=error msg="ContainerStatus for \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\": not found"
Feb 13 19:32:36.672242 kubelet[2681]: E0213 19:32:36.672195 2681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\": not found" containerID="12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612"
Feb 13 19:32:36.672342 kubelet[2681]: I0213 19:32:36.672235 2681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612"} err="failed to get container status \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\": rpc error: code = NotFound desc = an error occurred when try to find container \"12da73847e9b33a8ed9ff956564f0399c85694a9f8c29d8582466455490a7612\": not found"
Feb 13 19:32:36.839650 systemd[1]: var-lib-kubelet-pods-b1eddee4\x2dcfa9\x2d4516\x2da034\x2daee85d14f917-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp6ms2.mount: Deactivated successfully.
Feb 13 19:32:36.839849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d-rootfs.mount: Deactivated successfully.
Feb 13 19:32:36.839974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5951b3a0dcfed2a8c0442397bf3e3a30251fa2b732b761b2cbe2eb56f7065d0d-shm.mount: Deactivated successfully.
Feb 13 19:32:36.840089 systemd[1]: var-lib-kubelet-pods-d6032467\x2d493d\x2d4afd\x2d947c\x2dafe055c81d94-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk9ww5.mount: Deactivated successfully.
Feb 13 19:32:36.840214 systemd[1]: var-lib-kubelet-pods-d6032467\x2d493d\x2d4afd\x2d947c\x2dafe055c81d94-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 19:32:36.840337 systemd[1]: var-lib-kubelet-pods-d6032467\x2d493d\x2d4afd\x2d947c\x2dafe055c81d94-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 19:32:37.793650 sshd[4328]: Connection closed by 147.75.109.163 port 47796
Feb 13 19:32:37.795196 sshd-session[4326]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:37.803096 systemd[1]: sshd@26-10.128.0.10:22-147.75.109.163:47796.service: Deactivated successfully.
Feb 13 19:32:37.807720 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 19:32:37.809737 systemd[1]: session-27.scope: Consumed 1.421s CPU time, 23.9M memory peak.
Feb 13 19:32:37.812736 systemd-logind[1487]: Session 27 logged out. Waiting for processes to exit.
Feb 13 19:32:37.816717 systemd-logind[1487]: Removed session 27.
Feb 13 19:32:37.852659 systemd[1]: Started sshd@27-10.128.0.10:22-147.75.109.163:47800.service - OpenSSH per-connection server daemon (147.75.109.163:47800).
Feb 13 19:32:38.129749 kubelet[2681]: I0213 19:32:38.129697 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1eddee4-cfa9-4516-a034-aee85d14f917" path="/var/lib/kubelet/pods/b1eddee4-cfa9-4516-a034-aee85d14f917/volumes"
Feb 13 19:32:38.130550 kubelet[2681]: I0213 19:32:38.130516 2681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6032467-493d-4afd-947c-afe055c81d94" path="/var/lib/kubelet/pods/d6032467-493d-4afd-947c-afe055c81d94/volumes"
Feb 13 19:32:38.146439 sshd[4489]: Accepted publickey for core from 147.75.109.163 port 47800 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:38.148414 sshd-session[4489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:38.154827 systemd-logind[1487]: New session 28 of user core.
Feb 13 19:32:38.160126 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 19:32:38.363861 kubelet[2681]: E0213 19:32:38.363675 2681 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:32:38.510811 ntpd[1474]: Deleting interface #12 lxc_health, fe80::1c85:e2ff:fea6:dd36%8#123, interface stats: received=0, sent=0, dropped=0, active_time=100 secs
Feb 13 19:32:38.511532 ntpd[1474]: 13 Feb 19:32:38 ntpd[1474]: Deleting interface #12 lxc_health, fe80::1c85:e2ff:fea6:dd36%8#123, interface stats: received=0, sent=0, dropped=0, active_time=100 secs
Feb 13 19:32:39.233182 kubelet[2681]: I0213 19:32:39.233126 2681 memory_manager.go:355] "RemoveStaleState removing state" podUID="b1eddee4-cfa9-4516-a034-aee85d14f917" containerName="cilium-operator"
Feb 13 19:32:39.233182 kubelet[2681]: I0213 19:32:39.233174 2681 memory_manager.go:355] "RemoveStaleState removing state" podUID="d6032467-493d-4afd-947c-afe055c81d94" containerName="cilium-agent"
Feb 13 19:32:39.247865 sshd[4491]: Connection closed by 147.75.109.163 port 47800
Feb 13 19:32:39.251708 systemd[1]: Created slice kubepods-burstable-podc484ac76_a70f_4c1b_9381_dcc2bd1a06f0.slice - libcontainer container kubepods-burstable-podc484ac76_a70f_4c1b_9381_dcc2bd1a06f0.slice.
Feb 13 19:32:39.252992 sshd-session[4489]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:39.265344 systemd[1]: sshd@27-10.128.0.10:22-147.75.109.163:47800.service: Deactivated successfully.
Feb 13 19:32:39.271766 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 19:32:39.277487 systemd-logind[1487]: Session 28 logged out. Waiting for processes to exit.
Feb 13 19:32:39.280406 systemd-logind[1487]: Removed session 28.
Feb 13 19:32:39.326865 kubelet[2681]: I0213 19:32:39.325882 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-xtables-lock\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.326865 kubelet[2681]: I0213 19:32:39.325991 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-cilium-config-path\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.326865 kubelet[2681]: I0213 19:32:39.326034 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-clustermesh-secrets\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.326865 kubelet[2681]: I0213 19:32:39.326061 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-host-proc-sys-net\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.326865 kubelet[2681]: I0213 19:32:39.326093 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-bpf-maps\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.326865 kubelet[2681]: I0213 19:32:39.326142 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-cilium-cgroup\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327286 kubelet[2681]: I0213 19:32:39.326180 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-cilium-ipsec-secrets\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327286 kubelet[2681]: I0213 19:32:39.326220 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-host-proc-sys-kernel\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327286 kubelet[2681]: I0213 19:32:39.326246 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-cilium-run\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327286 kubelet[2681]: I0213 19:32:39.326272 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-cni-path\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327286 kubelet[2681]: I0213 19:32:39.326296 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-etc-cni-netd\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327286 kubelet[2681]: I0213 19:32:39.326319 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-lib-modules\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327593 kubelet[2681]: I0213 19:32:39.326346 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-hubble-tls\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327593 kubelet[2681]: I0213 19:32:39.326403 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj5z6\" (UniqueName: \"kubernetes.io/projected/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-kube-api-access-bj5z6\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.327593 kubelet[2681]: I0213 19:32:39.326449 2681 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c484ac76-a70f-4c1b-9381-dcc2bd1a06f0-hostproc\") pod \"cilium-8nljx\" (UID: \"c484ac76-a70f-4c1b-9381-dcc2bd1a06f0\") " pod="kube-system/cilium-8nljx"
Feb 13 19:32:39.341310 systemd[1]: Started sshd@28-10.128.0.10:22-147.75.109.163:47808.service - OpenSSH per-connection server daemon (147.75.109.163:47808).
Feb 13 19:32:39.562520 containerd[1511]: time="2025-02-13T19:32:39.562336817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nljx,Uid:c484ac76-a70f-4c1b-9381-dcc2bd1a06f0,Namespace:kube-system,Attempt:0,}"
Feb 13 19:32:39.619067 containerd[1511]: time="2025-02-13T19:32:39.618712517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:32:39.619067 containerd[1511]: time="2025-02-13T19:32:39.618870997Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:32:39.619603 containerd[1511]: time="2025-02-13T19:32:39.618902686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:32:39.619603 containerd[1511]: time="2025-02-13T19:32:39.619154608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:32:39.656129 systemd[1]: Started cri-containerd-5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9.scope - libcontainer container 5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9.
Feb 13 19:32:39.668092 sshd[4502]: Accepted publickey for core from 147.75.109.163 port 47808 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:39.671680 sshd-session[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:39.684252 systemd-logind[1487]: New session 29 of user core.
Feb 13 19:32:39.691162 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 19:32:39.699119 containerd[1511]: time="2025-02-13T19:32:39.698721435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nljx,Uid:c484ac76-a70f-4c1b-9381-dcc2bd1a06f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\""
Feb 13 19:32:39.708918 containerd[1511]: time="2025-02-13T19:32:39.708675625Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:32:39.730710 containerd[1511]: time="2025-02-13T19:32:39.730582645Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"74cd076a3ef6f963583672b767d9bcd93b4ade81b4ba30e92d13c7a69e3adc4e\""
Feb 13 19:32:39.731877 containerd[1511]: time="2025-02-13T19:32:39.731549809Z" level=info msg="StartContainer for \"74cd076a3ef6f963583672b767d9bcd93b4ade81b4ba30e92d13c7a69e3adc4e\""
Feb 13 19:32:39.773680 systemd[1]: Started cri-containerd-74cd076a3ef6f963583672b767d9bcd93b4ade81b4ba30e92d13c7a69e3adc4e.scope - libcontainer container 74cd076a3ef6f963583672b767d9bcd93b4ade81b4ba30e92d13c7a69e3adc4e.
Feb 13 19:32:39.812749 containerd[1511]: time="2025-02-13T19:32:39.812614561Z" level=info msg="StartContainer for \"74cd076a3ef6f963583672b767d9bcd93b4ade81b4ba30e92d13c7a69e3adc4e\" returns successfully"
Feb 13 19:32:39.823449 systemd[1]: cri-containerd-74cd076a3ef6f963583672b767d9bcd93b4ade81b4ba30e92d13c7a69e3adc4e.scope: Deactivated successfully.
Feb 13 19:32:39.865118 containerd[1511]: time="2025-02-13T19:32:39.864989246Z" level=info msg="shim disconnected" id=74cd076a3ef6f963583672b767d9bcd93b4ade81b4ba30e92d13c7a69e3adc4e namespace=k8s.io
Feb 13 19:32:39.865118 containerd[1511]: time="2025-02-13T19:32:39.865092483Z" level=warning msg="cleaning up after shim disconnected" id=74cd076a3ef6f963583672b767d9bcd93b4ade81b4ba30e92d13c7a69e3adc4e namespace=k8s.io
Feb 13 19:32:39.865118 containerd[1511]: time="2025-02-13T19:32:39.865121505Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:39.882155 sshd[4547]: Connection closed by 147.75.109.163 port 47808
Feb 13 19:32:39.885036 sshd-session[4502]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:39.893408 systemd[1]: sshd@28-10.128.0.10:22-147.75.109.163:47808.service: Deactivated successfully.
Feb 13 19:32:39.897490 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 19:32:39.899053 systemd-logind[1487]: Session 29 logged out. Waiting for processes to exit.
Feb 13 19:32:39.900627 systemd-logind[1487]: Removed session 29.
Feb 13 19:32:39.944311 systemd[1]: Started sshd@29-10.128.0.10:22-147.75.109.163:51820.service - OpenSSH per-connection server daemon (147.75.109.163:51820).
Feb 13 19:32:40.231712 sshd[4617]: Accepted publickey for core from 147.75.109.163 port 51820 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw
Feb 13 19:32:40.233547 sshd-session[4617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:32:40.240468 systemd-logind[1487]: New session 30 of user core.
Feb 13 19:32:40.246100 systemd[1]: Started session-30.scope - Session 30 of User core.
Feb 13 19:32:40.627514 containerd[1511]: time="2025-02-13T19:32:40.627374233Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:32:40.654242 containerd[1511]: time="2025-02-13T19:32:40.654173623Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f\""
Feb 13 19:32:40.658866 containerd[1511]: time="2025-02-13T19:32:40.655543158Z" level=info msg="StartContainer for \"b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f\""
Feb 13 19:32:40.709348 systemd[1]: run-containerd-runc-k8s.io-b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f-runc.KkRDOW.mount: Deactivated successfully.
Feb 13 19:32:40.718128 systemd[1]: Started cri-containerd-b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f.scope - libcontainer container b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f.
Feb 13 19:32:40.764282 containerd[1511]: time="2025-02-13T19:32:40.764223312Z" level=info msg="StartContainer for \"b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f\" returns successfully"
Feb 13 19:32:40.773970 systemd[1]: cri-containerd-b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f.scope: Deactivated successfully.
Feb 13 19:32:40.811473 containerd[1511]: time="2025-02-13T19:32:40.811396818Z" level=info msg="shim disconnected" id=b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f namespace=k8s.io
Feb 13 19:32:40.812390 containerd[1511]: time="2025-02-13T19:32:40.812055967Z" level=warning msg="cleaning up after shim disconnected" id=b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f namespace=k8s.io
Feb 13 19:32:40.812390 containerd[1511]: time="2025-02-13T19:32:40.812100360Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:41.233157 kubelet[2681]: I0213 19:32:41.232802 2681 setters.go:602] "Node became not ready" node="ci-4230-0-1-89ba84e2674631ec15cd.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:32:41Z","lastTransitionTime":"2025-02-13T19:32:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 19:32:41.440821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b0b9e7632f42d00c160ae6d1f480ea1cbe768ddd065dc85194e419e78e4cb96f-rootfs.mount: Deactivated successfully.
Feb 13 19:32:41.633308 containerd[1511]: time="2025-02-13T19:32:41.633241278Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:32:41.663996 containerd[1511]: time="2025-02-13T19:32:41.663091019Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510\""
Feb 13 19:32:41.666154 containerd[1511]: time="2025-02-13T19:32:41.664429513Z" level=info msg="StartContainer for \"034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510\""
Feb 13 19:32:41.721152 systemd[1]: Started cri-containerd-034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510.scope - libcontainer container 034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510.
Feb 13 19:32:41.774347 containerd[1511]: time="2025-02-13T19:32:41.774178802Z" level=info msg="StartContainer for \"034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510\" returns successfully"
Feb 13 19:32:41.777666 systemd[1]: cri-containerd-034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510.scope: Deactivated successfully.
Feb 13 19:32:41.817736 containerd[1511]: time="2025-02-13T19:32:41.817589109Z" level=info msg="shim disconnected" id=034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510 namespace=k8s.io
Feb 13 19:32:41.818551 containerd[1511]: time="2025-02-13T19:32:41.817810026Z" level=warning msg="cleaning up after shim disconnected" id=034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510 namespace=k8s.io
Feb 13 19:32:41.818551 containerd[1511]: time="2025-02-13T19:32:41.817849007Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:41.844874 containerd[1511]: time="2025-02-13T19:32:41.844759315Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:32:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:32:41.960390 update_engine[1495]: I20250213 19:32:41.960189 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:32:41.960969 update_engine[1495]: I20250213 19:32:41.960547 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:32:41.960969 update_engine[1495]: I20250213 19:32:41.960948 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:32:41.974617 update_engine[1495]: E20250213 19:32:41.974536 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:32:41.974804 update_engine[1495]: I20250213 19:32:41.974659 1495 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 13 19:32:42.440975 systemd[1]: run-containerd-runc-k8s.io-034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510-runc.kYkYQi.mount: Deactivated successfully.
Feb 13 19:32:42.441136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-034cef316ceca04f1230359689fb3612abbdde574bd68dd92f8eee392db29510-rootfs.mount: Deactivated successfully.
Feb 13 19:32:42.640219 containerd[1511]: time="2025-02-13T19:32:42.639818143Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:32:42.669506 containerd[1511]: time="2025-02-13T19:32:42.666232042Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42\""
Feb 13 19:32:42.672931 containerd[1511]: time="2025-02-13T19:32:42.669928866Z" level=info msg="StartContainer for \"d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42\""
Feb 13 19:32:42.671179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1310779731.mount: Deactivated successfully.
Feb 13 19:32:42.727281 systemd[1]: Started cri-containerd-d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42.scope - libcontainer container d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42.
Feb 13 19:32:42.766747 systemd[1]: cri-containerd-d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42.scope: Deactivated successfully.
Feb 13 19:32:42.772219 containerd[1511]: time="2025-02-13T19:32:42.771964294Z" level=info msg="StartContainer for \"d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42\" returns successfully"
Feb 13 19:32:42.810718 containerd[1511]: time="2025-02-13T19:32:42.810567671Z" level=info msg="shim disconnected" id=d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42 namespace=k8s.io
Feb 13 19:32:42.810718 containerd[1511]: time="2025-02-13T19:32:42.810641228Z" level=warning msg="cleaning up after shim disconnected" id=d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42 namespace=k8s.io
Feb 13 19:32:42.810718 containerd[1511]: time="2025-02-13T19:32:42.810656704Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:32:42.832933 containerd[1511]: time="2025-02-13T19:32:42.832738468Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:32:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 19:32:43.365574 kubelet[2681]: E0213 19:32:43.365524 2681 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:32:43.441074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5684f2e833647d57284547723cbb1adb0c6c7ad64c8cd35057b721838f8cb42-rootfs.mount: Deactivated successfully.
Feb 13 19:32:43.645550 containerd[1511]: time="2025-02-13T19:32:43.645363243Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:32:43.674165 containerd[1511]: time="2025-02-13T19:32:43.671981654Z" level=info msg="CreateContainer within sandbox \"5816ad92bd3b4549534d2abdc1d0f41c72385adf979ac11f0877c7269f827da9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d63acd5b2634b6f4f8094103e6874052f370ff79f1fdea325e5da6c0edcdc45\""
Feb 13 19:32:43.677704 containerd[1511]: time="2025-02-13T19:32:43.674542807Z" level=info msg="StartContainer for \"4d63acd5b2634b6f4f8094103e6874052f370ff79f1fdea325e5da6c0edcdc45\""
Feb 13 19:32:43.724246 systemd[1]: Started cri-containerd-4d63acd5b2634b6f4f8094103e6874052f370ff79f1fdea325e5da6c0edcdc45.scope - libcontainer container 4d63acd5b2634b6f4f8094103e6874052f370ff79f1fdea325e5da6c0edcdc45.
Feb 13 19:32:43.769498 containerd[1511]: time="2025-02-13T19:32:43.769163791Z" level=info msg="StartContainer for \"4d63acd5b2634b6f4f8094103e6874052f370ff79f1fdea325e5da6c0edcdc45\" returns successfully"
Feb 13 19:32:44.303876 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:32:44.668254 kubelet[2681]: I0213 19:32:44.667166 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8nljx" podStartSLOduration=5.667141412 podStartE2EDuration="5.667141412s" podCreationTimestamp="2025-02-13 19:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:32:44.666492559 +0000 UTC m=+136.686984557" watchObservedRunningTime="2025-02-13 19:32:44.667141412 +0000 UTC m=+136.687633409"
Feb 13 19:32:47.711406 systemd-networkd[1402]: lxc_health: Link UP
Feb 13 19:32:47.723514 systemd-networkd[1402]: lxc_health: Gained carrier
Feb 13 19:32:48.877677 systemd-networkd[1402]: lxc_health: Gained IPv6LL
Feb 13 19:32:48.996492 systemd[1]: run-containerd-runc-k8s.io-4d63acd5b2634b6f4f8094103e6874052f370ff79f1fdea325e5da6c0edcdc45-runc.GffccB.mount: Deactivated successfully.
Feb 13 19:32:51.512098 ntpd[1474]: Listen normally on 15 lxc_health [fe80::94ab:7bff:fe1c:8969%14]:123
Feb 13 19:32:51.513524 ntpd[1474]: 13 Feb 19:32:51 ntpd[1474]: Listen normally on 15 lxc_health [fe80::94ab:7bff:fe1c:8969%14]:123
Feb 13 19:32:51.968809 update_engine[1495]: I20250213 19:32:51.967957 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:32:51.968809 update_engine[1495]: I20250213 19:32:51.968351 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:32:51.968809 update_engine[1495]: I20250213 19:32:51.968726 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:32:52.042347 update_engine[1495]: E20250213 19:32:52.041013 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041146 1495 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041163 1495 omaha_request_action.cc:617] Omaha request response:
Feb 13 19:32:52.042347 update_engine[1495]: E20250213 19:32:52.041293 1495 omaha_request_action.cc:636] Omaha request network transfer failed.
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041328 1495 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041342 1495 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041352 1495 update_attempter.cc:306] Processing Done.
Feb 13 19:32:52.042347 update_engine[1495]: E20250213 19:32:52.041374 1495 update_attempter.cc:619] Update failed.
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041388 1495 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041399 1495 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041410 1495 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041527 1495 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041563 1495 omaha_request_action.cc:271] Posting an Omaha request to disabled
Feb 13 19:32:52.042347 update_engine[1495]: I20250213 19:32:52.041574 1495 omaha_request_action.cc:272] Request:
Feb 13 19:32:52.042347 update_engine[1495]:
Feb 13 19:32:52.042347 update_engine[1495]:
Feb 13 19:32:52.043267 update_engine[1495]:
Feb 13 19:32:52.043267 update_engine[1495]:
Feb 13 19:32:52.043267 update_engine[1495]:
Feb 13 19:32:52.043267 update_engine[1495]:
Feb 13 19:32:52.043267 update_engine[1495]: I20250213 19:32:52.041586 1495 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 13 19:32:52.043267 update_engine[1495]: I20250213 19:32:52.041903 1495 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 13 19:32:52.043267 update_engine[1495]: I20250213 19:32:52.042260 1495 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 13 19:32:52.044312 locksmithd[1538]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 13 19:32:52.049793 update_engine[1495]: E20250213 19:32:52.049278 1495 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 13 19:32:52.049793 update_engine[1495]: I20250213 19:32:52.049536 1495 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 13 19:32:52.049793 update_engine[1495]: I20250213 19:32:52.049559 1495 omaha_request_action.cc:617] Omaha request response:
Feb 13 19:32:52.049793 update_engine[1495]: I20250213 19:32:52.049574 1495 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:32:52.049793 update_engine[1495]: I20250213 19:32:52.049607 1495 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 13 19:32:52.049793 update_engine[1495]: I20250213 19:32:52.049620 1495 update_attempter.cc:306] Processing Done.
Feb 13 19:32:52.049793 update_engine[1495]: I20250213 19:32:52.049634 1495 update_attempter.cc:310] Error event sent.
Feb 13 19:32:52.049793 update_engine[1495]: I20250213 19:32:52.049652 1495 update_check_scheduler.cc:74] Next update check in 46m58s
Feb 13 19:32:52.050786 locksmithd[1538]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 13 19:32:53.488089 systemd[1]: run-containerd-runc-k8s.io-4d63acd5b2634b6f4f8094103e6874052f370ff79f1fdea325e5da6c0edcdc45-runc.j6iwZO.mount: Deactivated successfully.
Feb 13 19:32:53.624309 sshd[4619]: Connection closed by 147.75.109.163 port 51820
Feb 13 19:32:53.626130 sshd-session[4617]: pam_unix(sshd:session): session closed for user core
Feb 13 19:32:53.632586 systemd[1]: sshd@29-10.128.0.10:22-147.75.109.163:51820.service: Deactivated successfully.
Feb 13 19:32:53.637746 systemd[1]: session-30.scope: Deactivated successfully.
Feb 13 19:32:53.641360 systemd-logind[1487]: Session 30 logged out. Waiting for processes to exit.
Feb 13 19:32:53.644181 systemd-logind[1487]: Removed session 30.