Feb 13 15:40:08.102842 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025 Feb 13 15:40:08.102891 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:40:08.102909 kernel: BIOS-provided physical RAM map: Feb 13 15:40:08.102923 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Feb 13 15:40:08.102936 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Feb 13 15:40:08.102950 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Feb 13 15:40:08.102967 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Feb 13 15:40:08.102982 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Feb 13 15:40:08.103000 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd328fff] usable Feb 13 15:40:08.103013 kernel: BIOS-e820: [mem 0x00000000bd329000-0x00000000bd330fff] ACPI data Feb 13 15:40:08.103027 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable Feb 13 15:40:08.103041 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Feb 13 15:40:08.103057 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Feb 13 15:40:08.103071 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Feb 13 15:40:08.103094 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Feb 13 15:40:08.103109 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Feb 13 15:40:08.103124 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Feb 13 15:40:08.103138 kernel: NX (Execute Disable) protection: active Feb 13 15:40:08.103153 kernel: APIC: Static calls initialized Feb 13 15:40:08.103168 kernel: efi: EFI v2.7 by EDK II Feb 13 15:40:08.103185 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd329018 Feb 13 15:40:08.103201 kernel: random: crng init done Feb 13 15:40:08.103215 kernel: secureboot: Secure boot disabled Feb 13 15:40:08.103233 kernel: SMBIOS 2.4 present. Feb 13 15:40:08.103251 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024 Feb 13 15:40:08.103265 kernel: Hypervisor detected: KVM Feb 13 15:40:08.103278 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:40:08.103292 kernel: kvm-clock: using sched offset of 13173600864 cycles Feb 13 15:40:08.103307 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:40:08.103321 kernel: tsc: Detected 2299.998 MHz processor Feb 13 15:40:08.103336 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:40:08.103350 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:40:08.103380 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Feb 13 15:40:08.103396 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Feb 13 15:40:08.103415 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:40:08.103431 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Feb 13 15:40:08.103447 kernel: Using GB pages for direct mapping Feb 13 15:40:08.103463 kernel: ACPI: Early table checksum verification disabled Feb 13 15:40:08.103479 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Feb 13 15:40:08.103496 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Feb 13 15:40:08.103521 kernel: ACPI: FACP 
0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 13 15:40:08.103549 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 13 15:40:08.103566 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 13 15:40:08.103583 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Feb 13 15:40:08.103598 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 13 15:40:08.103616 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Feb 13 15:40:08.103633 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 13 15:40:08.103651 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 13 15:40:08.103672 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 13 15:40:08.103689 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 13 15:40:08.103705 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 13 15:40:08.103721 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 13 15:40:08.103738 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 13 15:40:08.103756 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 13 15:40:08.103772 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 13 15:40:08.103790 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 13 15:40:08.103806 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 13 15:40:08.103828 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 13 15:40:08.103845 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 15:40:08.103863 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 15:40:08.103880 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00000000-0x0009ffff] Feb 13 15:40:08.103898 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Feb 13 15:40:08.103915 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Feb 13 15:40:08.103933 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 13 15:40:08.103951 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 13 15:40:08.103968 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Feb 13 15:40:08.103989 kernel: Zone ranges: Feb 13 15:40:08.104005 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:40:08.104023 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 15:40:08.104040 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 13 15:40:08.104057 kernel: Movable zone start for each node Feb 13 15:40:08.104074 kernel: Early memory node ranges Feb 13 15:40:08.104092 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 13 15:40:08.104109 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 13 15:40:08.104127 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff] Feb 13 15:40:08.104149 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff] Feb 13 15:40:08.104165 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 13 15:40:08.104183 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 13 15:40:08.104200 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 13 15:40:08.104217 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:40:08.104242 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 13 15:40:08.104260 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 13 15:40:08.104277 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Feb 13 15:40:08.104293 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 15:40:08.104315 kernel: On 
node 0, zone Normal: 32 pages in unavailable ranges Feb 13 15:40:08.104333 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 15:40:08.104350 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:40:08.104402 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:40:08.104420 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:40:08.104436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:40:08.104454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:40:08.104471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:40:08.104488 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:40:08.104510 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 15:40:08.104526 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 15:40:08.104561 kernel: Booting paravirtualized kernel on KVM Feb 13 15:40:08.104578 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:40:08.104596 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 15:40:08.104614 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 15:40:08.104631 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 15:40:08.104648 kernel: pcpu-alloc: [0] 0 1 Feb 13 15:40:08.104665 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:40:08.104686 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:40:08.104705 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce 
verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65 Feb 13 15:40:08.104722 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:40:08.104740 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 15:40:08.104765 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:40:08.104783 kernel: Fallback order for Node 0: 0 Feb 13 15:40:08.104801 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Feb 13 15:40:08.104819 kernel: Policy zone: Normal Feb 13 15:40:08.104840 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:40:08.104856 kernel: software IO TLB: area num 2. Feb 13 15:40:08.104875 kernel: Memory: 7511328K/7860552K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 348968K reserved, 0K cma-reserved) Feb 13 15:40:08.104892 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 15:40:08.104909 kernel: Kernel/User page tables isolation: enabled Feb 13 15:40:08.104926 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 15:40:08.104943 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:40:08.104961 kernel: Dynamic Preempt: voluntary Feb 13 15:40:08.104997 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:40:08.105017 kernel: rcu: RCU event tracing is enabled. Feb 13 15:40:08.105036 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 15:40:08.105055 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:40:08.105077 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:40:08.105096 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:40:08.105115 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:40:08.105133 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 15:40:08.105153 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 15:40:08.105175 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:40:08.105194 kernel: Console: colour dummy device 80x25 Feb 13 15:40:08.105212 kernel: printk: console [ttyS0] enabled Feb 13 15:40:08.105237 kernel: ACPI: Core revision 20230628 Feb 13 15:40:08.105255 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:40:08.105274 kernel: x2apic enabled Feb 13 15:40:08.105292 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:40:08.105311 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 13 15:40:08.105331 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 15:40:08.105354 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Feb 13 15:40:08.107290 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 13 15:40:08.107317 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 13 15:40:08.107338 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:40:08.107357 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 15:40:08.107393 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 15:40:08.107411 kernel: Spectre V2 : Mitigation: IBRS Feb 13 15:40:08.107430 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:40:08.107448 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:40:08.107473 kernel: RETBleed: Mitigation: IBRS Feb 13 15:40:08.107492 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:40:08.107510 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Feb 13 
15:40:08.107528 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:40:08.107553 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 15:40:08.107572 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 15:40:08.107591 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:40:08.107609 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:40:08.107628 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:40:08.107650 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:40:08.107668 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 15:40:08.107687 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:40:08.107705 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:40:08.107724 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:40:08.107741 kernel: landlock: Up and running. Feb 13 15:40:08.107760 kernel: SELinux: Initializing. Feb 13 15:40:08.107779 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:40:08.107797 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 15:40:08.107819 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 13 15:40:08.107837 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:40:08.107856 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:40:08.107875 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 15:40:08.107894 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. 
Feb 13 15:40:08.107912 kernel: signal: max sigframe size: 1776 Feb 13 15:40:08.107930 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:40:08.107949 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:40:08.107970 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 15:40:08.107989 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:40:08.108007 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:40:08.108024 kernel: .... node #0, CPUs: #1 Feb 13 15:40:08.108043 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 15:40:08.108067 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 15:40:08.108083 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 15:40:08.108099 kernel: smpboot: Max logical packages: 1 Feb 13 15:40:08.108115 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 13 15:40:08.108137 kernel: devtmpfs: initialized Feb 13 15:40:08.108155 kernel: x86/mm: Memory block size: 128MB Feb 13 15:40:08.108173 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 13 15:40:08.108192 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:40:08.108210 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 15:40:08.108228 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:40:08.108247 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:40:08.108265 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:40:08.108283 kernel: audit: type=2000 audit(1739461206.658:1): state=initialized audit_enabled=0 res=1 Feb 13 15:40:08.108305 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:40:08.108321 kernel: 
thermal_sys: Registered thermal governor 'user_space' Feb 13 15:40:08.108339 kernel: cpuidle: using governor menu Feb 13 15:40:08.108357 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:40:08.108389 kernel: dca service started, version 1.12.1 Feb 13 15:40:08.108407 kernel: PCI: Using configuration type 1 for base access Feb 13 15:40:08.108426 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 15:40:08.108445 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:40:08.108463 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:40:08.108485 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:40:08.110421 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:40:08.110448 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:40:08.110467 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:40:08.110486 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:40:08.110505 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:40:08.110523 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 15:40:08.110550 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:40:08.110569 kernel: ACPI: Interpreter enabled Feb 13 15:40:08.110594 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:40:08.110613 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:40:08.110632 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:40:08.110651 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 15:40:08.110670 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 15:40:08.110690 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:40:08.110964 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:40:08.111166 kernel: acpi PNP0A03:00: _OSC: not 
requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 15:40:08.111357 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 15:40:08.112461 kernel: PCI host bridge to bus 0000:00 Feb 13 15:40:08.112686 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:40:08.112862 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:40:08.113030 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:40:08.113198 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 13 15:40:08.113389 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:40:08.113618 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 15:40:08.113833 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 13 15:40:08.114047 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 15:40:08.114246 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 15:40:08.115908 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 13 15:40:08.116122 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 13 15:40:08.116321 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 13 15:40:08.116566 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:40:08.116775 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 13 15:40:08.116974 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 13 15:40:08.117182 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:40:08.119427 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 13 15:40:08.119681 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 13 15:40:08.119709 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:40:08.119729 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 
10 Feb 13 15:40:08.119748 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:40:08.119767 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:40:08.119786 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 15:40:08.119806 kernel: iommu: Default domain type: Translated Feb 13 15:40:08.119826 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:40:08.119844 kernel: efivars: Registered efivars operations Feb 13 15:40:08.119869 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:40:08.119888 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:40:08.119907 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Feb 13 15:40:08.119927 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Feb 13 15:40:08.119945 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff] Feb 13 15:40:08.119964 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Feb 13 15:40:08.119982 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Feb 13 15:40:08.120001 kernel: vgaarb: loaded Feb 13 15:40:08.120021 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:40:08.120044 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:40:08.120064 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:40:08.120083 kernel: pnp: PnP ACPI init Feb 13 15:40:08.120101 kernel: pnp: PnP ACPI: found 7 devices Feb 13 15:40:08.120121 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:40:08.120141 kernel: NET: Registered PF_INET protocol family Feb 13 15:40:08.120160 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 15:40:08.120180 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 15:40:08.120200 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:40:08.120223 kernel: TCP established hash table entries: 
65536 (order: 7, 524288 bytes, linear) Feb 13 15:40:08.120242 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 15:40:08.120261 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 15:40:08.120280 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:40:08.120299 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 15:40:08.120319 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:40:08.120337 kernel: NET: Registered PF_XDP protocol family Feb 13 15:40:08.121624 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:40:08.121810 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:40:08.121973 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:40:08.122134 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Feb 13 15:40:08.122332 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 15:40:08.122357 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:40:08.123411 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 15:40:08.123434 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Feb 13 15:40:08.123454 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 15:40:08.123479 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 15:40:08.123499 kernel: clocksource: Switched to clocksource tsc Feb 13 15:40:08.123519 kernel: Initialise system trusted keyrings Feb 13 15:40:08.123545 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 15:40:08.123565 kernel: Key type asymmetric registered Feb 13 15:40:08.123583 kernel: Asymmetric key parser 'x509' registered Feb 13 15:40:08.123602 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:40:08.123622 kernel: 
io scheduler mq-deadline registered Feb 13 15:40:08.123642 kernel: io scheduler kyber registered Feb 13 15:40:08.123665 kernel: io scheduler bfq registered Feb 13 15:40:08.123684 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:40:08.123704 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 15:40:08.123922 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 13 15:40:08.123948 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 13 15:40:08.124138 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 13 15:40:08.124164 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 15:40:08.124349 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 13 15:40:08.124398 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:40:08.124421 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:40:08.124439 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 15:40:08.124457 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 13 15:40:08.124474 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 13 15:40:08.124692 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 13 15:40:08.124719 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:40:08.124737 kernel: i8042: Warning: Keylock active Feb 13 15:40:08.124754 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:40:08.124777 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:40:08.124960 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 13 15:40:08.125129 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 15:40:08.125303 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:40:07 UTC (1739461207) Feb 13 15:40:08.127565 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 15:40:08.127602 kernel: intel_pstate: CPU model not 
supported Feb 13 15:40:08.127620 kernel: pstore: Using crash dump compression: deflate Feb 13 15:40:08.127645 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:40:08.127661 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:40:08.127679 kernel: Segment Routing with IPv6 Feb 13 15:40:08.127696 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:40:08.127713 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:40:08.127738 kernel: Key type dns_resolver registered Feb 13 15:40:08.127757 kernel: IPI shorthand broadcast: enabled Feb 13 15:40:08.127775 kernel: sched_clock: Marking stable (882004900, 142855699)->(1044859532, -19998933) Feb 13 15:40:08.127794 kernel: registered taskstats version 1 Feb 13 15:40:08.127812 kernel: Loading compiled-in X.509 certificates Feb 13 15:40:08.127837 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21' Feb 13 15:40:08.127855 kernel: Key type .fscrypt registered Feb 13 15:40:08.127872 kernel: Key type fscrypt-provisioning registered Feb 13 15:40:08.127889 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:40:08.127907 kernel: ima: No architecture policies found Feb 13 15:40:08.127925 kernel: clk: Disabling unused clocks Feb 13 15:40:08.127943 kernel: Freeing unused kernel image (initmem) memory: 43476K Feb 13 15:40:08.127960 kernel: Write protecting the kernel read-only data: 38912k Feb 13 15:40:08.127983 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Feb 13 15:40:08.128001 kernel: Run /init as init process Feb 13 15:40:08.128021 kernel: with arguments: Feb 13 15:40:08.128039 kernel: /init Feb 13 15:40:08.128057 kernel: with environment: Feb 13 15:40:08.128075 kernel: HOME=/ Feb 13 15:40:08.128092 kernel: TERM=linux Feb 13 15:40:08.128110 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:40:08.128128 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Feb 13 
15:40:08.128153 systemd[1]: Successfully made /usr/ read-only. Feb 13 15:40:08.128176 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 15:40:08.128197 systemd[1]: Detected virtualization google. Feb 13 15:40:08.128215 systemd[1]: Detected architecture x86-64. Feb 13 15:40:08.128233 systemd[1]: Running in initrd. Feb 13 15:40:08.128249 systemd[1]: No hostname configured, using default hostname. Feb 13 15:40:08.128268 systemd[1]: Hostname set to . Feb 13 15:40:08.128291 systemd[1]: Initializing machine ID from random generator. Feb 13 15:40:08.128308 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:40:08.128327 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:40:08.128346 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:40:08.128384 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:40:08.128405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:40:08.129443 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:40:08.129493 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:40:08.129532 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:40:08.129557 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Feb 13 15:40:08.129577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:40:08.129595 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:40:08.129615 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:40:08.129644 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:40:08.129668 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:40:08.129688 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:40:08.129707 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:40:08.129726 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:40:08.129746 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:40:08.129765 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 15:40:08.129785 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:40:08.129809 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:40:08.129828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:40:08.129846 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:40:08.129865 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:40:08.129884 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:40:08.129902 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:40:08.129922 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:40:08.129942 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:40:08.129962 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:40:08.129987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 15:40:08.130007 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:40:08.130072 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 15:40:08.130118 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:40:08.130144 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:40:08.130167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:40:08.130188 systemd-journald[184]: Journal started
Feb 13 15:40:08.130252 systemd-journald[184]: Runtime Journal (/run/log/journal/697d5481a4df4af5822b388de92eb890) is 8M, max 148.6M, 140.6M free.
Feb 13 15:40:08.094601 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 15:40:08.134423 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:40:08.143651 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:40:08.157408 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:40:08.158021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:40:08.166605 kernel: Bridge firewalling registered
Feb 13 15:40:08.160180 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 15:40:08.162868 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:40:08.178707 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:40:08.192659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:40:08.203759 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:40:08.210619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:40:08.215038 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:40:08.226404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:40:08.232569 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:40:08.240392 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:40:08.249930 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:40:08.263601 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:40:08.293203 dracut-cmdline[218]: dracut-dracut-053
Feb 13 15:40:08.298529 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:40:08.297625 systemd-resolved[211]: Positive Trust Anchors:
Feb 13 15:40:08.297639 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:40:08.297718 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:40:08.303051 systemd-resolved[211]: Defaulting to hostname 'linux'.
Feb 13 15:40:08.306216 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:40:08.311613 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:40:08.399418 kernel: SCSI subsystem initialized
Feb 13 15:40:08.410420 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:40:08.422424 kernel: iscsi: registered transport (tcp)
Feb 13 15:40:08.445801 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:40:08.445903 kernel: QLogic iSCSI HBA Driver
Feb 13 15:40:08.497559 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:40:08.504607 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:40:08.549568 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:40:08.549657 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:40:08.549686 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:40:08.595434 kernel: raid6: avx2x4 gen() 18227 MB/s
Feb 13 15:40:08.612442 kernel: raid6: avx2x2 gen() 18205 MB/s
Feb 13 15:40:08.629811 kernel: raid6: avx2x1 gen() 13942 MB/s
Feb 13 15:40:08.629863 kernel: raid6: using algorithm avx2x4 gen() 18227 MB/s
Feb 13 15:40:08.647757 kernel: raid6: .... xor() 7542 MB/s, rmw enabled
Feb 13 15:40:08.647829 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 15:40:08.670408 kernel: xor: automatically using best checksumming function avx
Feb 13 15:40:08.836412 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:40:08.849949 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:40:08.861658 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:40:08.879432 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Feb 13 15:40:08.887663 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:40:08.921621 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:40:08.962817 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Feb 13 15:40:09.000078 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:40:09.024643 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:40:09.134437 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:40:09.151652 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:40:09.201660 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:40:09.225783 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:40:09.242563 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:40:09.254512 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:40:09.311302 kernel: scsi host0: Virtio SCSI HBA
Feb 13 15:40:09.347577 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Feb 13 15:40:09.349637 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:40:09.349673 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:40:09.290615 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:40:09.352611 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:40:09.392987 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:40:09.496523 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 15:40:09.496745 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 15:40:09.496901 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 15:40:09.497050 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 15:40:09.497211 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 15:40:09.497379 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:40:09.497406 kernel: GPT:17805311 != 25165823
Feb 13 15:40:09.497423 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:40:09.497438 kernel: GPT:17805311 != 25165823
Feb 13 15:40:09.497452 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:40:09.497466 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:40:09.497481 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 15:40:09.393210 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:40:09.404728 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:40:09.422097 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:40:09.422389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:40:09.589557 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (445)
Feb 13 15:40:09.589597 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (463)
Feb 13 15:40:09.482418 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:40:09.529819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:40:09.569223 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:40:09.569843 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:40:09.610185 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 15:40:09.670790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 15:40:09.671358 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:40:09.701956 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 15:40:09.710739 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 15:40:09.748637 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:40:09.762661 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:40:09.791901 disk-uuid[540]: Primary Header is updated.
Feb 13 15:40:09.791901 disk-uuid[540]: Secondary Entries is updated.
Feb 13 15:40:09.791901 disk-uuid[540]: Secondary Header is updated.
Feb 13 15:40:09.815611 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:40:09.799604 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:40:09.836403 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:40:09.874624 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:40:10.851394 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 15:40:10.851796 disk-uuid[541]: The operation has completed successfully.
Feb 13 15:40:10.936224 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:40:10.936389 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:40:10.985608 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:40:11.005857 sh[565]: Success
Feb 13 15:40:11.028481 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:40:11.116616 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:40:11.123492 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:40:11.150062 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:40:11.204593 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85
Feb 13 15:40:11.204685 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:40:11.204711 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:40:11.220858 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:40:11.220929 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:40:11.251422 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:40:11.257709 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:40:11.258684 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:40:11.264577 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:40:11.326514 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:40:11.326560 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:40:11.326576 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:40:11.293959 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:40:11.343381 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:40:11.343477 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:40:11.365407 kernel: BTRFS info (device sda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:40:11.384421 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:40:11.400697 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:40:11.508183 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:40:11.521662 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:40:11.613462 ignition[641]: Ignition 2.20.0
Feb 13 15:40:11.613952 ignition[641]: Stage: fetch-offline
Feb 13 15:40:11.617004 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:40:11.614052 ignition[641]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:40:11.622223 systemd-networkd[751]: lo: Link UP
Feb 13 15:40:11.614071 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:40:11.622229 systemd-networkd[751]: lo: Gained carrier
Feb 13 15:40:11.614255 ignition[641]: parsed url from cmdline: ""
Feb 13 15:40:11.624188 systemd-networkd[751]: Enumeration completed
Feb 13 15:40:11.614262 ignition[641]: no config URL provided
Feb 13 15:40:11.624791 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:40:11.614272 ignition[641]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:40:11.624799 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:40:11.614287 ignition[641]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:40:11.625976 systemd-networkd[751]: eth0: Link UP
Feb 13 15:40:11.614299 ignition[641]: failed to fetch config: resource requires networking
Feb 13 15:40:11.625983 systemd-networkd[751]: eth0: Gained carrier
Feb 13 15:40:11.614839 ignition[641]: Ignition finished successfully
Feb 13 15:40:11.625997 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:40:11.727019 ignition[761]: Ignition 2.20.0
Feb 13 15:40:11.626899 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:40:11.727028 ignition[761]: Stage: fetch
Feb 13 15:40:11.635489 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.120/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:40:11.727258 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:40:11.655927 systemd[1]: Reached target network.target - Network.
Feb 13 15:40:11.727271 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:40:11.678663 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:40:11.727417 ignition[761]: parsed url from cmdline: ""
Feb 13 15:40:11.739579 unknown[761]: fetched base config from "system"
Feb 13 15:40:11.727423 ignition[761]: no config URL provided
Feb 13 15:40:11.739592 unknown[761]: fetched base config from "system"
Feb 13 15:40:11.727430 ignition[761]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:40:11.739602 unknown[761]: fetched user config from "gcp"
Feb 13 15:40:11.727442 ignition[761]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:40:11.742137 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:40:11.727470 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 15:40:11.760615 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:40:11.731900 ignition[761]: GET result: OK
Feb 13 15:40:11.813149 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:40:11.732030 ignition[761]: parsing config with SHA512: 17269e238cf91bd336ff7bfac1e53ae71e113a0d0b487215aa6450363897d97f08d7f877506eed54ac30ee470b84d18fc9f3e24fba2cd04703ea9db1e246cfd7
Feb 13 15:40:11.847575 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:40:11.740188 ignition[761]: fetch: fetch complete
Feb 13 15:40:11.885960 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:40:11.740195 ignition[761]: fetch: fetch passed
Feb 13 15:40:11.893816 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:40:11.740250 ignition[761]: Ignition finished successfully
Feb 13 15:40:11.919647 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:40:11.810760 ignition[767]: Ignition 2.20.0
Feb 13 15:40:11.929737 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:40:11.810770 ignition[767]: Stage: kargs
Feb 13 15:40:11.957664 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:40:11.810970 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:40:11.964739 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:40:11.810987 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:40:11.986571 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:40:11.811924 ignition[767]: kargs: kargs passed
Feb 13 15:40:11.811978 ignition[767]: Ignition finished successfully
Feb 13 15:40:11.879303 ignition[773]: Ignition 2.20.0
Feb 13 15:40:11.879313 ignition[773]: Stage: disks
Feb 13 15:40:11.879550 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:40:11.879563 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:40:11.880551 ignition[773]: disks: disks passed
Feb 13 15:40:11.880608 ignition[773]: Ignition finished successfully
Feb 13 15:40:12.035723 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 15:40:12.209396 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:40:12.242577 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:40:12.358468 kernel: EXT4-fs (sda9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none.
Feb 13 15:40:12.359574 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:40:12.360422 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:40:12.391624 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:40:12.424515 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (789)
Feb 13 15:40:12.424255 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:40:12.463572 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:40:12.463628 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:40:12.463655 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:40:12.424957 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:40:12.491672 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:40:12.491712 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:40:12.425048 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:40:12.425091 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:40:12.504671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:40:12.528712 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:40:12.550638 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:40:12.681057 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:40:12.691540 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:40:12.701516 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:40:12.711558 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:40:12.851668 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:40:12.855534 systemd-networkd[751]: eth0: Gained IPv6LL
Feb 13 15:40:12.868649 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:40:12.899685 kernel: BTRFS info (device sda6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:40:12.913624 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:40:12.914890 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:40:12.958693 ignition[901]: INFO : Ignition 2.20.0
Feb 13 15:40:12.966525 ignition[901]: INFO : Stage: mount
Feb 13 15:40:12.966525 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:40:12.966525 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:40:12.966525 ignition[901]: INFO : mount: mount passed
Feb 13 15:40:12.966525 ignition[901]: INFO : Ignition finished successfully
Feb 13 15:40:12.962385 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:40:12.969015 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:40:12.987672 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:40:13.024708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:40:13.117717 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (913)
Feb 13 15:40:13.117765 kernel: BTRFS info (device sda6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:40:13.117781 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:40:13.117796 kernel: BTRFS info (device sda6): using free space tree
Feb 13 15:40:13.117822 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 15:40:13.117837 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 15:40:13.120642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:40:13.159140 ignition[930]: INFO : Ignition 2.20.0
Feb 13 15:40:13.159140 ignition[930]: INFO : Stage: files
Feb 13 15:40:13.173531 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:40:13.173531 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:40:13.173531 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:40:13.173531 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:40:13.173531 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:40:13.173531 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:40:13.173531 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:40:13.173531 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:40:13.173531 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:40:13.173531 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 15:40:13.166283 unknown[930]: wrote ssh authorized keys file for user: core
Feb 13 15:40:13.308539 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 15:40:13.539978 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:40:13.556523 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Feb 13 15:40:13.835263 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 15:40:14.151528 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Feb 13 15:40:14.151528 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 15:40:14.190563 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:40:14.190563 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 15:40:14.190563 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 15:40:14.190563 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 15:40:14.190563 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 15:40:14.190563 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:40:14.190563 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:40:14.190563 ignition[930]: INFO : files: files passed
Feb 13 15:40:14.190563 ignition[930]: INFO : Ignition finished successfully
Feb 13 15:40:14.156704 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:40:14.177563 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:40:14.224663 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:40:14.256054 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:40:14.402553 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:40:14.402553 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:40:14.256212 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:40:14.460574 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:40:14.280018 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:40:14.290870 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:40:14.330641 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:40:14.405838 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:40:14.405983 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:40:14.416984 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:40:14.450726 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:40:14.470803 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:40:14.477724 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:40:14.531099 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:40:14.554640 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:40:14.580356 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:40:14.591802 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:40:14.612799 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:40:14.633793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:40:14.634007 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:40:14.666850 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:40:14.687804 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:40:14.705723 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:40:14.725946 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:40:14.744755 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:40:14.763767 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:40:14.784803 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:40:14.805818 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:40:14.823753 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:40:14.844783 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:40:14.863735 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:40:14.864001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:40:14.893826 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:40:14.915813 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:40:14.936668 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:40:14.936807 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:40:14.957765 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:40:15.074730 ignition[983]: INFO : Ignition 2.20.0
Feb 13 15:40:15.074730 ignition[983]: INFO : Stage: umount
Feb 13 15:40:15.074730 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:40:15.074730 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 15:40:15.074730 ignition[983]: INFO : umount: umount passed
Feb 13 15:40:15.074730 ignition[983]: INFO : Ignition finished successfully
Feb 13 15:40:14.957976 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:40:14.982809 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:40:14.983044 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:40:15.004936 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:40:15.005134 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:40:15.031683 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:40:15.066563 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:40:15.066873 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:40:15.093665 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:40:15.140633 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:40:15.140890 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:40:15.161862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:40:15.162053 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:40:15.197932 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:40:15.199320 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:40:15.199486 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:40:15.214169 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:40:15.214287 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:40:15.235996 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:40:15.236126 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:40:15.255927 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:40:15.255993 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:40:15.273626 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:40:15.273708 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:40:15.294671 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:40:15.294761 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:40:15.312634 systemd[1]: Stopped target network.target - Network.
Feb 13 15:40:15.330551 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:40:15.330686 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:40:15.351672 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:40:15.369686 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:40:15.371477 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:40:15.379738 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:40:15.397765 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:40:15.412832 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:40:15.412895 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:40:15.427805 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:40:15.427866 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:40:15.442812 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:40:15.442900 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:40:15.459813 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:40:15.459892 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:40:15.476815 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:40:15.476899 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:40:15.505920 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:40:15.530738 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:40:15.549223 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:40:15.549382 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:40:15.560401 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:40:15.560697 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:40:15.560826 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:40:15.575381 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:40:15.576973 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:40:15.577030 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:40:15.598518 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:40:15.625491 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:40:15.625628 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:40:15.646641 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:40:15.646733 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:40:15.664845 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:40:16.120531 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:40:15.664938 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:40:15.684600 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:40:15.684701 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:40:15.708843 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:40:15.738905 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:40:15.739021 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:40:15.739565 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:40:15.739740 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:40:15.763620 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:40:15.763696 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:40:15.783632 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:40:15.783705 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:40:15.803590 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:40:15.803692 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:40:15.829566 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:40:15.829700 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:40:15.859573 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:40:15.859723 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:40:15.896620 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:40:15.920519 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:40:15.920659 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:40:15.939866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:40:15.939945 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:40:15.962140 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:40:15.962231 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:40:15.962798 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:40:15.962925 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:40:15.982036 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:40:15.982157 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:40:16.001960 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:40:16.016632 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:40:16.062702 systemd[1]: Switching root.
Feb 13 15:40:16.445510 systemd-journald[184]: Journal stopped
Feb 13 15:40:18.938766 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:40:18.938830 kernel: SELinux: policy capability open_perms=1
Feb 13 15:40:18.938853 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:40:18.938870 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:40:18.938888 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:40:18.938906 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:40:18.938927 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:40:18.938945 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:40:18.938968 kernel: audit: type=1403 audit(1739461216.680:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:40:18.938991 systemd[1]: Successfully loaded SELinux policy in 92.585ms.
Feb 13 15:40:18.939013 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.161ms.
Feb 13 15:40:18.939036 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:40:18.939057 systemd[1]: Detected virtualization google.
Feb 13 15:40:18.939077 systemd[1]: Detected architecture x86-64.
Feb 13 15:40:18.939102 systemd[1]: Detected first boot.
Feb 13 15:40:18.939125 systemd[1]: Initializing machine ID from random generator.
Feb 13 15:40:18.939147 zram_generator::config[1026]: No configuration found.
Feb 13 15:40:18.939170 kernel: Guest personality initialized and is inactive
Feb 13 15:40:18.939189 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 15:40:18.939213 kernel: Initialized host personality
Feb 13 15:40:18.939232 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:40:18.939252 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:40:18.939275 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:40:18.939297 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:40:18.939318 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:40:18.939348 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:40:18.939406 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:40:18.939430 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:40:18.939464 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:40:18.939487 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:40:18.939509 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:40:18.939532 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:40:18.939555 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:40:18.939578 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:40:18.939601 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:40:18.939629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:40:18.939660 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:40:18.939684 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:40:18.939707 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:40:18.939729 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:40:18.939757 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:40:18.939779 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:40:18.939802 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:40:18.939827 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:40:18.939850 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:40:18.939871 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:40:18.939894 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:40:18.939918 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:40:18.939940 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:40:18.939962 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:40:18.939984 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:40:18.940010 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:40:18.940032 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:40:18.940054 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:40:18.940078 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:40:18.940104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:40:18.940126 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:40:18.940148 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:40:18.940171 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:40:18.940193 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:40:18.940216 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:40:18.940239 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:40:18.940263 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:40:18.940290 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:40:18.940314 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:40:18.940354 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:40:18.940397 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:40:18.940429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:40:18.940450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:40:18.940471 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:40:18.940493 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:40:18.940515 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:40:18.940543 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:40:18.940566 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:40:18.940586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:40:18.940606 kernel: ACPI: bus type drm_connector registered
Feb 13 15:40:18.940628 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:40:18.940652 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:40:18.940675 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:40:18.940703 kernel: fuse: init (API version 7.39)
Feb 13 15:40:18.940725 kernel: loop: module loaded
Feb 13 15:40:18.940746 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:40:18.940769 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:40:18.940793 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:40:18.940816 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:40:18.940839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:40:18.940899 systemd-journald[1114]: Collecting audit messages is disabled.
Feb 13 15:40:18.940951 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:40:18.940974 systemd-journald[1114]: Journal started
Feb 13 15:40:18.941021 systemd-journald[1114]: Runtime Journal (/run/log/journal/496203a0c4a045cbbf43998a8e39eef4) is 8M, max 148.6M, 140.6M free.
Feb 13 15:40:17.678276 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:40:17.692220 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 15:40:17.692879 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:40:18.964523 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:40:19.004525 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:40:19.034427 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:40:19.057120 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:40:19.057219 systemd[1]: Stopped verity-setup.service.
Feb 13 15:40:19.082404 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:40:19.095432 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:40:19.107181 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:40:19.117809 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:40:19.128768 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:40:19.138777 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:40:19.149837 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:40:19.159718 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:40:19.169894 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:40:19.181924 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:40:19.193882 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:40:19.194199 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:40:19.205909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:40:19.206215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:40:19.218924 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:40:19.219224 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:40:19.229916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:40:19.230201 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:40:19.241910 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:40:19.242210 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:40:19.252850 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:40:19.253134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:40:19.262867 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:40:19.272847 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:40:19.284889 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:40:19.296956 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:40:19.309936 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:40:19.334689 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:40:19.356543 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:40:19.377568 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:40:19.387536 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:40:19.387612 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:40:19.399089 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:40:19.422628 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:40:19.435974 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:40:19.445714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:40:19.450828 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:40:19.462779 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:40:19.473588 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:40:19.480776 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:40:19.490329 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:40:19.506681 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:40:19.521936 systemd-journald[1114]: Time spent on flushing to /var/log/journal/496203a0c4a045cbbf43998a8e39eef4 is 43.985ms for 945 entries.
Feb 13 15:40:19.521936 systemd-journald[1114]: System Journal (/var/log/journal/496203a0c4a045cbbf43998a8e39eef4) is 8M, max 584.8M, 576.8M free.
Feb 13 15:40:19.610664 systemd-journald[1114]: Received client request to flush runtime journal.
Feb 13 15:40:19.610736 kernel: loop0: detected capacity change from 0 to 210664
Feb 13 15:40:19.535324 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:40:19.549866 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:40:19.569091 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:40:19.595490 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:40:19.606744 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:40:19.619208 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:40:19.634129 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:40:19.646138 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:40:19.658106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:40:19.686109 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:40:19.706836 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:40:19.717361 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:40:19.736617 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:40:19.743000 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:40:19.756759 udevadm[1153]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:40:19.785538 kernel: loop1: detected capacity change from 0 to 52152
Feb 13 15:40:19.782387 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:40:19.785088 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:40:19.829833 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Feb 13 15:40:19.829882 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Feb 13 15:40:19.848664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:40:19.875410 kernel: loop2: detected capacity change from 0 to 138176
Feb 13 15:40:20.000533 kernel: loop3: detected capacity change from 0 to 147912
Feb 13 15:40:20.105403 kernel: loop4: detected capacity change from 0 to 210664
Feb 13 15:40:20.159560 kernel: loop5: detected capacity change from 0 to 52152
Feb 13 15:40:20.195438 kernel: loop6: detected capacity change from 0 to 138176
Feb 13 15:40:20.259595 kernel: loop7: detected capacity change from 0 to 147912
Feb 13 15:40:20.318674 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Feb 13 15:40:20.319738 (sd-merge)[1174]: Merged extensions into '/usr'.
Feb 13 15:40:20.327959 systemd[1]: Reload requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:40:20.328408 systemd[1]: Reloading...
Feb 13 15:40:20.489657 zram_generator::config[1201]: No configuration found.
Feb 13 15:40:20.674409 ldconfig[1145]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:40:20.762872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:40:20.898510 systemd[1]: Reloading finished in 569 ms.
Feb 13 15:40:20.915490 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:40:20.927092 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:40:20.938939 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:40:20.959885 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:40:20.968346 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:40:20.988643 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:40:21.017853 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:40:21.018407 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:40:21.021784 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:40:21.022358 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Feb 13 15:40:21.022513 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Feb 13 15:40:21.022556 systemd[1]: Reload requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:40:21.022574 systemd[1]: Reloading...
Feb 13 15:40:21.037746 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:40:21.037766 systemd-tmpfiles[1244]: Skipping /boot
Feb 13 15:40:21.069318 systemd-udevd[1245]: Using default interface naming scheme 'v255'.
Feb 13 15:40:21.081741 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:40:21.082639 systemd-tmpfiles[1244]: Skipping /boot
Feb 13 15:40:21.187397 zram_generator::config[1278]: No configuration found.
Feb 13 15:40:21.401421 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1291)
Feb 13 15:40:21.548398 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Feb 13 15:40:21.584640 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 15:40:21.582207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:40:21.635806 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 15:40:21.635931 kernel: EDAC MC: Ver: 3.0.0
Feb 13 15:40:21.663412 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:40:21.715396 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Feb 13 15:40:21.736943 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:40:21.737070 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 15:40:21.799001 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:40:21.799322 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 15:40:21.810979 systemd[1]: Reloading finished in 787 ms.
Feb 13 15:40:21.826903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:40:21.858460 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:40:21.892920 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:40:21.913850 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:40:21.951619 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Feb 13 15:40:21.961601 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:40:21.966606 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:40:21.982909 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:40:21.994825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:40:22.004652 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:40:22.022668 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:40:22.044224 lvm[1355]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:40:22.045586 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:40:22.060861 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:40:22.082667 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:40:22.094811 augenrules[1375]: No rules
Feb 13 15:40:22.096814 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:40:22.105748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:40:22.111981 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:40:22.123539 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:40:22.130201 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:40:22.150673 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:40:22.172646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:40:22.182625 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:40:22.199625 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:40:22.222671 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:40:22.232556 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:40:22.249425 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:40:22.250010 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:40:22.264074 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:40:22.276103 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:40:22.276681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:40:22.276963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:40:22.277516 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:40:22.277802 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:40:22.278297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:40:22.278596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:40:22.279074 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:40:22.279347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:40:22.286091 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:40:22.286648 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:40:22.293214 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:40:22.304419 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:40:22.309666 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:40:22.312562 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Feb 13 15:40:22.312672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:40:22.312774 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:40:22.317187 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:40:22.334200 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:40:22.329628 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:40:22.329717 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:40:22.330715 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:40:22.393470 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:40:22.399917 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:40:22.416499 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:40:22.430288 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Feb 13 15:40:22.440067 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:40:22.547556 systemd-networkd[1384]: lo: Link UP
Feb 13 15:40:22.547573 systemd-networkd[1384]: lo: Gained carrier
Feb 13 15:40:22.550170 systemd-networkd[1384]: Enumeration completed
Feb 13 15:40:22.551601 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:40:22.552473 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:40:22.552482 systemd-networkd[1384]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:40:22.553073 systemd-networkd[1384]: eth0: Link UP
Feb 13 15:40:22.553080 systemd-networkd[1384]: eth0: Gained carrier
Feb 13 15:40:22.553105 systemd-networkd[1384]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:40:22.561527 systemd-networkd[1384]: eth0: DHCPv4 address 10.128.0.120/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 15:40:22.562279 systemd-resolved[1385]: Positive Trust Anchors:
Feb 13 15:40:22.562291 systemd-resolved[1385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:40:22.562361 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:40:22.569740 systemd-resolved[1385]: Defaulting to hostname 'linux'.
Feb 13 15:40:22.570658 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 15:40:22.588549 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:40:22.588991 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:40:22.589333 systemd[1]: Reached target network.target - Network.
Feb 13 15:40:22.616585 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:40:22.627527 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:40:22.637611 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:40:22.648528 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:40:22.659706 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:40:22.669723 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:40:22.680508 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:40:22.691546 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:40:22.691608 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:40:22.700500 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:40:22.710642 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:40:22.722277 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:40:22.734059 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 15:40:22.745774 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 15:40:22.756521 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 15:40:22.779444 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:40:22.790188 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 15:40:22.802903 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 15:40:22.814744 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:40:22.825461 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:40:22.835505 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:40:22.843599 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:40:22.843657 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:40:22.855532 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:40:22.871626 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:40:22.894535 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:40:22.918872 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:40:22.928800 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:40:22.940512 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:40:22.944870 jq[1438]: false
Feb 13 15:40:22.951655 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:40:22.972621 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:40:22.988512 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 15:40:23.003072 coreos-metadata[1436]: Feb 13 15:40:23.002 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Feb 13 15:40:23.009177 coreos-metadata[1436]: Feb 13 15:40:23.005 INFO Fetch successful
Feb 13 15:40:23.009177 coreos-metadata[1436]: Feb 13 15:40:23.005 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Feb 13 15:40:23.009177 coreos-metadata[1436]: Feb 13 15:40:23.007 INFO Fetch successful
Feb 13 15:40:23.009177 coreos-metadata[1436]: Feb 13 15:40:23.007 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Feb 13 15:40:23.008625 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:40:23.012621 coreos-metadata[1436]: Feb 13 15:40:23.011 INFO Fetch successful
Feb 13 15:40:23.012621 coreos-metadata[1436]: Feb 13 15:40:23.011 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Feb 13 15:40:23.015866 coreos-metadata[1436]: Feb 13 15:40:23.013 INFO Fetch successful
Feb 13 15:40:23.031816 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:40:23.046074 dbus-daemon[1437]: [system] SELinux support is enabled
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found loop4
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found loop5
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found loop6
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found loop7
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found sda
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found sda1
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found sda2
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found sda3
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found usr
Feb 13 15:40:23.050240 extend-filesystems[1439]: Found sda4
Feb 13 15:40:23.183602 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Feb 13 15:40:23.183664 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:23:52 UTC 2025 (1): Starting
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: ----------------------------------------------------
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: corporation. Support and training for ntp-4 are
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: available at https://www.nwtime.org/support
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: ----------------------------------------------------
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: proto: precision = 0.076 usec (-24)
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: basedate set to 2025-02-01
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: Listen normally on 3 eth0 10.128.0.120:123
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: Listen normally on 4 lo [::1]:123
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: bind(21) AF_INET6 fe80::4001:aff:fe80:78%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:78%2#123
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: failed to init interface for address fe80::4001:aff:fe80:78%2
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:40:23.183785 ntpd[1444]: 13 Feb 15:40:23 ntpd[1444]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:40:23.194525 extend-filesystems[1439]: Found sda6
Feb 13 15:40:23.194525 extend-filesystems[1439]: Found sda7
Feb 13 15:40:23.194525 extend-filesystems[1439]: Found sda9
Feb 13 15:40:23.194525 extend-filesystems[1439]: Checking size of /dev/sda9
Feb 13 15:40:23.194525 extend-filesystems[1439]: Resized partition /dev/sda9
Feb 13 15:40:23.056621 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:40:23.051182 dbus-daemon[1437]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1384 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:40:23.264986 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:40:23.264986 extend-filesystems[1465]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Feb 13 15:40:23.264986 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 2
Feb 13 15:40:23.264986 extend-filesystems[1465]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Feb 13 15:40:23.329575 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1277)
Feb 13 15:40:23.077032 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Feb 13 15:40:23.063166 ntpd[1444]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:23:52 UTC 2025 (1): Starting
Feb 13 15:40:23.333775 extend-filesystems[1439]: Resized filesystem in /dev/sda9
Feb 13 15:40:23.078026 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:40:23.063195 ntpd[1444]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:40:23.342716 update_engine[1462]: I20250213 15:40:23.163435 1462 main.cc:92] Flatcar Update Engine starting
Feb 13 15:40:23.342716 update_engine[1462]: I20250213 15:40:23.173355 1462 update_check_scheduler.cc:74] Next update check in 8m59s
Feb 13 15:40:23.088221 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:40:23.063210 ntpd[1444]: ----------------------------------------------------
Feb 13 15:40:23.343503 jq[1467]: true
Feb 13 15:40:23.137545 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:40:23.063224 ntpd[1444]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:40:23.153177 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:40:23.063238 ntpd[1444]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:40:23.184584 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:40:23.063252 ntpd[1444]: corporation. Support and training for ntp-4 are
Feb 13 15:40:23.186438 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:40:23.063266 ntpd[1444]: available at https://www.nwtime.org/support
Feb 13 15:40:23.345942 jq[1473]: true
Feb 13 15:40:23.186986 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:40:23.063279 ntpd[1444]: ----------------------------------------------------
Feb 13 15:40:23.187282 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:40:23.065363 ntpd[1444]: proto: precision = 0.076 usec (-24)
Feb 13 15:40:23.198184 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:40:23.066174 ntpd[1444]: basedate set to 2025-02-01
Feb 13 15:40:23.199567 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:40:23.066200 ntpd[1444]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:40:23.223081 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:40:23.068871 ntpd[1444]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:40:23.223454 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:40:23.068938 ntpd[1444]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:40:23.239431 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (Power Button)
Feb 13 15:40:23.069175 ntpd[1444]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:40:23.239466 systemd-logind[1456]: Watching system buttons on /dev/input/event3 (Sleep Button)
Feb 13 15:40:23.069231 ntpd[1444]: Listen normally on 3 eth0 10.128.0.120:123
Feb 13 15:40:23.239498 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:40:23.069286 ntpd[1444]: Listen normally on 4 lo [::1]:123
Feb 13 15:40:23.241601 systemd-logind[1456]: New seat seat0.
Feb 13 15:40:23.069357 ntpd[1444]: bind(21) AF_INET6 fe80::4001:aff:fe80:78%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:40:23.263078 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:40:23.069412 ntpd[1444]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:78%2#123
Feb 13 15:40:23.331349 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:40:23.069434 ntpd[1444]: failed to init interface for address fe80::4001:aff:fe80:78%2
Feb 13 15:40:23.069476 ntpd[1444]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:40:23.071552 ntpd[1444]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:40:23.071586 ntpd[1444]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:40:23.326003 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 15:40:23.365178 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:40:23.388207 tar[1472]: linux-amd64/helm
Feb 13 15:40:23.420248 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:40:23.457017 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:40:23.457318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:40:23.457612 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:40:23.479836 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:40:23.489803 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:40:23.490059 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:40:23.507735 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:40:23.564695 bash[1506]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:40:23.566554 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:40:23.603944 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:40:23.607925 systemd-networkd[1384]: eth0: Gained IPv6LL
Feb 13 15:40:23.623740 systemd[1]: Starting sshkeys.service...
Feb 13 15:40:23.631230 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:40:23.643155 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:40:23.660664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:40:23.676743 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:40:23.696386 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Feb 13 15:40:23.741227 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:40:23.761862 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:40:23.774395 init.sh[1513]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Feb 13 15:40:23.774395 init.sh[1513]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Feb 13 15:40:23.774395 init.sh[1513]: + /usr/bin/google_instance_setup
Feb 13 15:40:23.887830 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:40:24.048687 coreos-metadata[1514]: Feb 13 15:40:24.048 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Feb 13 15:40:24.061877 coreos-metadata[1514]: Feb 13 15:40:24.061 INFO Fetch failed with 404: resource not found
Feb 13 15:40:24.062041 coreos-metadata[1514]: Feb 13 15:40:24.061 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Feb 13 15:40:24.070951 coreos-metadata[1514]: Feb 13 15:40:24.070 INFO Fetch successful
Feb 13 15:40:24.071332 coreos-metadata[1514]: Feb 13 15:40:24.071 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Feb 13 15:40:24.076395 coreos-metadata[1514]: Feb 13 15:40:24.074 INFO Fetch failed with 404: resource not found
Feb 13 15:40:24.076395 coreos-metadata[1514]: Feb 13 15:40:24.074 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Feb 13 15:40:24.081386 coreos-metadata[1514]: Feb 13 15:40:24.081 INFO Fetch failed with 404: resource not found
Feb 13 15:40:24.081488 coreos-metadata[1514]: Feb 13 15:40:24.081 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Feb 13 15:40:24.087435 coreos-metadata[1514]: Feb 13 15:40:24.087 INFO Fetch successful
Feb 13 15:40:24.096755 unknown[1514]: wrote ssh authorized keys file for user: core
Feb 13 15:40:24.169790 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:40:24.170854 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:40:24.184816 systemd[1]: Finished sshkeys.service.
Feb 13 15:40:24.187988 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:40:24.200828 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:40:24.205570 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:40:24.219532 dbus-daemon[1437]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1501 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:40:24.232810 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:40:24.261604 locksmithd[1505]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:40:24.262636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:40:24.284168 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:40:24.302929 systemd[1]: Started sshd@0-10.128.0.120:22-139.178.68.195:38460.service - OpenSSH per-connection server daemon (139.178.68.195:38460).
Feb 13 15:40:24.342041 polkitd[1544]: Started polkitd version 121
Feb 13 15:40:24.373037 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:40:24.375799 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:40:24.380544 polkitd[1544]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:40:24.380663 polkitd[1544]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:40:24.384437 polkitd[1544]: Finished loading, compiling and executing 2 rules
Feb 13 15:40:24.395827 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:40:24.398470 dbus-daemon[1437]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:40:24.400588 polkitd[1544]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:40:24.405813 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:40:24.472162 systemd-hostnamed[1501]: Hostname set to (transient)
Feb 13 15:40:24.477236 systemd-resolved[1385]: System hostname changed to 'ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal'.
Feb 13 15:40:24.480852 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:40:24.503933 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:40:24.526568 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:40:24.538498 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:40:24.578096 containerd[1474]: time="2025-02-13T15:40:24.577925950Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:40:24.689054 containerd[1474]: time="2025-02-13T15:40:24.688760740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:40:24.695354 containerd[1474]: time="2025-02-13T15:40:24.693661674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:40:24.695354 containerd[1474]: time="2025-02-13T15:40:24.693717718Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:40:24.695354 containerd[1474]: time="2025-02-13T15:40:24.693748431Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:40:24.695354 containerd[1474]: time="2025-02-13T15:40:24.693939066Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:40:24.695354 containerd[1474]: time="2025-02-13T15:40:24.693965205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:40:24.695354 containerd[1474]: time="2025-02-13T15:40:24.694053706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:40:24.695354 containerd[1474]: time="2025-02-13T15:40:24.694073597Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:40:24.695939 containerd[1474]: time="2025-02-13T15:40:24.694362414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:40:24.696049 containerd[1474]: time="2025-02-13T15:40:24.696023604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:40:24.696146 containerd[1474]: time="2025-02-13T15:40:24.696125312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:40:24.699393 containerd[1474]: time="2025-02-13T15:40:24.698605876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:40:24.699393 containerd[1474]: time="2025-02-13T15:40:24.699285658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:40:24.700748 containerd[1474]: time="2025-02-13T15:40:24.700713842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:40:24.701294 containerd[1474]: time="2025-02-13T15:40:24.701260500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:40:24.701497 containerd[1474]: time="2025-02-13T15:40:24.701471290Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:40:24.701728 containerd[1474]: time="2025-02-13T15:40:24.701704413Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:40:24.702942 containerd[1474]: time="2025-02-13T15:40:24.702914191Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:40:24.712080 containerd[1474]: time="2025-02-13T15:40:24.712032033Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:40:24.715345 containerd[1474]: time="2025-02-13T15:40:24.712628173Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:40:24.715345 containerd[1474]: time="2025-02-13T15:40:24.714127168Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:40:24.715345 containerd[1474]: time="2025-02-13T15:40:24.714172170Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:40:24.715345 containerd[1474]: time="2025-02-13T15:40:24.714201595Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:40:24.715345 containerd[1474]: time="2025-02-13T15:40:24.714464033Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:40:24.715345 containerd[1474]: time="2025-02-13T15:40:24.715187817Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:40:24.715849 containerd[1474]: time="2025-02-13T15:40:24.715819410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.715966276Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716066917Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716091622Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716133846Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716156505Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716199656Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716227336Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716250225Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716288777Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716309682Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716357903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716408646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716451276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.716523 containerd[1474]: time="2025-02-13T15:40:24.716475761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.717825 containerd[1474]: time="2025-02-13T15:40:24.717169118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.717825 containerd[1474]: time="2025-02-13T15:40:24.717213247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.717825 containerd[1474]: time="2025-02-13T15:40:24.717257378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.717825 containerd[1474]: time="2025-02-13T15:40:24.717281151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.717825 containerd[1474]: time="2025-02-13T15:40:24.717302754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:40:24.717825 containerd[1474]: time="2025-02-13T15:40:24.717347916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1 Feb 13 15:40:24.717825 containerd[1474]: time="2025-02-13T15:40:24.717527340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:40:24.717825 containerd[1474]: time="2025-02-13T15:40:24.717559130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.717580137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718345829Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718516313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718540137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718558157Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718640288Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718671551Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718690779Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718711817Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718728892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718769735Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718789821Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:40:24.719676 containerd[1474]: time="2025-02-13T15:40:24.718808767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:40:24.722327 systemd[1]: Started containerd.service - containerd container runtime. 
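[annotation] The containerd lines above are logfmt-style records: space-separated key=value pairs where values with spaces are double-quoted (`time="..." level=info msg="..." type=...`). A minimal stdlib-only sketch of extracting those fields — this parser is illustrative, not containerd's own code:

```python
import re

# Matches one logfmt pair: key= followed by either a double-quoted value
# (with backslash escapes, as in the containerd msg="..." fields) or a
# bare token (as in level=info, type=io.containerd.snapshotter.v1).
_PAIR = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_logfmt(line):
    """Return a dict of key -> value for one logfmt record."""
    out = {}
    for key, quoted, bare in _PAIR.findall(line):
        val = bare if bare else quoted
        out[key] = val.replace('\\"', '"')
    return out

rec = parse_logfmt('time="2025-02-13T15:40:24.694362414Z" level=info '
                   'msg="loading plugin" type=io.containerd.snapshotter.v1')
```

Handy for grepping which snapshotter plugins were loaded versus skipped in the run above.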
Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.719291176Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.719411948Z" level=info msg="Connect containerd service" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.719590900Z" level=info msg="using legacy CRI server" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.719609363Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.719813517Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.720904798Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721075844Z" level=info msg="Start subscribing containerd event" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721141391Z" level=info msg="Start recovering state" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721238978Z" level=info msg="Start event monitor" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721275859Z" level=info msg="Start snapshots syncer" Feb 
13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721290643Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721303379Z" level=info msg="Start streaming server" Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721327326Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721424884Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:40:24.722891 containerd[1474]: time="2025-02-13T15:40:24.721518666Z" level=info msg="containerd successfully booted in 0.149954s" Feb 13 15:40:24.742556 sshd[1549]: Accepted publickey for core from 139.178.68.195 port 38460 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:24.748295 sshd-session[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:24.773108 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:40:24.791106 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:40:24.829294 systemd-logind[1456]: New session 1 of user core. Feb 13 15:40:24.852808 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:40:24.878211 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:40:24.920087 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:40:24.933489 systemd-logind[1456]: New session c1 of user core. Feb 13 15:40:25.151468 tar[1472]: linux-amd64/LICENSE Feb 13 15:40:25.151468 tar[1472]: linux-amd64/README.md Feb 13 15:40:25.178242 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 15:40:25.286768 instance-setup[1517]: INFO Running google_set_multiqueue. 
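[annotation] The CRI startup above logged `failed to load cni during init ... no network config found in /etc/cni/net.d`; that error clears once a network configuration file exists in that directory. A minimal illustrative conflist, assuming the standard `bridge` and `host-local` CNI plugins are installed under `/opt/cni/bin` (the `NetworkPluginBinDir` shown in the config dump above); the network name and subnet here are placeholders, not values from this system:

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

On a kubeadm-managed node this file is normally dropped in by the cluster's CNI add-on rather than written by hand.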
Feb 13 15:40:25.312628 instance-setup[1517]: INFO Set channels for eth0 to 2. Feb 13 15:40:25.313161 systemd[1572]: Queued start job for default target default.target. Feb 13 15:40:25.320221 systemd[1572]: Created slice app.slice - User Application Slice. Feb 13 15:40:25.318541 instance-setup[1517]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 13 15:40:25.320266 systemd[1572]: Reached target paths.target - Paths. Feb 13 15:40:25.320916 systemd[1572]: Reached target timers.target - Timers. Feb 13 15:40:25.323469 instance-setup[1517]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 13 15:40:25.323850 instance-setup[1517]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 13 15:40:25.325597 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:40:25.328359 instance-setup[1517]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 13 15:40:25.328492 instance-setup[1517]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 13 15:40:25.332673 instance-setup[1517]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 13 15:40:25.332868 instance-setup[1517]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 13 15:40:25.335027 instance-setup[1517]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 13 15:40:25.347866 instance-setup[1517]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 15:40:25.354519 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:40:25.354736 systemd[1572]: Reached target sockets.target - Sockets. Feb 13 15:40:25.354820 systemd[1572]: Reached target basic.target - Basic System. Feb 13 15:40:25.354898 systemd[1572]: Reached target default.target - Main User Target. Feb 13 15:40:25.354952 systemd[1572]: Startup finished in 401ms. Feb 13 15:40:25.354956 systemd[1]: Started user@500.service - User Manager for UID 500. 
Feb 13 15:40:25.358208 instance-setup[1517]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 15:40:25.360274 instance-setup[1517]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 15:40:25.360332 instance-setup[1517]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 15:40:25.372622 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:40:25.387867 init.sh[1513]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 15:40:25.556245 startup-script[1613]: INFO Starting startup scripts. Feb 13 15:40:25.567935 startup-script[1613]: INFO No startup scripts found in metadata. Feb 13 15:40:25.568021 startup-script[1613]: INFO Finished running startup scripts. Feb 13 15:40:25.604879 init.sh[1513]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 15:40:25.604879 init.sh[1513]: + daemon_pids=() Feb 13 15:40:25.604879 init.sh[1513]: + for d in accounts clock_skew network Feb 13 15:40:25.604879 init.sh[1513]: + daemon_pids+=($!) Feb 13 15:40:25.604879 init.sh[1513]: + for d in accounts clock_skew network Feb 13 15:40:25.605200 init.sh[1620]: + /usr/bin/google_clock_skew_daemon Feb 13 15:40:25.609230 init.sh[1513]: + daemon_pids+=($!) Feb 13 15:40:25.609230 init.sh[1513]: + for d in accounts clock_skew network Feb 13 15:40:25.609230 init.sh[1513]: + daemon_pids+=($!) Feb 13 15:40:25.609230 init.sh[1513]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 15:40:25.609230 init.sh[1513]: + /usr/bin/systemd-notify --ready Feb 13 15:40:25.609495 init.sh[1619]: + /usr/bin/google_accounts_daemon Feb 13 15:40:25.609815 init.sh[1621]: + /usr/bin/google_network_daemon Feb 13 15:40:25.624962 systemd[1]: Started sshd@1-10.128.0.120:22-139.178.68.195:38474.service - OpenSSH per-connection server daemon (139.178.68.195:38474). Feb 13 15:40:25.643495 systemd[1]: Started oem-gce.service - GCE Linux Agent. 
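[annotation] `google_set_multiqueue` above writes CPU bitmasks to `/sys/class/net/eth0/queues/tx-*/xps_cpus`: `Queue 0 XPS=1` pins tx-0 to CPU 0 and `Queue 1 XPS=2` pins tx-1 to CPU 1, since bit N of the hex mask selects CPU N. A small sketch of how such a mask is formed; the helper is illustrative, not part of the GCE guest agent:

```python
def xps_mask(cpus):
    """Hex cpumask as written to xps_cpus: bit N set selects CPU N."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, 'x')

# CPU 0 -> "1" and CPU 1 -> "2", matching the Queue 0/Queue 1 values logged above.
```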
Feb 13 15:40:25.667688 init.sh[1513]: + wait -n 1619 1620 1621 Feb 13 15:40:25.986820 sshd[1623]: Accepted publickey for core from 139.178.68.195 port 38474 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:25.988336 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:26.004445 systemd-logind[1456]: New session 2 of user core. Feb 13 15:40:26.010680 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:40:26.018888 google-networking[1621]: INFO Starting Google Networking daemon. Feb 13 15:40:26.063803 ntpd[1444]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:78%2]:123 Feb 13 15:40:26.064514 ntpd[1444]: 13 Feb 15:40:26 ntpd[1444]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:78%2]:123 Feb 13 15:40:26.071518 google-clock-skew[1620]: INFO Starting Google Clock Skew daemon. Feb 13 15:40:26.081834 google-clock-skew[1620]: INFO Clock drift token has changed: 0. Feb 13 15:40:26.000343 systemd-resolved[1385]: Clock change detected. Flushing caches. Feb 13 15:40:26.016913 systemd-journald[1114]: Time jumped backwards, rotating. Feb 13 15:40:26.001949 google-clock-skew[1620]: INFO Synced system time with hardware clock. Feb 13 15:40:26.038542 groupadd[1634]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 15:40:26.043805 groupadd[1634]: group added to /etc/gshadow: name=google-sudoers Feb 13 15:40:26.095067 groupadd[1634]: new group: name=google-sudoers, GID=1000 Feb 13 15:40:26.102001 sshd[1631]: Connection closed by 139.178.68.195 port 38474 Feb 13 15:40:26.105634 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:26.116412 systemd[1]: sshd@1-10.128.0.120:22-139.178.68.195:38474.service: Deactivated successfully. Feb 13 15:40:26.120534 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:40:26.122413 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 15:40:26.124636 systemd-logind[1456]: Removed session 2. Feb 13 15:40:26.133560 google-accounts[1619]: INFO Starting Google Accounts daemon. Feb 13 15:40:26.148105 google-accounts[1619]: WARNING OS Login not installed. Feb 13 15:40:26.150234 google-accounts[1619]: INFO Creating a new user account for 0. Feb 13 15:40:26.156137 init.sh[1648]: useradd: invalid user name '0': use --badname to ignore Feb 13 15:40:26.156756 google-accounts[1619]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 13 15:40:26.181437 systemd[1]: Started sshd@2-10.128.0.120:22-139.178.68.195:38486.service - OpenSSH per-connection server daemon (139.178.68.195:38486). Feb 13 15:40:26.203533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:40:26.217898 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:40:26.221945 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:40:26.228553 systemd[1]: Startup finished in 1.056s (kernel) + 8.900s (initrd) + 9.740s (userspace) = 19.698s. Feb 13 15:40:26.499322 sshd[1654]: Accepted publickey for core from 139.178.68.195 port 38486 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:26.499896 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:26.507938 systemd-logind[1456]: New session 3 of user core. Feb 13 15:40:26.513519 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:40:26.713047 sshd[1666]: Connection closed by 139.178.68.195 port 38486 Feb 13 15:40:26.713918 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:26.720008 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. 
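[annotation] `useradd` rejected the name `0` above (exit status 3, `invalid user name`) because Linux user names may not be purely numeric — a numeric name would be ambiguous with a UID wherever either is accepted. shadow-utils' default check is roughly the pattern below; this regex is an approximation for illustration, not useradd's exact source:

```python
import re

# Approximation of useradd's default name check: must start with a lowercase
# letter or underscore, then letters/digits/underscore/hyphen, optionally a
# trailing '$' (Samba machine accounts). '0' fails at the first character.
_NAME = re.compile(r'^[a-z_][a-z0-9_-]*\$?$')

def is_valid_username(name):
    return bool(_NAME.fullmatch(name))
```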
Feb 13 15:40:26.721342 systemd[1]: sshd@2-10.128.0.120:22-139.178.68.195:38486.service: Deactivated successfully. Feb 13 15:40:26.724035 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:40:26.725769 systemd-logind[1456]: Removed session 3. Feb 13 15:40:27.132218 kubelet[1655]: E0213 15:40:27.132131 1655 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:40:27.135459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:40:27.135731 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:40:27.136315 systemd[1]: kubelet.service: Consumed 1.216s CPU time, 246.2M memory peak. Feb 13 15:40:36.776715 systemd[1]: Started sshd@3-10.128.0.120:22-139.178.68.195:49942.service - OpenSSH per-connection server daemon (139.178.68.195:49942). Feb 13 15:40:37.069339 sshd[1675]: Accepted publickey for core from 139.178.68.195 port 49942 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:37.071122 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:37.078051 systemd-logind[1456]: New session 4 of user core. Feb 13 15:40:37.085512 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:40:37.241777 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:40:37.254862 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
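[annotation] The kubelet failure above (and its scheduled restarts later in this log) is the stock failure mode on a node that has not yet joined a cluster: `/var/lib/kubelet/config.yaml` is generated by `kubeadm init` or `kubeadm join`, so until one of those runs, kubelet exits with status 1 and systemd keeps restarting it. A minimal sketch of the same existence check, for illustration only:

```python
import os

def kubelet_config_state(path='/var/lib/kubelet/config.yaml'):
    """Mirror the failing open() in the kubelet log: report whether the
    config file kubeadm would generate is present at the given path."""
    return 'present' if os.path.isfile(path) else 'missing'
```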
Feb 13 15:40:37.284309 sshd[1677]: Connection closed by 139.178.68.195 port 49942 Feb 13 15:40:37.284557 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:37.289936 systemd[1]: sshd@3-10.128.0.120:22-139.178.68.195:49942.service: Deactivated successfully. Feb 13 15:40:37.292580 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:40:37.294523 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:40:37.295957 systemd-logind[1456]: Removed session 4. Feb 13 15:40:37.345713 systemd[1]: Started sshd@4-10.128.0.120:22-139.178.68.195:49954.service - OpenSSH per-connection server daemon (139.178.68.195:49954). Feb 13 15:40:37.541499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:40:37.558066 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:40:37.619873 kubelet[1692]: E0213 15:40:37.619827 1692 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:40:37.624067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:40:37.624326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:40:37.624847 systemd[1]: kubelet.service: Consumed 188ms CPU time, 97.7M memory peak. Feb 13 15:40:37.640987 sshd[1686]: Accepted publickey for core from 139.178.68.195 port 49954 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:37.642707 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:37.649787 systemd-logind[1456]: New session 5 of user core. 
Feb 13 15:40:37.659508 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:40:37.849093 sshd[1701]: Connection closed by 139.178.68.195 port 49954 Feb 13 15:40:37.849921 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:37.854236 systemd[1]: sshd@4-10.128.0.120:22-139.178.68.195:49954.service: Deactivated successfully. Feb 13 15:40:37.856712 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:40:37.858667 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:40:37.860002 systemd-logind[1456]: Removed session 5. Feb 13 15:40:37.906684 systemd[1]: Started sshd@5-10.128.0.120:22-139.178.68.195:49968.service - OpenSSH per-connection server daemon (139.178.68.195:49968). Feb 13 15:40:38.195792 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 49968 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:38.197554 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:38.203446 systemd-logind[1456]: New session 6 of user core. Feb 13 15:40:38.210547 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:40:38.408784 sshd[1709]: Connection closed by 139.178.68.195 port 49968 Feb 13 15:40:38.409629 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Feb 13 15:40:38.414828 systemd[1]: sshd@5-10.128.0.120:22-139.178.68.195:49968.service: Deactivated successfully. Feb 13 15:40:38.417385 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:40:38.418409 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:40:38.419872 systemd-logind[1456]: Removed session 6. Feb 13 15:40:38.470697 systemd[1]: Started sshd@6-10.128.0.120:22-139.178.68.195:49970.service - OpenSSH per-connection server daemon (139.178.68.195:49970). 
Feb 13 15:40:38.755679 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 49970 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:40:38.757491 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:40:38.763562 systemd-logind[1456]: New session 7 of user core. Feb 13 15:40:38.767500 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:40:38.948837 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:40:38.949381 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:40:39.428956 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 15:40:39.429225 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 15:40:39.866648 dockerd[1734]: time="2025-02-13T15:40:39.866572936Z" level=info msg="Starting up" Feb 13 15:40:39.996392 dockerd[1734]: time="2025-02-13T15:40:39.996337413Z" level=info msg="Loading containers: start." Feb 13 15:40:40.219307 kernel: Initializing XFRM netlink socket Feb 13 15:40:40.334895 systemd-networkd[1384]: docker0: Link UP Feb 13 15:40:40.372145 dockerd[1734]: time="2025-02-13T15:40:40.372078324Z" level=info msg="Loading containers: done." 
Feb 13 15:40:40.391036 dockerd[1734]: time="2025-02-13T15:40:40.390968167Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 15:40:40.391227 dockerd[1734]: time="2025-02-13T15:40:40.391096268Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 15:40:40.391311 dockerd[1734]: time="2025-02-13T15:40:40.391260160Z" level=info msg="Daemon has completed initialization" Feb 13 15:40:40.431374 dockerd[1734]: time="2025-02-13T15:40:40.431184962Z" level=info msg="API listen on /run/docker.sock" Feb 13 15:40:40.431708 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 15:40:41.409185 containerd[1474]: time="2025-02-13T15:40:41.408337628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 15:40:41.965548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount624122222.mount: Deactivated successfully. 
Feb 13 15:40:43.641131 containerd[1474]: time="2025-02-13T15:40:43.641049721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:43.642726 containerd[1474]: time="2025-02-13T15:40:43.642661313Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32684842"
Feb 13 15:40:43.643954 containerd[1474]: time="2025-02-13T15:40:43.643876476Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:43.647800 containerd[1474]: time="2025-02-13T15:40:43.647734821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:43.649495 containerd[1474]: time="2025-02-13T15:40:43.649099033Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 2.240704491s"
Feb 13 15:40:43.649495 containerd[1474]: time="2025-02-13T15:40:43.649161225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\""
Feb 13 15:40:43.680530 containerd[1474]: time="2025-02-13T15:40:43.680483112Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\""
Feb 13 15:40:45.448547 containerd[1474]: time="2025-02-13T15:40:45.448470981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:45.451892 containerd[1474]: time="2025-02-13T15:40:45.451164234Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29613479"
Feb 13 15:40:45.455845 containerd[1474]: time="2025-02-13T15:40:45.455797879Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:45.460222 containerd[1474]: time="2025-02-13T15:40:45.460155008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:45.461822 containerd[1474]: time="2025-02-13T15:40:45.461608638Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.781057284s"
Feb 13 15:40:45.461822 containerd[1474]: time="2025-02-13T15:40:45.461656975Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\""
Feb 13 15:40:45.491120 containerd[1474]: time="2025-02-13T15:40:45.491074183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\""
Feb 13 15:40:46.575882 containerd[1474]: time="2025-02-13T15:40:46.575813328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:46.577557 containerd[1474]: time="2025-02-13T15:40:46.577487893Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17784046"
Feb 13 15:40:46.578433 containerd[1474]: time="2025-02-13T15:40:46.578357036Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:46.582049 containerd[1474]: time="2025-02-13T15:40:46.581969713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:46.583566 containerd[1474]: time="2025-02-13T15:40:46.583411060Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.092229232s"
Feb 13 15:40:46.583566 containerd[1474]: time="2025-02-13T15:40:46.583454396Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\""
Feb 13 15:40:46.613298 containerd[1474]: time="2025-02-13T15:40:46.613239209Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 15:40:47.704220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873800929.mount: Deactivated successfully.
Feb 13 15:40:47.707076 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 15:40:47.716958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:40:47.979541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:40:47.992177 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:40:48.081496 kubelet[2016]: E0213 15:40:48.080779 2016 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:40:48.084769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:40:48.085061 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:40:48.087366 systemd[1]: kubelet.service: Consumed 213ms CPU time, 94.3M memory peak.
Feb 13 15:40:48.402501 containerd[1474]: time="2025-02-13T15:40:48.402416393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:48.403678 containerd[1474]: time="2025-02-13T15:40:48.403604565Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29059753"
Feb 13 15:40:48.405091 containerd[1474]: time="2025-02-13T15:40:48.405023967Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:48.407816 containerd[1474]: time="2025-02-13T15:40:48.407771402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:48.409068 containerd[1474]: time="2025-02-13T15:40:48.408592277Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 1.79525039s"
Feb 13 15:40:48.409068 containerd[1474]: time="2025-02-13T15:40:48.408648982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\""
Feb 13 15:40:48.438659 containerd[1474]: time="2025-02-13T15:40:48.438606750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 15:40:48.823349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862165379.mount: Deactivated successfully.
Feb 13 15:40:49.841312 containerd[1474]: time="2025-02-13T15:40:49.841229818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:49.842843 containerd[1474]: time="2025-02-13T15:40:49.842784491Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Feb 13 15:40:49.844045 containerd[1474]: time="2025-02-13T15:40:49.843970638Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:49.847412 containerd[1474]: time="2025-02-13T15:40:49.847341920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:49.848996 containerd[1474]: time="2025-02-13T15:40:49.848815451Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.410157914s"
Feb 13 15:40:49.848996 containerd[1474]: time="2025-02-13T15:40:49.848859821Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 15:40:49.880042 containerd[1474]: time="2025-02-13T15:40:49.879973522Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Feb 13 15:40:50.258185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3831678393.mount: Deactivated successfully.
Feb 13 15:40:50.265764 containerd[1474]: time="2025-02-13T15:40:50.265698046Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:50.267086 containerd[1474]: time="2025-02-13T15:40:50.267034457Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188"
Feb 13 15:40:50.268205 containerd[1474]: time="2025-02-13T15:40:50.268126135Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:50.272651 containerd[1474]: time="2025-02-13T15:40:50.272571497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:50.273859 containerd[1474]: time="2025-02-13T15:40:50.273688229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 393.660927ms"
Feb 13 15:40:50.273859 containerd[1474]: time="2025-02-13T15:40:50.273733467Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Feb 13 15:40:50.305599 containerd[1474]: time="2025-02-13T15:40:50.305526923Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Feb 13 15:40:50.686246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195158055.mount: Deactivated successfully.
Feb 13 15:40:52.816226 containerd[1474]: time="2025-02-13T15:40:52.816157054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:52.817816 containerd[1474]: time="2025-02-13T15:40:52.817750555Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061"
Feb 13 15:40:52.819181 containerd[1474]: time="2025-02-13T15:40:52.819116011Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:52.827410 containerd[1474]: time="2025-02-13T15:40:52.827362326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:40:52.830765 containerd[1474]: time="2025-02-13T15:40:52.829595207Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.524015784s"
Feb 13 15:40:52.830765 containerd[1474]: time="2025-02-13T15:40:52.829640907Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Feb 13 15:40:54.395627 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:40:57.311483 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:40:57.311797 systemd[1]: kubelet.service: Consumed 213ms CPU time, 94.3M memory peak.
Feb 13 15:40:57.324665 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:40:57.360752 systemd[1]: Reload requested from client PID 2196 ('systemctl') (unit session-7.scope)...
Feb 13 15:40:57.360779 systemd[1]: Reloading...
Feb 13 15:40:57.514323 zram_generator::config[2238]: No configuration found.
Feb 13 15:40:57.684968 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:40:57.826429 systemd[1]: Reloading finished in 464 ms.
Feb 13 15:40:57.894525 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:40:57.903231 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:40:57.905170 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:40:57.905524 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:40:57.905594 systemd[1]: kubelet.service: Consumed 132ms CPU time, 83.5M memory peak.
Feb 13 15:40:57.911638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:40:58.172379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:40:58.185907 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:40:58.245311 kubelet[2294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:40:58.245311 kubelet[2294]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:40:58.245311 kubelet[2294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:40:58.245845 kubelet[2294]: I0213 15:40:58.245385 2294 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:40:58.757539 kubelet[2294]: I0213 15:40:58.757496 2294 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 15:40:58.757539 kubelet[2294]: I0213 15:40:58.757531 2294 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:40:58.757868 kubelet[2294]: I0213 15:40:58.757831 2294 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 15:40:58.790502 kubelet[2294]: I0213 15:40:58.789312 2294 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:40:58.790502 kubelet[2294]: E0213 15:40:58.790416 2294 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.806870 kubelet[2294]: I0213 15:40:58.806823 2294 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 15:40:58.807314 kubelet[2294]: I0213 15:40:58.807239 2294 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:40:58.807572 kubelet[2294]: I0213 15:40:58.807316 2294 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:40:58.808642 kubelet[2294]: I0213 15:40:58.808598 2294 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:40:58.808642 kubelet[2294]: I0213 15:40:58.808635 2294 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:40:58.808864 kubelet[2294]: I0213 15:40:58.808826 2294 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:40:58.810081 kubelet[2294]: I0213 15:40:58.809941 2294 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 15:40:58.810081 kubelet[2294]: I0213 15:40:58.809970 2294 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:40:58.810081 kubelet[2294]: I0213 15:40:58.810005 2294 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:40:58.810081 kubelet[2294]: I0213 15:40:58.810029 2294 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:40:58.814590 kubelet[2294]: W0213 15:40:58.813392 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.814590 kubelet[2294]: E0213 15:40:58.813496 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.816570 kubelet[2294]: W0213 15:40:58.816392 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.816570 kubelet[2294]: E0213 15:40:58.816455 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.816822 kubelet[2294]: I0213 15:40:58.816801 2294 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:40:58.818937 kubelet[2294]: I0213 15:40:58.818911 2294 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:40:58.819298 kubelet[2294]: W0213 15:40:58.819097 2294 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:40:58.820047 kubelet[2294]: I0213 15:40:58.820003 2294 server.go:1264] "Started kubelet"
Feb 13 15:40:58.825966 kubelet[2294]: I0213 15:40:58.825853 2294 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:40:58.827580 kubelet[2294]: I0213 15:40:58.827248 2294 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:40:58.829988 kubelet[2294]: I0213 15:40:58.829410 2294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:40:58.829988 kubelet[2294]: I0213 15:40:58.829804 2294 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:40:58.830709 kubelet[2294]: E0213 15:40:58.830501 2294 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal.1823cecc80849181 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,UID:ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 15:40:58.819965313 +0000 UTC m=+0.628268954,LastTimestamp:2025-02-13 15:40:58.819965313 +0000 UTC m=+0.628268954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,}"
Feb 13 15:40:58.831250 kubelet[2294]: I0213 15:40:58.831209 2294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:40:58.838039 kubelet[2294]: E0213 15:40:58.838010 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" not found"
Feb 13 15:40:58.839840 kubelet[2294]: I0213 15:40:58.838196 2294 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:40:58.839840 kubelet[2294]: I0213 15:40:58.838852 2294 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:40:58.839840 kubelet[2294]: I0213 15:40:58.839116 2294 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:40:58.839840 kubelet[2294]: E0213 15:40:58.839477 2294 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:40:58.840911 kubelet[2294]: W0213 15:40:58.840780 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.840911 kubelet[2294]: E0213 15:40:58.840850 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.842138 kubelet[2294]: E0213 15:40:58.841612 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.120:6443: connect: connection refused" interval="200ms"
Feb 13 15:40:58.845261 kubelet[2294]: I0213 15:40:58.845231 2294 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:40:58.845261 kubelet[2294]: I0213 15:40:58.845260 2294 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:40:58.845427 kubelet[2294]: I0213 15:40:58.845399 2294 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:40:58.871597 kubelet[2294]: I0213 15:40:58.871516 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:40:58.874333 kubelet[2294]: I0213 15:40:58.874302 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:40:58.874560 kubelet[2294]: I0213 15:40:58.874534 2294 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:40:58.874635 kubelet[2294]: I0213 15:40:58.874573 2294 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:40:58.874681 kubelet[2294]: E0213 15:40:58.874636 2294 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:40:58.877623 kubelet[2294]: W0213 15:40:58.877448 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.877821 kubelet[2294]: E0213 15:40:58.877801 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused
Feb 13 15:40:58.883615 kubelet[2294]: I0213 15:40:58.883593 2294 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:40:58.883802 kubelet[2294]: I0213 15:40:58.883740 2294 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:40:58.883802 kubelet[2294]: I0213 15:40:58.883773 2294 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:40:58.886066 kubelet[2294]: I0213 15:40:58.886027 2294 policy_none.go:49] "None policy: Start"
Feb 13 15:40:58.887071 kubelet[2294]: I0213 15:40:58.886873 2294 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:40:58.887071 kubelet[2294]: I0213 15:40:58.886903 2294 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:40:58.894812 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:40:58.916047 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:40:58.920900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:40:58.930315 kubelet[2294]: I0213 15:40:58.930261 2294 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:40:58.930595 kubelet[2294]: I0213 15:40:58.930538 2294 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:40:58.930753 kubelet[2294]: I0213 15:40:58.930722 2294 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:40:58.934329 kubelet[2294]: E0213 15:40:58.934074 2294 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" not found"
Feb 13 15:40:58.947773 kubelet[2294]: I0213 15:40:58.947738 2294 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:58.948239 kubelet[2294]: E0213 15:40:58.948192 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.120:6443/api/v1/nodes\": dial tcp 10.128.0.120:6443: connect: connection refused" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:58.975461 kubelet[2294]: I0213 15:40:58.975364 2294 topology_manager.go:215] "Topology Admit Handler" podUID="81dead4a5eb467d9045b8cf664f965ae" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:58.981831 kubelet[2294]: I0213 15:40:58.981751 2294 topology_manager.go:215] "Topology Admit Handler" podUID="039eabe65f6d367c1fc07a301100a870" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.005442 kubelet[2294]: I0213 15:40:59.005101 2294 topology_manager.go:215] "Topology Admit Handler" podUID="075e36480e769ca0020c4a3c01c1037e" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.012450 systemd[1]: Created slice kubepods-burstable-pod81dead4a5eb467d9045b8cf664f965ae.slice - libcontainer container kubepods-burstable-pod81dead4a5eb467d9045b8cf664f965ae.slice.
Feb 13 15:40:59.027483 systemd[1]: Created slice kubepods-burstable-pod039eabe65f6d367c1fc07a301100a870.slice - libcontainer container kubepods-burstable-pod039eabe65f6d367c1fc07a301100a870.slice.
Feb 13 15:40:59.038599 systemd[1]: Created slice kubepods-burstable-pod075e36480e769ca0020c4a3c01c1037e.slice - libcontainer container kubepods-burstable-pod075e36480e769ca0020c4a3c01c1037e.slice.
Feb 13 15:40:59.040298 kubelet[2294]: I0213 15:40:59.040232 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81dead4a5eb467d9045b8cf664f965ae-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"81dead4a5eb467d9045b8cf664f965ae\") " pod="kube-system/kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.040431 kubelet[2294]: I0213 15:40:59.040305 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.040431 kubelet[2294]: I0213 15:40:59.040343 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/075e36480e769ca0020c4a3c01c1037e-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"075e36480e769ca0020c4a3c01c1037e\") " pod="kube-system/kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.040431 kubelet[2294]: I0213 15:40:59.040373 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81dead4a5eb467d9045b8cf664f965ae-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"81dead4a5eb467d9045b8cf664f965ae\") " pod="kube-system/kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.040431 kubelet[2294]: I0213 15:40:59.040403 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81dead4a5eb467d9045b8cf664f965ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"81dead4a5eb467d9045b8cf664f965ae\") " pod="kube-system/kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.040624 kubelet[2294]: I0213 15:40:59.040430 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.040624 kubelet[2294]: I0213 15:40:59.040475 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.040624 kubelet[2294]: I0213 15:40:59.040515 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.040624 kubelet[2294]: I0213 15:40:59.040551 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.042441 kubelet[2294]: E0213 15:40:59.042396 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.120:6443: connect: connection refused" interval="400ms"
Feb 13 15:40:59.154999 kubelet[2294]: I0213 15:40:59.154965 2294 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal"
Feb 13 15:40:59.155435 kubelet[2294]: E0213 15:40:59.155388 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.120:6443/api/v1/nodes\": dial tcp 10.128.0.120:6443: connect: connection refused" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:40:59.322577 containerd[1474]: time="2025-02-13T15:40:59.322338240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,Uid:81dead4a5eb467d9045b8cf664f965ae,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:59.336337 containerd[1474]: time="2025-02-13T15:40:59.336249243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,Uid:039eabe65f6d367c1fc07a301100a870,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:59.343293 containerd[1474]: time="2025-02-13T15:40:59.343233564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,Uid:075e36480e769ca0020c4a3c01c1037e,Namespace:kube-system,Attempt:0,}" Feb 13 15:40:59.443441 kubelet[2294]: E0213 15:40:59.443372 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.120:6443: connect: connection refused" interval="800ms" Feb 13 15:40:59.563458 kubelet[2294]: I0213 15:40:59.563406 2294 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:40:59.563856 kubelet[2294]: E0213 15:40:59.563805 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.120:6443/api/v1/nodes\": dial tcp 10.128.0.120:6443: connect: connection refused" 
node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:40:59.699249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1523302710.mount: Deactivated successfully. Feb 13 15:40:59.707325 containerd[1474]: time="2025-02-13T15:40:59.706932657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:59.711185 containerd[1474]: time="2025-02-13T15:40:59.710996750Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 15:40:59.712310 containerd[1474]: time="2025-02-13T15:40:59.712239131Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:59.713387 containerd[1474]: time="2025-02-13T15:40:59.713339943Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:59.715593 containerd[1474]: time="2025-02-13T15:40:59.715530075Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:59.717153 containerd[1474]: time="2025-02-13T15:40:59.717021068Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:40:59.718466 containerd[1474]: time="2025-02-13T15:40:59.718230538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:40:59.719599 containerd[1474]: time="2025-02-13T15:40:59.719489974Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:40:59.721886 containerd[1474]: time="2025-02-13T15:40:59.721267385Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 384.87366ms" Feb 13 15:40:59.722739 containerd[1474]: time="2025-02-13T15:40:59.722681929Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 400.217715ms" Feb 13 15:40:59.726613 containerd[1474]: time="2025-02-13T15:40:59.726561031Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 383.188194ms" Feb 13 15:40:59.849746 kubelet[2294]: W0213 15:40:59.849652 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused Feb 13 15:40:59.849746 kubelet[2294]: E0213 15:40:59.849721 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://10.128.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused Feb 13 15:40:59.910642 containerd[1474]: time="2025-02-13T15:40:59.907225336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:59.910642 containerd[1474]: time="2025-02-13T15:40:59.907332605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:59.910642 containerd[1474]: time="2025-02-13T15:40:59.907356646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:59.912861 containerd[1474]: time="2025-02-13T15:40:59.912804322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:59.921222 containerd[1474]: time="2025-02-13T15:40:59.920173837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:59.921222 containerd[1474]: time="2025-02-13T15:40:59.921128049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:59.921222 containerd[1474]: time="2025-02-13T15:40:59.921151165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:59.924076 containerd[1474]: time="2025-02-13T15:40:59.922330077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:59.924691 containerd[1474]: time="2025-02-13T15:40:59.924347597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:40:59.924691 containerd[1474]: time="2025-02-13T15:40:59.924422392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:40:59.924691 containerd[1474]: time="2025-02-13T15:40:59.924442374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:59.924691 containerd[1474]: time="2025-02-13T15:40:59.924580852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:40:59.962515 systemd[1]: Started cri-containerd-814bc3c290398c2da7ac917e87ae91fa12a693dc9109ff95ae6d97807dd9a726.scope - libcontainer container 814bc3c290398c2da7ac917e87ae91fa12a693dc9109ff95ae6d97807dd9a726. Feb 13 15:40:59.971444 systemd[1]: Started cri-containerd-5de98f8957b6c3a8bafc5212119e6273aaeedfd1bfebeae5a66ab0bed5b3fc09.scope - libcontainer container 5de98f8957b6c3a8bafc5212119e6273aaeedfd1bfebeae5a66ab0bed5b3fc09. Feb 13 15:40:59.981635 systemd[1]: Started cri-containerd-15e5bd038d7b25dba7cbc4ec268f4b79aa09fa739598f02575dff21e20340fb4.scope - libcontainer container 15e5bd038d7b25dba7cbc4ec268f4b79aa09fa739598f02575dff21e20340fb4. 
Feb 13 15:41:00.066503 containerd[1474]: time="2025-02-13T15:41:00.066103211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,Uid:039eabe65f6d367c1fc07a301100a870,Namespace:kube-system,Attempt:0,} returns sandbox id \"15e5bd038d7b25dba7cbc4ec268f4b79aa09fa739598f02575dff21e20340fb4\"" Feb 13 15:41:00.073544 kubelet[2294]: E0213 15:41:00.072492 2294 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flat" Feb 13 15:41:00.077250 containerd[1474]: time="2025-02-13T15:41:00.076953980Z" level=info msg="CreateContainer within sandbox \"15e5bd038d7b25dba7cbc4ec268f4b79aa09fa739598f02575dff21e20340fb4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 15:41:00.077796 containerd[1474]: time="2025-02-13T15:41:00.077680250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,Uid:81dead4a5eb467d9045b8cf664f965ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"814bc3c290398c2da7ac917e87ae91fa12a693dc9109ff95ae6d97807dd9a726\"" Feb 13 15:41:00.080640 kubelet[2294]: E0213 15:41:00.080584 2294 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-21291" Feb 13 15:41:00.083077 containerd[1474]: time="2025-02-13T15:41:00.082903932Z" level=info msg="CreateContainer within sandbox \"814bc3c290398c2da7ac917e87ae91fa12a693dc9109ff95ae6d97807dd9a726\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 15:41:00.106356 containerd[1474]: 
time="2025-02-13T15:41:00.105887853Z" level=info msg="CreateContainer within sandbox \"15e5bd038d7b25dba7cbc4ec268f4b79aa09fa739598f02575dff21e20340fb4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2800c941fa99f843c300bc8faef700444554ed3e82e0d0c2dcd9e2c39f6b09f4\"" Feb 13 15:41:00.107655 containerd[1474]: time="2025-02-13T15:41:00.107506393Z" level=info msg="StartContainer for \"2800c941fa99f843c300bc8faef700444554ed3e82e0d0c2dcd9e2c39f6b09f4\"" Feb 13 15:41:00.116742 containerd[1474]: time="2025-02-13T15:41:00.116697255Z" level=info msg="CreateContainer within sandbox \"814bc3c290398c2da7ac917e87ae91fa12a693dc9109ff95ae6d97807dd9a726\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8dac95abe6e6f4a4ad215b4c8cd0b757dc2afa63dd49a41d05c7c004b9c6d5c7\"" Feb 13 15:41:00.117785 containerd[1474]: time="2025-02-13T15:41:00.117602393Z" level=info msg="StartContainer for \"8dac95abe6e6f4a4ad215b4c8cd0b757dc2afa63dd49a41d05c7c004b9c6d5c7\"" Feb 13 15:41:00.132502 containerd[1474]: time="2025-02-13T15:41:00.132401313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal,Uid:075e36480e769ca0020c4a3c01c1037e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5de98f8957b6c3a8bafc5212119e6273aaeedfd1bfebeae5a66ab0bed5b3fc09\"" Feb 13 15:41:00.137513 kubelet[2294]: W0213 15:41:00.137316 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused Feb 13 15:41:00.137513 kubelet[2294]: E0213 15:41:00.137393 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.128.0.120:6443: connect: connection refused Feb 13 15:41:00.138619 kubelet[2294]: E0213 15:41:00.138303 2294 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-21291" Feb 13 15:41:00.144120 containerd[1474]: time="2025-02-13T15:41:00.143865136Z" level=info msg="CreateContainer within sandbox \"5de98f8957b6c3a8bafc5212119e6273aaeedfd1bfebeae5a66ab0bed5b3fc09\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 15:41:00.165530 systemd[1]: Started cri-containerd-2800c941fa99f843c300bc8faef700444554ed3e82e0d0c2dcd9e2c39f6b09f4.scope - libcontainer container 2800c941fa99f843c300bc8faef700444554ed3e82e0d0c2dcd9e2c39f6b09f4. Feb 13 15:41:00.182505 systemd[1]: Started cri-containerd-8dac95abe6e6f4a4ad215b4c8cd0b757dc2afa63dd49a41d05c7c004b9c6d5c7.scope - libcontainer container 8dac95abe6e6f4a4ad215b4c8cd0b757dc2afa63dd49a41d05c7c004b9c6d5c7. 
Feb 13 15:41:00.185293 containerd[1474]: time="2025-02-13T15:41:00.184753362Z" level=info msg="CreateContainer within sandbox \"5de98f8957b6c3a8bafc5212119e6273aaeedfd1bfebeae5a66ab0bed5b3fc09\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"60577013772e02675b1924a28c40416db95370c1d7f2dfaaee4f5d84dd6b8b91\"" Feb 13 15:41:00.185687 containerd[1474]: time="2025-02-13T15:41:00.185652247Z" level=info msg="StartContainer for \"60577013772e02675b1924a28c40416db95370c1d7f2dfaaee4f5d84dd6b8b91\"" Feb 13 15:41:00.235685 kubelet[2294]: W0213 15:41:00.235458 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused Feb 13 15:41:00.235685 kubelet[2294]: E0213 15:41:00.235547 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused Feb 13 15:41:00.238105 systemd[1]: Started cri-containerd-60577013772e02675b1924a28c40416db95370c1d7f2dfaaee4f5d84dd6b8b91.scope - libcontainer container 60577013772e02675b1924a28c40416db95370c1d7f2dfaaee4f5d84dd6b8b91. 
Feb 13 15:41:00.244658 kubelet[2294]: E0213 15:41:00.244579 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.120:6443: connect: connection refused" interval="1.6s" Feb 13 15:41:00.283162 kubelet[2294]: W0213 15:41:00.283080 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused Feb 13 15:41:00.284484 kubelet[2294]: E0213 15:41:00.284386 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.120:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.120:6443: connect: connection refused Feb 13 15:41:00.304842 containerd[1474]: time="2025-02-13T15:41:00.304784280Z" level=info msg="StartContainer for \"2800c941fa99f843c300bc8faef700444554ed3e82e0d0c2dcd9e2c39f6b09f4\" returns successfully" Feb 13 15:41:00.310050 containerd[1474]: time="2025-02-13T15:41:00.309991076Z" level=info msg="StartContainer for \"8dac95abe6e6f4a4ad215b4c8cd0b757dc2afa63dd49a41d05c7c004b9c6d5c7\" returns successfully" Feb 13 15:41:00.376236 kubelet[2294]: I0213 15:41:00.376190 2294 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:00.376709 kubelet[2294]: E0213 15:41:00.376666 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.120:6443/api/v1/nodes\": dial tcp 10.128.0.120:6443: connect: connection refused" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:00.389481 containerd[1474]: time="2025-02-13T15:41:00.389431872Z" level=info msg="StartContainer for 
\"60577013772e02675b1924a28c40416db95370c1d7f2dfaaee4f5d84dd6b8b91\" returns successfully" Feb 13 15:41:01.984546 kubelet[2294]: I0213 15:41:01.984501 2294 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:03.412708 kubelet[2294]: E0213 15:41:03.412654 2294 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:03.547263 kubelet[2294]: I0213 15:41:03.547206 2294 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:03.819332 kubelet[2294]: I0213 15:41:03.819182 2294 apiserver.go:52] "Watching apiserver" Feb 13 15:41:03.839853 kubelet[2294]: I0213 15:41:03.839797 2294 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:41:05.139579 systemd[1]: Reload requested from client PID 2567 ('systemctl') (unit session-7.scope)... Feb 13 15:41:05.139604 systemd[1]: Reloading... Feb 13 15:41:05.297321 zram_generator::config[2622]: No configuration found. Feb 13 15:41:05.445331 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:41:05.620744 systemd[1]: Reloading finished in 480 ms. Feb 13 15:41:05.661980 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:05.663919 kubelet[2294]: I0213 15:41:05.663390 2294 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:41:05.678151 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 13 15:41:05.678591 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:41:05.678756 systemd[1]: kubelet.service: Consumed 1.097s CPU time, 115.1M memory peak. Feb 13 15:41:05.687714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:41:05.958340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:41:05.970844 (kubelet)[2660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:41:06.039448 kubelet[2660]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:41:06.039448 kubelet[2660]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:41:06.039448 kubelet[2660]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:41:06.039989 kubelet[2660]: I0213 15:41:06.039567 2660 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:41:06.048140 kubelet[2660]: I0213 15:41:06.048081 2660 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:41:06.048581 kubelet[2660]: I0213 15:41:06.048494 2660 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:41:06.048838 kubelet[2660]: I0213 15:41:06.048809 2660 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:41:06.051041 kubelet[2660]: I0213 15:41:06.051011 2660 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 15:41:06.052839 kubelet[2660]: I0213 15:41:06.052630 2660 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:41:06.064881 kubelet[2660]: I0213 15:41:06.064849 2660 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:41:06.065293 kubelet[2660]: I0213 15:41:06.065193 2660 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:41:06.065553 kubelet[2660]: I0213 15:41:06.065236 2660 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:41:06.065718 kubelet[2660]: I0213 15:41:06.065564 2660 topology_manager.go:138] "Creating 
topology manager with none policy" Feb 13 15:41:06.065718 kubelet[2660]: I0213 15:41:06.065581 2660 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:41:06.065718 kubelet[2660]: I0213 15:41:06.065646 2660 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:41:06.065874 kubelet[2660]: I0213 15:41:06.065805 2660 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:41:06.065874 kubelet[2660]: I0213 15:41:06.065824 2660 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:41:06.067260 kubelet[2660]: I0213 15:41:06.066018 2660 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:41:06.067260 kubelet[2660]: I0213 15:41:06.066055 2660 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:41:06.069792 kubelet[2660]: I0213 15:41:06.069744 2660 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:41:06.070021 kubelet[2660]: I0213 15:41:06.070000 2660 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:41:06.070601 kubelet[2660]: I0213 15:41:06.070578 2660 server.go:1264] "Started kubelet" Feb 13 15:41:06.075779 kubelet[2660]: I0213 15:41:06.075751 2660 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:41:06.084603 kubelet[2660]: I0213 15:41:06.084551 2660 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:41:06.086919 kubelet[2660]: I0213 15:41:06.086885 2660 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:41:06.087144 kubelet[2660]: I0213 15:41:06.087124 2660 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:41:06.087801 kubelet[2660]: I0213 15:41:06.087768 2660 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:41:06.088018 kubelet[2660]: I0213 15:41:06.087988 2660 reconciler.go:26] "Reconciler: start to 
sync state" Feb 13 15:41:06.090187 kubelet[2660]: I0213 15:41:06.089476 2660 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:41:06.091764 kubelet[2660]: I0213 15:41:06.091740 2660 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:41:06.109628 kubelet[2660]: I0213 15:41:06.109576 2660 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:41:06.110013 kubelet[2660]: I0213 15:41:06.109989 2660 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:41:06.126428 kubelet[2660]: I0213 15:41:06.126338 2660 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:41:06.135547 kubelet[2660]: I0213 15:41:06.135502 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:41:06.137848 kubelet[2660]: I0213 15:41:06.137733 2660 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:41:06.137848 kubelet[2660]: I0213 15:41:06.137765 2660 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:41:06.137848 kubelet[2660]: I0213 15:41:06.137791 2660 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:41:06.138579 kubelet[2660]: E0213 15:41:06.138170 2660 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:41:06.151721 kubelet[2660]: E0213 15:41:06.151395 2660 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:41:06.199489 kubelet[2660]: I0213 15:41:06.199118 2660 kubelet_node_status.go:73] "Attempting to register node" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.214920 kubelet[2660]: I0213 15:41:06.213750 2660 kubelet_node_status.go:112] "Node was previously registered" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.215700 kubelet[2660]: I0213 15:41:06.215253 2660 kubelet_node_status.go:76] "Successfully registered node" node="ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.239195 kubelet[2660]: E0213 15:41:06.239154 2660 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 15:41:06.240623 kubelet[2660]: I0213 15:41:06.240579 2660 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:41:06.240623 kubelet[2660]: I0213 15:41:06.240600 2660 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:41:06.240623 kubelet[2660]: I0213 15:41:06.240629 2660 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:41:06.241095 kubelet[2660]: I0213 15:41:06.240839 2660 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 15:41:06.241095 kubelet[2660]: I0213 15:41:06.240855 2660 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 15:41:06.241095 kubelet[2660]: I0213 15:41:06.240887 2660 policy_none.go:49] "None policy: Start" Feb 13 15:41:06.241919 kubelet[2660]: I0213 15:41:06.241805 2660 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:41:06.241919 kubelet[2660]: I0213 15:41:06.241844 2660 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:41:06.242304 kubelet[2660]: I0213 15:41:06.242137 2660 state_mem.go:75] "Updated machine memory state" Feb 13 15:41:06.250180 kubelet[2660]: I0213 15:41:06.250065 2660 manager.go:479] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:41:06.251004 kubelet[2660]: I0213 15:41:06.250465 2660 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:41:06.251004 kubelet[2660]: I0213 15:41:06.250755 2660 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:41:06.439422 kubelet[2660]: I0213 15:41:06.439351 2660 topology_manager.go:215] "Topology Admit Handler" podUID="81dead4a5eb467d9045b8cf664f965ae" podNamespace="kube-system" podName="kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.439611 kubelet[2660]: I0213 15:41:06.439501 2660 topology_manager.go:215] "Topology Admit Handler" podUID="039eabe65f6d367c1fc07a301100a870" podNamespace="kube-system" podName="kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.439611 kubelet[2660]: I0213 15:41:06.439607 2660 topology_manager.go:215] "Topology Admit Handler" podUID="075e36480e769ca0020c4a3c01c1037e" podNamespace="kube-system" podName="kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.452107 kubelet[2660]: W0213 15:41:06.452063 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:41:06.454971 kubelet[2660]: W0213 15:41:06.454851 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 15:41:06.455407 kubelet[2660]: W0213 15:41:06.455102 2660 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 
15:41:06.490146 kubelet[2660]: I0213 15:41:06.489692 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.490146 kubelet[2660]: I0213 15:41:06.489770 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/81dead4a5eb467d9045b8cf664f965ae-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"81dead4a5eb467d9045b8cf664f965ae\") " pod="kube-system/kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.490146 kubelet[2660]: I0213 15:41:06.489805 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.490146 kubelet[2660]: I0213 15:41:06.489835 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 
15:41:06.490702 kubelet[2660]: I0213 15:41:06.489860 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.490702 kubelet[2660]: I0213 15:41:06.489886 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/81dead4a5eb467d9045b8cf664f965ae-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"81dead4a5eb467d9045b8cf664f965ae\") " pod="kube-system/kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.490702 kubelet[2660]: I0213 15:41:06.489944 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/81dead4a5eb467d9045b8cf664f965ae-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"81dead4a5eb467d9045b8cf664f965ae\") " pod="kube-system/kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:06.490702 kubelet[2660]: I0213 15:41:06.489985 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/039eabe65f6d367c1fc07a301100a870-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"039eabe65f6d367c1fc07a301100a870\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 
15:41:06.490918 kubelet[2660]: I0213 15:41:06.490021 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/075e36480e769ca0020c4a3c01c1037e-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal\" (UID: \"075e36480e769ca0020c4a3c01c1037e\") " pod="kube-system/kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" Feb 13 15:41:07.066838 kubelet[2660]: I0213 15:41:07.066787 2660 apiserver.go:52] "Watching apiserver" Feb 13 15:41:07.088659 kubelet[2660]: I0213 15:41:07.088588 2660 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:41:07.223549 kubelet[2660]: I0213 15:41:07.222843 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" podStartSLOduration=1.222819681 podStartE2EDuration="1.222819681s" podCreationTimestamp="2025-02-13 15:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:41:07.222156212 +0000 UTC m=+1.243972198" watchObservedRunningTime="2025-02-13 15:41:07.222819681 +0000 UTC m=+1.244635664" Feb 13 15:41:07.249553 kubelet[2660]: I0213 15:41:07.248532 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" podStartSLOduration=1.248506408 podStartE2EDuration="1.248506408s" podCreationTimestamp="2025-02-13 15:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:41:07.234216279 +0000 UTC m=+1.256032287" watchObservedRunningTime="2025-02-13 15:41:07.248506408 +0000 UTC m=+1.270322399" Feb 13 15:41:07.249553 kubelet[2660]: I0213 
15:41:07.248676 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-1-e2ec6a93160059e7f66f.c.flatcar-212911.internal" podStartSLOduration=1.248667377 podStartE2EDuration="1.248667377s" podCreationTimestamp="2025-02-13 15:41:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:41:07.248262988 +0000 UTC m=+1.270078976" watchObservedRunningTime="2025-02-13 15:41:07.248667377 +0000 UTC m=+1.270483363" Feb 13 15:41:07.385899 sudo[1718]: pam_unix(sudo:session): session closed for user root Feb 13 15:41:07.427818 sshd[1717]: Connection closed by 139.178.68.195 port 49970 Feb 13 15:41:07.428796 sshd-session[1715]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:07.433545 systemd[1]: sshd@6-10.128.0.120:22-139.178.68.195:49970.service: Deactivated successfully. Feb 13 15:41:07.436763 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:41:07.437131 systemd[1]: session-7.scope: Consumed 6.362s CPU time, 257.3M memory peak. Feb 13 15:41:07.439941 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:41:07.441793 systemd-logind[1456]: Removed session 7. Feb 13 15:41:08.531415 update_engine[1462]: I20250213 15:41:08.531327 1462 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:41:08.602409 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2727) Feb 13 15:41:08.761300 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2730) Feb 13 15:41:08.922932 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2730) Feb 13 15:41:21.308075 kubelet[2660]: I0213 15:41:21.308030 2660 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 15:41:21.308873 containerd[1474]: time="2025-02-13T15:41:21.308628002Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:41:21.309325 kubelet[2660]: I0213 15:41:21.309182 2660 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 15:41:21.644795 kubelet[2660]: I0213 15:41:21.643747 2660 topology_manager.go:215] "Topology Admit Handler" podUID="906aaf07-6516-4383-934b-0daba3c0e49a" podNamespace="kube-system" podName="kube-proxy-nfzhd" Feb 13 15:41:21.660483 systemd[1]: Created slice kubepods-besteffort-pod906aaf07_6516_4383_934b_0daba3c0e49a.slice - libcontainer container kubepods-besteffort-pod906aaf07_6516_4383_934b_0daba3c0e49a.slice. Feb 13 15:41:21.663204 kubelet[2660]: I0213 15:41:21.663157 2660 topology_manager.go:215] "Topology Admit Handler" podUID="3c102168-4c3d-4e83-81a0-afd4ae598123" podNamespace="kube-flannel" podName="kube-flannel-ds-dml4q" Feb 13 15:41:21.685681 systemd[1]: Created slice kubepods-burstable-pod3c102168_4c3d_4e83_81a0_afd4ae598123.slice - libcontainer container kubepods-burstable-pod3c102168_4c3d_4e83_81a0_afd4ae598123.slice. 
Feb 13 15:41:21.691912 kubelet[2660]: I0213 15:41:21.691775 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/3c102168-4c3d-4e83-81a0-afd4ae598123-cni\") pod \"kube-flannel-ds-dml4q\" (UID: \"3c102168-4c3d-4e83-81a0-afd4ae598123\") " pod="kube-flannel/kube-flannel-ds-dml4q" Feb 13 15:41:21.691912 kubelet[2660]: I0213 15:41:21.691889 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/3c102168-4c3d-4e83-81a0-afd4ae598123-flannel-cfg\") pod \"kube-flannel-ds-dml4q\" (UID: \"3c102168-4c3d-4e83-81a0-afd4ae598123\") " pod="kube-flannel/kube-flannel-ds-dml4q" Feb 13 15:41:21.692507 kubelet[2660]: I0213 15:41:21.691922 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c102168-4c3d-4e83-81a0-afd4ae598123-xtables-lock\") pod \"kube-flannel-ds-dml4q\" (UID: \"3c102168-4c3d-4e83-81a0-afd4ae598123\") " pod="kube-flannel/kube-flannel-ds-dml4q" Feb 13 15:41:21.692507 kubelet[2660]: I0213 15:41:21.691955 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/906aaf07-6516-4383-934b-0daba3c0e49a-xtables-lock\") pod \"kube-proxy-nfzhd\" (UID: \"906aaf07-6516-4383-934b-0daba3c0e49a\") " pod="kube-system/kube-proxy-nfzhd" Feb 13 15:41:21.692507 kubelet[2660]: I0213 15:41:21.691984 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/906aaf07-6516-4383-934b-0daba3c0e49a-lib-modules\") pod \"kube-proxy-nfzhd\" (UID: \"906aaf07-6516-4383-934b-0daba3c0e49a\") " pod="kube-system/kube-proxy-nfzhd" Feb 13 15:41:21.692507 kubelet[2660]: I0213 15:41:21.692014 2660 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szx49\" (UniqueName: \"kubernetes.io/projected/3c102168-4c3d-4e83-81a0-afd4ae598123-kube-api-access-szx49\") pod \"kube-flannel-ds-dml4q\" (UID: \"3c102168-4c3d-4e83-81a0-afd4ae598123\") " pod="kube-flannel/kube-flannel-ds-dml4q" Feb 13 15:41:21.692507 kubelet[2660]: I0213 15:41:21.692041 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3c102168-4c3d-4e83-81a0-afd4ae598123-run\") pod \"kube-flannel-ds-dml4q\" (UID: \"3c102168-4c3d-4e83-81a0-afd4ae598123\") " pod="kube-flannel/kube-flannel-ds-dml4q" Feb 13 15:41:21.692751 kubelet[2660]: I0213 15:41:21.692069 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/3c102168-4c3d-4e83-81a0-afd4ae598123-cni-plugin\") pod \"kube-flannel-ds-dml4q\" (UID: \"3c102168-4c3d-4e83-81a0-afd4ae598123\") " pod="kube-flannel/kube-flannel-ds-dml4q" Feb 13 15:41:21.692751 kubelet[2660]: I0213 15:41:21.692094 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/906aaf07-6516-4383-934b-0daba3c0e49a-kube-proxy\") pod \"kube-proxy-nfzhd\" (UID: \"906aaf07-6516-4383-934b-0daba3c0e49a\") " pod="kube-system/kube-proxy-nfzhd" Feb 13 15:41:21.692751 kubelet[2660]: I0213 15:41:21.692118 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bst8\" (UniqueName: \"kubernetes.io/projected/906aaf07-6516-4383-934b-0daba3c0e49a-kube-api-access-2bst8\") pod \"kube-proxy-nfzhd\" (UID: \"906aaf07-6516-4383-934b-0daba3c0e49a\") " pod="kube-system/kube-proxy-nfzhd" Feb 13 15:41:21.801802 kubelet[2660]: E0213 15:41:21.801324 2660 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap 
"kube-root-ca.crt" not found Feb 13 15:41:21.801802 kubelet[2660]: E0213 15:41:21.801383 2660 projected.go:200] Error preparing data for projected volume kube-api-access-szx49 for pod kube-flannel/kube-flannel-ds-dml4q: configmap "kube-root-ca.crt" not found Feb 13 15:41:21.801802 kubelet[2660]: E0213 15:41:21.801486 2660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c102168-4c3d-4e83-81a0-afd4ae598123-kube-api-access-szx49 podName:3c102168-4c3d-4e83-81a0-afd4ae598123 nodeName:}" failed. No retries permitted until 2025-02-13 15:41:22.301459556 +0000 UTC m=+16.323275537 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-szx49" (UniqueName: "kubernetes.io/projected/3c102168-4c3d-4e83-81a0-afd4ae598123-kube-api-access-szx49") pod "kube-flannel-ds-dml4q" (UID: "3c102168-4c3d-4e83-81a0-afd4ae598123") : configmap "kube-root-ca.crt" not found Feb 13 15:41:21.803124 kubelet[2660]: E0213 15:41:21.802650 2660 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Feb 13 15:41:21.803124 kubelet[2660]: E0213 15:41:21.802680 2660 projected.go:200] Error preparing data for projected volume kube-api-access-2bst8 for pod kube-system/kube-proxy-nfzhd: configmap "kube-root-ca.crt" not found Feb 13 15:41:21.803124 kubelet[2660]: E0213 15:41:21.802757 2660 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/906aaf07-6516-4383-934b-0daba3c0e49a-kube-api-access-2bst8 podName:906aaf07-6516-4383-934b-0daba3c0e49a nodeName:}" failed. No retries permitted until 2025-02-13 15:41:22.302737505 +0000 UTC m=+16.324553488 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2bst8" (UniqueName: "kubernetes.io/projected/906aaf07-6516-4383-934b-0daba3c0e49a-kube-api-access-2bst8") pod "kube-proxy-nfzhd" (UID: "906aaf07-6516-4383-934b-0daba3c0e49a") : configmap "kube-root-ca.crt" not found Feb 13 15:41:22.580746 containerd[1474]: time="2025-02-13T15:41:22.580686739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfzhd,Uid:906aaf07-6516-4383-934b-0daba3c0e49a,Namespace:kube-system,Attempt:0,}" Feb 13 15:41:22.590634 containerd[1474]: time="2025-02-13T15:41:22.590556064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dml4q,Uid:3c102168-4c3d-4e83-81a0-afd4ae598123,Namespace:kube-flannel,Attempt:0,}" Feb 13 15:41:22.632437 containerd[1474]: time="2025-02-13T15:41:22.631666011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:22.632437 containerd[1474]: time="2025-02-13T15:41:22.631736999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:22.632437 containerd[1474]: time="2025-02-13T15:41:22.631761419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:22.632437 containerd[1474]: time="2025-02-13T15:41:22.631885520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:22.675710 containerd[1474]: time="2025-02-13T15:41:22.675289022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:22.676122 containerd[1474]: time="2025-02-13T15:41:22.676025206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:22.677305 containerd[1474]: time="2025-02-13T15:41:22.677014523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:22.677532 systemd[1]: Started cri-containerd-6bcca2a0ad8d4fec077bf230ba38a71bf6219cd42df1eddafb7b3dc222d047a9.scope - libcontainer container 6bcca2a0ad8d4fec077bf230ba38a71bf6219cd42df1eddafb7b3dc222d047a9. Feb 13 15:41:22.678011 containerd[1474]: time="2025-02-13T15:41:22.677610588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:22.710506 systemd[1]: Started cri-containerd-2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9.scope - libcontainer container 2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9. Feb 13 15:41:22.742356 containerd[1474]: time="2025-02-13T15:41:22.742213632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nfzhd,Uid:906aaf07-6516-4383-934b-0daba3c0e49a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bcca2a0ad8d4fec077bf230ba38a71bf6219cd42df1eddafb7b3dc222d047a9\"" Feb 13 15:41:22.750954 containerd[1474]: time="2025-02-13T15:41:22.750854591Z" level=info msg="CreateContainer within sandbox \"6bcca2a0ad8d4fec077bf230ba38a71bf6219cd42df1eddafb7b3dc222d047a9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:41:22.774310 containerd[1474]: time="2025-02-13T15:41:22.773674593Z" level=info msg="CreateContainer within sandbox \"6bcca2a0ad8d4fec077bf230ba38a71bf6219cd42df1eddafb7b3dc222d047a9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab249e4aa0247e8fb3ea3beb8c38f12369a480086baa89b4a34d13214af97cff\"" Feb 13 15:41:22.777901 containerd[1474]: time="2025-02-13T15:41:22.777825882Z" level=info msg="StartContainer for 
\"ab249e4aa0247e8fb3ea3beb8c38f12369a480086baa89b4a34d13214af97cff\"" Feb 13 15:41:22.796536 containerd[1474]: time="2025-02-13T15:41:22.796479647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dml4q,Uid:3c102168-4c3d-4e83-81a0-afd4ae598123,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9\"" Feb 13 15:41:22.800664 containerd[1474]: time="2025-02-13T15:41:22.800623382Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 15:41:22.820504 systemd[1]: Started cri-containerd-ab249e4aa0247e8fb3ea3beb8c38f12369a480086baa89b4a34d13214af97cff.scope - libcontainer container ab249e4aa0247e8fb3ea3beb8c38f12369a480086baa89b4a34d13214af97cff. Feb 13 15:41:22.862251 containerd[1474]: time="2025-02-13T15:41:22.862204623Z" level=info msg="StartContainer for \"ab249e4aa0247e8fb3ea3beb8c38f12369a480086baa89b4a34d13214af97cff\" returns successfully" Feb 13 15:41:23.239439 kubelet[2660]: I0213 15:41:23.239258 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nfzhd" podStartSLOduration=2.23920897 podStartE2EDuration="2.23920897s" podCreationTimestamp="2025-02-13 15:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:41:23.237662353 +0000 UTC m=+17.259478340" watchObservedRunningTime="2025-02-13 15:41:23.23920897 +0000 UTC m=+17.261024957" Feb 13 15:41:24.205122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1220782575.mount: Deactivated successfully. 
Feb 13 15:41:24.260602 containerd[1474]: time="2025-02-13T15:41:24.260531118Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:24.261950 containerd[1474]: time="2025-02-13T15:41:24.261881059Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Feb 13 15:41:24.263352 containerd[1474]: time="2025-02-13T15:41:24.263314503Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:24.266332 containerd[1474]: time="2025-02-13T15:41:24.266225782Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:24.267671 containerd[1474]: time="2025-02-13T15:41:24.267328996Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.466261062s" Feb 13 15:41:24.267671 containerd[1474]: time="2025-02-13T15:41:24.267372251Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Feb 13 15:41:24.270303 containerd[1474]: time="2025-02-13T15:41:24.270238239Z" level=info msg="CreateContainer within sandbox \"2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 15:41:24.290827 containerd[1474]: 
time="2025-02-13T15:41:24.290780938Z" level=info msg="CreateContainer within sandbox \"2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f\"" Feb 13 15:41:24.292545 containerd[1474]: time="2025-02-13T15:41:24.291529997Z" level=info msg="StartContainer for \"b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f\"" Feb 13 15:41:24.328488 systemd[1]: Started cri-containerd-b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f.scope - libcontainer container b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f. Feb 13 15:41:24.361906 systemd[1]: cri-containerd-b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f.scope: Deactivated successfully. Feb 13 15:41:24.362929 containerd[1474]: time="2025-02-13T15:41:24.362512168Z" level=info msg="StartContainer for \"b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f\" returns successfully" Feb 13 15:41:24.408907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f-rootfs.mount: Deactivated successfully. 
Feb 13 15:41:24.433915 containerd[1474]: time="2025-02-13T15:41:24.433832509Z" level=info msg="shim disconnected" id=b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f namespace=k8s.io Feb 13 15:41:24.433915 containerd[1474]: time="2025-02-13T15:41:24.433912643Z" level=warning msg="cleaning up after shim disconnected" id=b9ea60ba64f197588771b1db63c3a0d05bc732c0de6ee0a2a032417bda71495f namespace=k8s.io Feb 13 15:41:24.434986 containerd[1474]: time="2025-02-13T15:41:24.433926125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:24.451884 containerd[1474]: time="2025-02-13T15:41:24.451809581Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:41:24Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:41:25.230861 containerd[1474]: time="2025-02-13T15:41:25.230789362Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 15:41:26.565390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3181192732.mount: Deactivated successfully. 
Feb 13 15:41:27.503082 containerd[1474]: time="2025-02-13T15:41:27.503006858Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:27.504642 containerd[1474]: time="2025-02-13T15:41:27.504568110Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Feb 13 15:41:27.505979 containerd[1474]: time="2025-02-13T15:41:27.505934878Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:27.511820 containerd[1474]: time="2025-02-13T15:41:27.511743040Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:41:27.513965 containerd[1474]: time="2025-02-13T15:41:27.513501331Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.282652189s" Feb 13 15:41:27.513965 containerd[1474]: time="2025-02-13T15:41:27.513545768Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Feb 13 15:41:27.516899 containerd[1474]: time="2025-02-13T15:41:27.516707907Z" level=info msg="CreateContainer within sandbox \"2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 15:41:27.535939 containerd[1474]: time="2025-02-13T15:41:27.535880431Z" level=info msg="CreateContainer within 
sandbox \"2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a\"" Feb 13 15:41:27.536672 containerd[1474]: time="2025-02-13T15:41:27.536593006Z" level=info msg="StartContainer for \"5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a\"" Feb 13 15:41:27.579515 systemd[1]: Started cri-containerd-5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a.scope - libcontainer container 5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a. Feb 13 15:41:27.612748 systemd[1]: cri-containerd-5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a.scope: Deactivated successfully. Feb 13 15:41:27.616188 containerd[1474]: time="2025-02-13T15:41:27.616143710Z" level=info msg="StartContainer for \"5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a\" returns successfully" Feb 13 15:41:27.625371 kubelet[2660]: I0213 15:41:27.624624 2660 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:41:27.663151 kubelet[2660]: I0213 15:41:27.661395 2660 topology_manager.go:215] "Topology Admit Handler" podUID="bae28591-5e0d-4a49-8640-11bff6a116f4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pjbq5" Feb 13 15:41:27.662572 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a-rootfs.mount: Deactivated successfully. Feb 13 15:41:27.669101 kubelet[2660]: I0213 15:41:27.666509 2660 topology_manager.go:215] "Topology Admit Handler" podUID="244f95e5-e6a3-4f3b-9c8b-a30eec4b1813" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sr4n8" Feb 13 15:41:27.687244 systemd[1]: Created slice kubepods-burstable-podbae28591_5e0d_4a49_8640_11bff6a116f4.slice - libcontainer container kubepods-burstable-podbae28591_5e0d_4a49_8640_11bff6a116f4.slice. 
Feb 13 15:41:27.704070 systemd[1]: Created slice kubepods-burstable-pod244f95e5_e6a3_4f3b_9c8b_a30eec4b1813.slice - libcontainer container kubepods-burstable-pod244f95e5_e6a3_4f3b_9c8b_a30eec4b1813.slice. Feb 13 15:41:27.726536 kubelet[2660]: I0213 15:41:27.726485 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmhf6\" (UniqueName: \"kubernetes.io/projected/244f95e5-e6a3-4f3b-9c8b-a30eec4b1813-kube-api-access-vmhf6\") pod \"coredns-7db6d8ff4d-sr4n8\" (UID: \"244f95e5-e6a3-4f3b-9c8b-a30eec4b1813\") " pod="kube-system/coredns-7db6d8ff4d-sr4n8" Feb 13 15:41:27.726793 kubelet[2660]: I0213 15:41:27.726594 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/244f95e5-e6a3-4f3b-9c8b-a30eec4b1813-config-volume\") pod \"coredns-7db6d8ff4d-sr4n8\" (UID: \"244f95e5-e6a3-4f3b-9c8b-a30eec4b1813\") " pod="kube-system/coredns-7db6d8ff4d-sr4n8" Feb 13 15:41:27.726793 kubelet[2660]: I0213 15:41:27.726623 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bae28591-5e0d-4a49-8640-11bff6a116f4-config-volume\") pod \"coredns-7db6d8ff4d-pjbq5\" (UID: \"bae28591-5e0d-4a49-8640-11bff6a116f4\") " pod="kube-system/coredns-7db6d8ff4d-pjbq5" Feb 13 15:41:27.726793 kubelet[2660]: I0213 15:41:27.726655 2660 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwb5x\" (UniqueName: \"kubernetes.io/projected/bae28591-5e0d-4a49-8640-11bff6a116f4-kube-api-access-wwb5x\") pod \"coredns-7db6d8ff4d-pjbq5\" (UID: \"bae28591-5e0d-4a49-8640-11bff6a116f4\") " pod="kube-system/coredns-7db6d8ff4d-pjbq5" Feb 13 15:41:27.876842 containerd[1474]: time="2025-02-13T15:41:27.876687288Z" level=info msg="shim disconnected" 
id=5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a namespace=k8s.io Feb 13 15:41:27.876842 containerd[1474]: time="2025-02-13T15:41:27.876772611Z" level=warning msg="cleaning up after shim disconnected" id=5b20f2c61588ea6956e02af5a0e3b34f65f4c127e74784bc9f499b2bf341ee2a namespace=k8s.io Feb 13 15:41:27.876842 containerd[1474]: time="2025-02-13T15:41:27.876787798Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:41:27.997630 containerd[1474]: time="2025-02-13T15:41:27.997570288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjbq5,Uid:bae28591-5e0d-4a49-8640-11bff6a116f4,Namespace:kube-system,Attempt:0,}" Feb 13 15:41:28.011116 containerd[1474]: time="2025-02-13T15:41:28.010556316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sr4n8,Uid:244f95e5-e6a3-4f3b-9c8b-a30eec4b1813,Namespace:kube-system,Attempt:0,}" Feb 13 15:41:28.050188 containerd[1474]: time="2025-02-13T15:41:28.050077188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjbq5,Uid:bae28591-5e0d-4a49-8640-11bff6a116f4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"90b13c9bdea99f4cfed8f093aa146e5ea10ec92d269b50e3f5ccdc2b42a7a8c6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:41:28.050860 kubelet[2660]: E0213 15:41:28.050690 2660 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b13c9bdea99f4cfed8f093aa146e5ea10ec92d269b50e3f5ccdc2b42a7a8c6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:41:28.050860 kubelet[2660]: E0213 15:41:28.050812 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"90b13c9bdea99f4cfed8f093aa146e5ea10ec92d269b50e3f5ccdc2b42a7a8c6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pjbq5" Feb 13 15:41:28.050860 kubelet[2660]: E0213 15:41:28.050846 2660 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90b13c9bdea99f4cfed8f093aa146e5ea10ec92d269b50e3f5ccdc2b42a7a8c6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-pjbq5" Feb 13 15:41:28.051115 kubelet[2660]: E0213 15:41:28.050914 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pjbq5_kube-system(bae28591-5e0d-4a49-8640-11bff6a116f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pjbq5_kube-system(bae28591-5e0d-4a49-8640-11bff6a116f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90b13c9bdea99f4cfed8f093aa146e5ea10ec92d269b50e3f5ccdc2b42a7a8c6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-pjbq5" podUID="bae28591-5e0d-4a49-8640-11bff6a116f4" Feb 13 15:41:28.052976 containerd[1474]: time="2025-02-13T15:41:28.052538192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sr4n8,Uid:244f95e5-e6a3-4f3b-9c8b-a30eec4b1813,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45ab17589f0011ec9695f786e40c5a24ac7389c64813b6d1bc1d56b58aaddcb6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:41:28.053163 kubelet[2660]: E0213 15:41:28.053005 2660 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ab17589f0011ec9695f786e40c5a24ac7389c64813b6d1bc1d56b58aaddcb6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 15:41:28.053163 kubelet[2660]: E0213 15:41:28.053058 2660 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ab17589f0011ec9695f786e40c5a24ac7389c64813b6d1bc1d56b58aaddcb6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-sr4n8" Feb 13 15:41:28.053163 kubelet[2660]: E0213 15:41:28.053085 2660 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ab17589f0011ec9695f786e40c5a24ac7389c64813b6d1bc1d56b58aaddcb6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-sr4n8" Feb 13 15:41:28.053163 kubelet[2660]: E0213 15:41:28.053132 2660 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-sr4n8_kube-system(244f95e5-e6a3-4f3b-9c8b-a30eec4b1813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-sr4n8_kube-system(244f95e5-e6a3-4f3b-9c8b-a30eec4b1813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45ab17589f0011ec9695f786e40c5a24ac7389c64813b6d1bc1d56b58aaddcb6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-sr4n8" podUID="244f95e5-e6a3-4f3b-9c8b-a30eec4b1813" Feb 13 15:41:28.239841 containerd[1474]: 
time="2025-02-13T15:41:28.239443659Z" level=info msg="CreateContainer within sandbox \"2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 15:41:28.256858 containerd[1474]: time="2025-02-13T15:41:28.256354244Z" level=info msg="CreateContainer within sandbox \"2d47320b2e30a62f50eb5006902f9072f4bedade60b78efb74001e8192b7dda9\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e2f576e79771d0a69ec4e9f7cb4084d08ac30b0f690e54c7797b5477661244bc\"" Feb 13 15:41:28.259096 containerd[1474]: time="2025-02-13T15:41:28.257263696Z" level=info msg="StartContainer for \"e2f576e79771d0a69ec4e9f7cb4084d08ac30b0f690e54c7797b5477661244bc\"" Feb 13 15:41:28.298498 systemd[1]: Started cri-containerd-e2f576e79771d0a69ec4e9f7cb4084d08ac30b0f690e54c7797b5477661244bc.scope - libcontainer container e2f576e79771d0a69ec4e9f7cb4084d08ac30b0f690e54c7797b5477661244bc. Feb 13 15:41:28.334580 containerd[1474]: time="2025-02-13T15:41:28.334521007Z" level=info msg="StartContainer for \"e2f576e79771d0a69ec4e9f7cb4084d08ac30b0f690e54c7797b5477661244bc\" returns successfully" Feb 13 15:41:29.255450 kubelet[2660]: I0213 15:41:29.253501 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-dml4q" podStartSLOduration=3.537271656 podStartE2EDuration="8.253474338s" podCreationTimestamp="2025-02-13 15:41:21 +0000 UTC" firstStartedPulling="2025-02-13 15:41:22.798772556 +0000 UTC m=+16.820588527" lastFinishedPulling="2025-02-13 15:41:27.514975232 +0000 UTC m=+21.536791209" observedRunningTime="2025-02-13 15:41:29.253145735 +0000 UTC m=+23.274961722" watchObservedRunningTime="2025-02-13 15:41:29.253474338 +0000 UTC m=+23.275290325" Feb 13 15:41:29.414559 systemd-networkd[1384]: flannel.1: Link UP Feb 13 15:41:29.414572 systemd-networkd[1384]: flannel.1: Gained carrier Feb 13 15:41:30.568595 systemd-networkd[1384]: flannel.1: Gained IPv6LL Feb 13 
15:41:32.952653 ntpd[1444]: Listen normally on 7 flannel.1 192.168.0.0:123 Feb 13 15:41:32.952778 ntpd[1444]: Listen normally on 8 flannel.1 [fe80::d4a5:5fff:fee4:d410%4]:123 Feb 13 15:41:39.139872 containerd[1474]: time="2025-02-13T15:41:39.139783010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sr4n8,Uid:244f95e5-e6a3-4f3b-9c8b-a30eec4b1813,Namespace:kube-system,Attempt:0,}" Feb 13 15:41:39.173383 systemd-networkd[1384]: cni0: Link UP Feb 13 15:41:39.173397 systemd-networkd[1384]: cni0: Gained carrier Feb 13 15:41:39.181063 systemd-networkd[1384]: cni0: Lost carrier Feb 13 15:41:39.190680 systemd-networkd[1384]: veth3645b550: Link UP Feb 13 15:41:39.204098 kernel: cni0: port 1(veth3645b550) entered blocking state Feb 13 15:41:39.204224 kernel: cni0: port 1(veth3645b550) entered disabled state Feb 13 15:41:39.219510 kernel: veth3645b550: entered allmulticast mode Feb 13 15:41:39.219623 kernel: veth3645b550: entered promiscuous mode Feb 13 15:41:39.232057 kernel: cni0: port 1(veth3645b550) entered blocking state Feb 13 15:41:39.232193 kernel: cni0: port 1(veth3645b550) entered forwarding state Feb 13 15:41:39.232227 kernel: cni0: port 1(veth3645b550) entered disabled state Feb 13 15:41:39.256181 kernel: cni0: port 1(veth3645b550) entered blocking state Feb 13 15:41:39.256357 kernel: cni0: port 1(veth3645b550) entered forwarding state Feb 13 15:41:39.256586 systemd-networkd[1384]: veth3645b550: Gained carrier Feb 13 15:41:39.258041 systemd-networkd[1384]: cni0: Gained carrier Feb 13 15:41:39.261401 containerd[1474]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface
{}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Feb 13 15:41:39.261401 containerd[1474]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:41:39.288716 containerd[1474]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1410,"name":"cbr0","type":"bridge"}time="2025-02-13T15:41:39.288363808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:39.288716 containerd[1474]: time="2025-02-13T15:41:39.288450045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:39.288716 containerd[1474]: time="2025-02-13T15:41:39.288475760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:39.288716 containerd[1474]: time="2025-02-13T15:41:39.288601592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:39.330488 systemd[1]: Started cri-containerd-5005353e274a1792732b633572bd90ab64209f214154311dbe06e58ab7a5c2f1.scope - libcontainer container 5005353e274a1792732b633572bd90ab64209f214154311dbe06e58ab7a5c2f1. 
Feb 13 15:41:39.384427 containerd[1474]: time="2025-02-13T15:41:39.384376888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sr4n8,Uid:244f95e5-e6a3-4f3b-9c8b-a30eec4b1813,Namespace:kube-system,Attempt:0,} returns sandbox id \"5005353e274a1792732b633572bd90ab64209f214154311dbe06e58ab7a5c2f1\"" Feb 13 15:41:39.388443 containerd[1474]: time="2025-02-13T15:41:39.388398088Z" level=info msg="CreateContainer within sandbox \"5005353e274a1792732b633572bd90ab64209f214154311dbe06e58ab7a5c2f1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:41:39.411535 containerd[1474]: time="2025-02-13T15:41:39.411341422Z" level=info msg="CreateContainer within sandbox \"5005353e274a1792732b633572bd90ab64209f214154311dbe06e58ab7a5c2f1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0aa642c11437e52e408ea01c368a3a8738c4207d3c5f1a61ac9e7607c79e4d17\"" Feb 13 15:41:39.413833 containerd[1474]: time="2025-02-13T15:41:39.412645441Z" level=info msg="StartContainer for \"0aa642c11437e52e408ea01c368a3a8738c4207d3c5f1a61ac9e7607c79e4d17\"" Feb 13 15:41:39.449559 systemd[1]: Started cri-containerd-0aa642c11437e52e408ea01c368a3a8738c4207d3c5f1a61ac9e7607c79e4d17.scope - libcontainer container 0aa642c11437e52e408ea01c368a3a8738c4207d3c5f1a61ac9e7607c79e4d17. 
Feb 13 15:41:39.485349 containerd[1474]: time="2025-02-13T15:41:39.485065832Z" level=info msg="StartContainer for \"0aa642c11437e52e408ea01c368a3a8738c4207d3c5f1a61ac9e7607c79e4d17\" returns successfully" Feb 13 15:41:40.297652 kubelet[2660]: I0213 15:41:40.297353 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sr4n8" podStartSLOduration=18.297327767 podStartE2EDuration="18.297327767s" podCreationTimestamp="2025-02-13 15:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:41:40.281122308 +0000 UTC m=+34.302938297" watchObservedRunningTime="2025-02-13 15:41:40.297327767 +0000 UTC m=+34.319143756" Feb 13 15:41:40.488743 systemd-networkd[1384]: cni0: Gained IPv6LL Feb 13 15:41:41.000650 systemd-networkd[1384]: veth3645b550: Gained IPv6LL Feb 13 15:41:43.139786 containerd[1474]: time="2025-02-13T15:41:43.139731913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjbq5,Uid:bae28591-5e0d-4a49-8640-11bff6a116f4,Namespace:kube-system,Attempt:0,}" Feb 13 15:41:43.172849 systemd-networkd[1384]: vethba1d633a: Link UP Feb 13 15:41:43.186039 kernel: cni0: port 2(vethba1d633a) entered blocking state Feb 13 15:41:43.186174 kernel: cni0: port 2(vethba1d633a) entered disabled state Feb 13 15:41:43.186209 kernel: vethba1d633a: entered allmulticast mode Feb 13 15:41:43.196198 kernel: vethba1d633a: entered promiscuous mode Feb 13 15:41:43.218035 kernel: cni0: port 2(vethba1d633a) entered blocking state Feb 13 15:41:43.218149 kernel: cni0: port 2(vethba1d633a) entered forwarding state Feb 13 15:41:43.218224 systemd-networkd[1384]: vethba1d633a: Gained carrier Feb 13 15:41:43.223912 containerd[1474]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface 
{}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Feb 13 15:41:43.223912 containerd[1474]: delegateAdd: netconf sent to delegate plugin: Feb 13 15:41:43.251190 containerd[1474]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1410,"name":"cbr0","type":"bridge"}time="2025-02-13T15:41:43.251066692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:41:43.251452 containerd[1474]: time="2025-02-13T15:41:43.251244784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:41:43.251452 containerd[1474]: time="2025-02-13T15:41:43.251312593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:43.252517 containerd[1474]: time="2025-02-13T15:41:43.252446493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:41:43.292511 systemd[1]: Started cri-containerd-3147d22ee0b596f6ea4115807a7c89940f2e6b4eb730bf7985af7ebde41e66ec.scope - libcontainer container 3147d22ee0b596f6ea4115807a7c89940f2e6b4eb730bf7985af7ebde41e66ec. 
Feb 13 15:41:43.349121 containerd[1474]: time="2025-02-13T15:41:43.349070855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjbq5,Uid:bae28591-5e0d-4a49-8640-11bff6a116f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3147d22ee0b596f6ea4115807a7c89940f2e6b4eb730bf7985af7ebde41e66ec\"" Feb 13 15:41:43.353593 containerd[1474]: time="2025-02-13T15:41:43.353502680Z" level=info msg="CreateContainer within sandbox \"3147d22ee0b596f6ea4115807a7c89940f2e6b4eb730bf7985af7ebde41e66ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 15:41:43.373660 containerd[1474]: time="2025-02-13T15:41:43.373545366Z" level=info msg="CreateContainer within sandbox \"3147d22ee0b596f6ea4115807a7c89940f2e6b4eb730bf7985af7ebde41e66ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a04af4db7cf8793f960acc672e8fcb52b1556cd9f7c3b976266eaecac90e3d4\"" Feb 13 15:41:43.374531 containerd[1474]: time="2025-02-13T15:41:43.374295197Z" level=info msg="StartContainer for \"2a04af4db7cf8793f960acc672e8fcb52b1556cd9f7c3b976266eaecac90e3d4\"" Feb 13 15:41:43.409516 systemd[1]: Started cri-containerd-2a04af4db7cf8793f960acc672e8fcb52b1556cd9f7c3b976266eaecac90e3d4.scope - libcontainer container 2a04af4db7cf8793f960acc672e8fcb52b1556cd9f7c3b976266eaecac90e3d4. 
Feb 13 15:41:43.448108 containerd[1474]: time="2025-02-13T15:41:43.448048678Z" level=info msg="StartContainer for \"2a04af4db7cf8793f960acc672e8fcb52b1556cd9f7c3b976266eaecac90e3d4\" returns successfully" Feb 13 15:41:44.303854 kubelet[2660]: I0213 15:41:44.303764 2660 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pjbq5" podStartSLOduration=22.30373785 podStartE2EDuration="22.30373785s" podCreationTimestamp="2025-02-13 15:41:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:41:44.289277377 +0000 UTC m=+38.311093388" watchObservedRunningTime="2025-02-13 15:41:44.30373785 +0000 UTC m=+38.325553837" Feb 13 15:41:45.032688 systemd-networkd[1384]: vethba1d633a: Gained IPv6LL Feb 13 15:41:47.952663 ntpd[1444]: Listen normally on 9 cni0 192.168.0.1:123 Feb 13 15:41:47.952796 ntpd[1444]: Listen normally on 10 cni0 [fe80::8cc5:30ff:fe16:4003%5]:123 Feb 13 15:41:47.952907 ntpd[1444]: Listen normally on 11 veth3645b550 [fe80::8897:60ff:fed8:6614%6]:123 Feb 13 15:41:47.952973 ntpd[1444]: Listen normally on 12 vethba1d633a [fe80::98ac:9bff:fea2:b953%7]:123 Feb 13 15:41:52.460734 systemd[1]: Started sshd@7-10.128.0.120:22-139.178.68.195:37260.service - OpenSSH per-connection server daemon (139.178.68.195:37260).
Feb 13 15:41:52.759900 sshd[3612]: Accepted publickey for core from 139.178.68.195 port 37260 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:52.761853 sshd-session[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:52.770769 systemd-logind[1456]: New session 8 of user core. Feb 13 15:41:52.776690 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:41:53.061313 sshd[3614]: Connection closed by 139.178.68.195 port 37260 Feb 13 15:41:53.062654 sshd-session[3612]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:53.067499 systemd[1]: sshd@7-10.128.0.120:22-139.178.68.195:37260.service: Deactivated successfully. Feb 13 15:41:53.070417 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:41:53.072748 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:41:53.074152 systemd-logind[1456]: Removed session 8. Feb 13 15:41:58.122021 systemd[1]: Started sshd@8-10.128.0.120:22-139.178.68.195:40174.service - OpenSSH per-connection server daemon (139.178.68.195:40174). Feb 13 15:41:58.415139 sshd[3650]: Accepted publickey for core from 139.178.68.195 port 40174 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:41:58.417125 sshd-session[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:41:58.423797 systemd-logind[1456]: New session 9 of user core. Feb 13 15:41:58.429527 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:41:58.707380 sshd[3652]: Connection closed by 139.178.68.195 port 40174 Feb 13 15:41:58.708604 sshd-session[3650]: pam_unix(sshd:session): session closed for user core Feb 13 15:41:58.713078 systemd[1]: sshd@8-10.128.0.120:22-139.178.68.195:40174.service: Deactivated successfully. Feb 13 15:41:58.715785 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:41:58.718320 systemd-logind[1456]: Session 9 logged out. 
Waiting for processes to exit. Feb 13 15:41:58.720405 systemd-logind[1456]: Removed session 9. Feb 13 15:42:03.765696 systemd[1]: Started sshd@9-10.128.0.120:22-139.178.68.195:40182.service - OpenSSH per-connection server daemon (139.178.68.195:40182). Feb 13 15:42:04.059447 sshd[3685]: Accepted publickey for core from 139.178.68.195 port 40182 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:04.061198 sshd-session[3685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:04.068425 systemd-logind[1456]: New session 10 of user core. Feb 13 15:42:04.074509 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 15:42:04.358879 sshd[3687]: Connection closed by 139.178.68.195 port 40182 Feb 13 15:42:04.359756 sshd-session[3685]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:04.365643 systemd[1]: sshd@9-10.128.0.120:22-139.178.68.195:40182.service: Deactivated successfully. Feb 13 15:42:04.368624 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 15:42:04.370105 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Feb 13 15:42:04.371672 systemd-logind[1456]: Removed session 10. Feb 13 15:42:04.416698 systemd[1]: Started sshd@10-10.128.0.120:22-139.178.68.195:40184.service - OpenSSH per-connection server daemon (139.178.68.195:40184). Feb 13 15:42:04.719879 sshd[3700]: Accepted publickey for core from 139.178.68.195 port 40184 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:04.721722 sshd-session[3700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:04.729003 systemd-logind[1456]: New session 11 of user core. Feb 13 15:42:04.735503 systemd[1]: Started session-11.scope - Session 11 of User core. 
Feb 13 15:42:05.054697 sshd[3723]: Connection closed by 139.178.68.195 port 40184 Feb 13 15:42:05.055948 sshd-session[3700]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:05.061454 systemd[1]: sshd@10-10.128.0.120:22-139.178.68.195:40184.service: Deactivated successfully. Feb 13 15:42:05.064447 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 15:42:05.065662 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Feb 13 15:42:05.067400 systemd-logind[1456]: Removed session 11. Feb 13 15:42:05.113720 systemd[1]: Started sshd@11-10.128.0.120:22-139.178.68.195:40194.service - OpenSSH per-connection server daemon (139.178.68.195:40194). Feb 13 15:42:05.415651 sshd[3733]: Accepted publickey for core from 139.178.68.195 port 40194 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:05.417485 sshd-session[3733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:05.423673 systemd-logind[1456]: New session 12 of user core. Feb 13 15:42:05.429524 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 15:42:05.708855 sshd[3735]: Connection closed by 139.178.68.195 port 40194 Feb 13 15:42:05.710190 sshd-session[3733]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:05.715528 systemd[1]: sshd@11-10.128.0.120:22-139.178.68.195:40194.service: Deactivated successfully. Feb 13 15:42:05.718891 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 15:42:05.720444 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Feb 13 15:42:05.721990 systemd-logind[1456]: Removed session 12. Feb 13 15:42:10.769167 systemd[1]: Started sshd@12-10.128.0.120:22-139.178.68.195:39560.service - OpenSSH per-connection server daemon (139.178.68.195:39560). 
Feb 13 15:42:11.062095 sshd[3770]: Accepted publickey for core from 139.178.68.195 port 39560 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:11.064310 sshd-session[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:11.071689 systemd-logind[1456]: New session 13 of user core. Feb 13 15:42:11.076520 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 15:42:11.352252 sshd[3772]: Connection closed by 139.178.68.195 port 39560 Feb 13 15:42:11.353489 sshd-session[3770]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:11.359050 systemd[1]: sshd@12-10.128.0.120:22-139.178.68.195:39560.service: Deactivated successfully. Feb 13 15:42:11.361845 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 15:42:11.363068 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Feb 13 15:42:11.364660 systemd-logind[1456]: Removed session 13. Feb 13 15:42:11.412708 systemd[1]: Started sshd@13-10.128.0.120:22-139.178.68.195:39576.service - OpenSSH per-connection server daemon (139.178.68.195:39576). Feb 13 15:42:11.704603 sshd[3784]: Accepted publickey for core from 139.178.68.195 port 39576 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:11.706444 sshd-session[3784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:11.713574 systemd-logind[1456]: New session 14 of user core. Feb 13 15:42:11.719507 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 15:42:12.100402 sshd[3786]: Connection closed by 139.178.68.195 port 39576 Feb 13 15:42:12.101685 sshd-session[3784]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:12.107140 systemd[1]: sshd@13-10.128.0.120:22-139.178.68.195:39576.service: Deactivated successfully. Feb 13 15:42:12.110100 systemd[1]: session-14.scope: Deactivated successfully. 
Feb 13 15:42:12.111511 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Feb 13 15:42:12.113266 systemd-logind[1456]: Removed session 14. Feb 13 15:42:12.158722 systemd[1]: Started sshd@14-10.128.0.120:22-139.178.68.195:39584.service - OpenSSH per-connection server daemon (139.178.68.195:39584). Feb 13 15:42:12.448471 sshd[3796]: Accepted publickey for core from 139.178.68.195 port 39584 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:12.450350 sshd-session[3796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:12.456534 systemd-logind[1456]: New session 15 of user core. Feb 13 15:42:12.464534 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 15:42:14.089598 sshd[3798]: Connection closed by 139.178.68.195 port 39584 Feb 13 15:42:14.090993 sshd-session[3796]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:14.098093 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Feb 13 15:42:14.098414 systemd[1]: sshd@14-10.128.0.120:22-139.178.68.195:39584.service: Deactivated successfully. Feb 13 15:42:14.102516 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 15:42:14.105447 systemd-logind[1456]: Removed session 15. Feb 13 15:42:14.147692 systemd[1]: Started sshd@15-10.128.0.120:22-139.178.68.195:39590.service - OpenSSH per-connection server daemon (139.178.68.195:39590). Feb 13 15:42:14.443472 sshd[3816]: Accepted publickey for core from 139.178.68.195 port 39590 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:14.445265 sshd-session[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:14.451263 systemd-logind[1456]: New session 16 of user core. Feb 13 15:42:14.457487 systemd[1]: Started session-16.scope - Session 16 of User core. 
Feb 13 15:42:14.895706 sshd[3818]: Connection closed by 139.178.68.195 port 39590 Feb 13 15:42:14.896661 sshd-session[3816]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:14.901373 systemd[1]: sshd@15-10.128.0.120:22-139.178.68.195:39590.service: Deactivated successfully. Feb 13 15:42:14.904531 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 15:42:14.907058 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Feb 13 15:42:14.908772 systemd-logind[1456]: Removed session 16. Feb 13 15:42:14.953327 systemd[1]: Started sshd@16-10.128.0.120:22-139.178.68.195:39598.service - OpenSSH per-connection server daemon (139.178.68.195:39598). Feb 13 15:42:15.245932 sshd[3849]: Accepted publickey for core from 139.178.68.195 port 39598 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:15.248089 sshd-session[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:15.254098 systemd-logind[1456]: New session 17 of user core. Feb 13 15:42:15.265503 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 15:42:15.533017 sshd[3851]: Connection closed by 139.178.68.195 port 39598 Feb 13 15:42:15.533987 sshd-session[3849]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:15.538352 systemd[1]: sshd@16-10.128.0.120:22-139.178.68.195:39598.service: Deactivated successfully. Feb 13 15:42:15.541561 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 15:42:15.543873 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Feb 13 15:42:15.545564 systemd-logind[1456]: Removed session 17. Feb 13 15:42:20.589696 systemd[1]: Started sshd@17-10.128.0.120:22-139.178.68.195:48452.service - OpenSSH per-connection server daemon (139.178.68.195:48452). 
Feb 13 15:42:20.881079 sshd[3887]: Accepted publickey for core from 139.178.68.195 port 48452 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:20.882905 sshd-session[3887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:20.889055 systemd-logind[1456]: New session 18 of user core. Feb 13 15:42:20.896517 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 15:42:21.184497 sshd[3889]: Connection closed by 139.178.68.195 port 48452 Feb 13 15:42:21.185867 sshd-session[3887]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:21.191478 systemd[1]: sshd@17-10.128.0.120:22-139.178.68.195:48452.service: Deactivated successfully. Feb 13 15:42:21.194892 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 15:42:21.196648 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Feb 13 15:42:21.198739 systemd-logind[1456]: Removed session 18. Feb 13 15:42:26.244753 systemd[1]: Started sshd@18-10.128.0.120:22-139.178.68.195:48458.service - OpenSSH per-connection server daemon (139.178.68.195:48458). Feb 13 15:42:26.540999 sshd[3924]: Accepted publickey for core from 139.178.68.195 port 48458 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:26.542934 sshd-session[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:26.550010 systemd-logind[1456]: New session 19 of user core. Feb 13 15:42:26.557734 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 15:42:26.826502 sshd[3926]: Connection closed by 139.178.68.195 port 48458 Feb 13 15:42:26.827245 sshd-session[3924]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:26.831967 systemd[1]: sshd@18-10.128.0.120:22-139.178.68.195:48458.service: Deactivated successfully. Feb 13 15:42:26.835109 systemd[1]: session-19.scope: Deactivated successfully. 
Feb 13 15:42:26.837337 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Feb 13 15:42:26.839285 systemd-logind[1456]: Removed session 19. Feb 13 15:42:31.882734 systemd[1]: Started sshd@19-10.128.0.120:22-139.178.68.195:49386.service - OpenSSH per-connection server daemon (139.178.68.195:49386). Feb 13 15:42:32.184119 sshd[3960]: Accepted publickey for core from 139.178.68.195 port 49386 ssh2: RSA SHA256:kN9Kjz2l873TmRIEATva0Vh6UY6avfqacvbNhSwCVcE Feb 13 15:42:32.186165 sshd-session[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:42:32.193072 systemd-logind[1456]: New session 20 of user core. Feb 13 15:42:32.199923 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 15:42:32.467310 sshd[3962]: Connection closed by 139.178.68.195 port 49386 Feb 13 15:42:32.468092 sshd-session[3960]: pam_unix(sshd:session): session closed for user core Feb 13 15:42:32.474910 systemd[1]: sshd@19-10.128.0.120:22-139.178.68.195:49386.service: Deactivated successfully. Feb 13 15:42:32.478213 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 15:42:32.480823 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Feb 13 15:42:32.482873 systemd-logind[1456]: Removed session 20.