Feb 13 19:18:29.125601 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025 Feb 13 19:18:29.125653 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416 Feb 13 19:18:29.125672 kernel: BIOS-provided physical RAM map: Feb 13 19:18:29.125686 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Feb 13 19:18:29.125699 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Feb 13 19:18:29.125711 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Feb 13 19:18:29.125728 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Feb 13 19:18:29.125742 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Feb 13 19:18:29.125760 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd328fff] usable Feb 13 19:18:29.125774 kernel: BIOS-e820: [mem 0x00000000bd329000-0x00000000bd330fff] ACPI data Feb 13 19:18:29.125789 kernel: BIOS-e820: [mem 0x00000000bd331000-0x00000000bf8ecfff] usable Feb 13 19:18:29.125802 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Feb 13 19:18:29.125816 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Feb 13 19:18:29.125831 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Feb 13 19:18:29.125852 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Feb 13 19:18:29.125868 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Feb 13 19:18:29.125884 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Feb 13 19:18:29.125899 kernel: NX (Execute Disable) protection: active Feb 13 19:18:29.125915 kernel: APIC: Static calls initialized Feb 13 19:18:29.125930 kernel: efi: EFI v2.7 by EDK II Feb 13 19:18:29.125946 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd329018 Feb 13 19:18:29.125962 kernel: random: crng init done Feb 13 19:18:29.125977 kernel: secureboot: Secure boot disabled Feb 13 19:18:29.125993 kernel: SMBIOS 2.4 present. 
Feb 13 19:18:29.126012 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024 Feb 13 19:18:29.126027 kernel: Hypervisor detected: KVM Feb 13 19:18:29.126043 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 19:18:29.126058 kernel: kvm-clock: using sched offset of 12960726306 cycles Feb 13 19:18:29.126075 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 19:18:29.126091 kernel: tsc: Detected 2299.998 MHz processor Feb 13 19:18:29.126107 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:18:29.126124 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:18:29.126140 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Feb 13 19:18:29.126156 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Feb 13 19:18:29.126182 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:18:29.126198 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Feb 13 19:18:29.126214 kernel: Using GB pages for direct mapping Feb 13 19:18:29.126230 kernel: ACPI: Early table checksum verification disabled Feb 13 19:18:29.126246 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Feb 13 19:18:29.126263 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Feb 13 19:18:29.126286 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Feb 13 19:18:29.126306 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Feb 13 19:18:29.126323 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Feb 13 19:18:29.126340 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Feb 13 19:18:29.126358 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Feb 13 19:18:29.126375 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Feb 13 19:18:29.126391 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Feb 13 19:18:29.126408 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Feb 13 19:18:29.126429 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Feb 13 19:18:29.126446 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Feb 13 19:18:29.126463 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Feb 13 19:18:29.126480 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Feb 13 19:18:29.126497 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Feb 13 19:18:29.126514 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Feb 13 19:18:29.126530 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Feb 13 19:18:29.126561 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Feb 13 19:18:29.126581 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Feb 13 19:18:29.126598 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Feb 13 19:18:29.126614 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 19:18:29.126630 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 19:18:29.126654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 19:18:29.126676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Feb 13 19:18:29.126691 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Feb 13 19:18:29.126708 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Feb 13 19:18:29.126726 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Feb 13 19:18:29.126753 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Feb 13 19:18:29.126769 kernel: Zone ranges: Feb 13 19:18:29.126785 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:18:29.126801 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 13 19:18:29.126819 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Feb 13 19:18:29.126836 kernel: Movable zone start for each node Feb 13 19:18:29.126854 kernel: Early memory node ranges Feb 13 19:18:29.126872 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Feb 13 19:18:29.126889 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Feb 13 19:18:29.126907 kernel: node 0: [mem 0x0000000000100000-0x00000000bd328fff] Feb 13 19:18:29.126930 kernel: node 0: [mem 0x00000000bd331000-0x00000000bf8ecfff] Feb 13 19:18:29.126948 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Feb 13 19:18:29.126965 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Feb 13 19:18:29.126983 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Feb 13 19:18:29.127001 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:18:29.127018 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Feb 13 19:18:29.127036 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Feb 13 19:18:29.127054 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Feb 13 19:18:29.127072 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 19:18:29.127093 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Feb 13 19:18:29.127110 kernel: ACPI: PM-Timer IO Port: 0xb008 Feb 13 19:18:29.127128 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 19:18:29.127145 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 19:18:29.127163 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 19:18:29.127189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:18:29.127207 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 19:18:29.127224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 19:18:29.127242 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:18:29.127264 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 13 19:18:29.127282 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Feb 13 19:18:29.127300 kernel: Booting paravirtualized kernel on KVM Feb 13 19:18:29.127318 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:18:29.127335 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Feb 13 19:18:29.127353 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Feb 13 19:18:29.127371 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Feb 13 19:18:29.127388 kernel: pcpu-alloc: [0] 0 1 Feb 13 19:18:29.127405 kernel: kvm-guest: PV spinlocks enabled Feb 13 19:18:29.127427 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:18:29.127448 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416 Feb 13 19:18:29.127466 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:18:29.127483 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 13 19:18:29.127501 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:18:29.127519 kernel: Fallback order for Node 0: 0 Feb 13 19:18:29.127537 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Feb 13 19:18:29.128255 kernel: Policy zone: Normal Feb 13 19:18:29.128282 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:18:29.128300 kernel: software IO TLB: area num 2. Feb 13 19:18:29.128453 kernel: Memory: 7511320K/7860552K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 348976K reserved, 0K cma-reserved) Feb 13 19:18:29.128469 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 13 19:18:29.128487 kernel: Kernel/User page tables isolation: enabled Feb 13 19:18:29.128504 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 19:18:29.128738 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:18:29.128760 kernel: Dynamic Preempt: voluntary Feb 13 19:18:29.128943 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:18:29.128969 kernel: rcu: RCU event tracing is enabled. Feb 13 19:18:29.128989 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 13 19:18:29.129106 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:18:29.129132 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:18:29.129150 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:18:29.129169 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:18:29.129198 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 13 19:18:29.129217 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Feb 13 19:18:29.129240 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:18:29.129260 kernel: Console: colour dummy device 80x25 Feb 13 19:18:29.129278 kernel: printk: console [ttyS0] enabled Feb 13 19:18:29.129297 kernel: ACPI: Core revision 20230628 Feb 13 19:18:29.129316 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:18:29.129335 kernel: x2apic enabled Feb 13 19:18:29.129354 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 19:18:29.129373 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Feb 13 19:18:29.129393 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 19:18:29.129417 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Feb 13 19:18:29.129436 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Feb 13 19:18:29.129455 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Feb 13 19:18:29.129473 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:18:29.129492 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Feb 13 19:18:29.129511 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Feb 13 19:18:29.129530 kernel: Spectre V2 : Mitigation: IBRS Feb 13 19:18:29.129573 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:18:29.129592 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:18:29.129616 kernel: RETBleed: Mitigation: IBRS Feb 13 19:18:29.129635 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 19:18:29.129654 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Feb 13 19:18:29.129673 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 19:18:29.129692 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 19:18:29.129711 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 13 19:18:29.129730 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:18:29.129748 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:18:29.129771 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:18:29.129790 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:18:29.129809 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 19:18:29.129828 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:18:29.129846 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:18:29.129865 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:18:29.129884 kernel: landlock: Up and running. Feb 13 19:18:29.129903 kernel: SELinux: Initializing. Feb 13 19:18:29.129922 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:18:29.129945 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:18:29.129964 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Feb 13 19:18:29.129983 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:18:29.130002 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:18:29.130021 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Feb 13 19:18:29.130040 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Feb 13 19:18:29.130059 kernel: signal: max sigframe size: 1776 Feb 13 19:18:29.130077 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:18:29.130097 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:18:29.130120 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 19:18:29.130139 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:18:29.130158 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:18:29.130188 kernel: .... node #0, CPUs: #1 Feb 13 19:18:29.130208 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Feb 13 19:18:29.130229 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 13 19:18:29.130248 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:18:29.130267 kernel: smpboot: Max logical packages: 1 Feb 13 19:18:29.130286 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Feb 13 19:18:29.130309 kernel: devtmpfs: initialized Feb 13 19:18:29.130328 kernel: x86/mm: Memory block size: 128MB Feb 13 19:18:29.130347 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Feb 13 19:18:29.130365 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:18:29.130385 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 13 19:18:29.130403 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:18:29.130422 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:18:29.130441 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:18:29.130460 kernel: audit: type=2000 audit(1739474307.283:1): state=initialized audit_enabled=0 res=1 Feb 13 19:18:29.130482 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:18:29.130507 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:18:29.130526 kernel: cpuidle: using governor menu Feb 13 19:18:29.130625 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:18:29.130645 kernel: dca service started, version 1.12.1 Feb 13 19:18:29.130664 kernel: PCI: Using configuration type 1 for base access Feb 13 19:18:29.130683 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 19:18:29.130702 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:18:29.130727 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:18:29.130746 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:18:29.130765 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:18:29.130784 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:18:29.130803 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:18:29.130821 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:18:29.130840 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:18:29.130859 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Feb 13 19:18:29.130878 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:18:29.130897 kernel: ACPI: Interpreter enabled Feb 13 19:18:29.130920 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 19:18:29.130939 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:18:29.130958 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:18:29.130976 kernel: PCI: Ignoring E820 reservations for host bridge windows Feb 13 19:18:29.130995 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Feb 13 19:18:29.131014 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:18:29.131331 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:18:29.131530 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Feb 13 19:18:29.133089 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Feb 13 19:18:29.133118 kernel: PCI host bridge to bus 0000:00 Feb 13 19:18:29.133337 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:18:29.133508 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:18:29.134751 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:18:29.134934 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Feb 13 19:18:29.135112 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:18:29.135330 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Feb 13 19:18:29.135531 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Feb 13 19:18:29.136818 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Feb 13 19:18:29.137010 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Feb 13 19:18:29.137214 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Feb 13 19:18:29.137409 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Feb 13 19:18:29.138739 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Feb 13 19:18:29.138961 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 19:18:29.139156 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Feb 13 19:18:29.139357 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Feb 13 19:18:29.141513 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:18:29.141774 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Feb 13 19:18:29.141976 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Feb 13 19:18:29.142000 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 19:18:29.142017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 19:18:29.142035 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:18:29.142051 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 19:18:29.142068 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Feb 13 19:18:29.142085 kernel: iommu: Default domain type: Translated Feb 13 19:18:29.142105 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:18:29.142124 kernel: efivars: Registered efivars operations Feb 13 19:18:29.142149 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:18:29.142168 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:18:29.142200 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Feb 13 19:18:29.142218 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Feb 13 19:18:29.142237 kernel: e820: reserve RAM buffer [mem 0xbd329000-0xbfffffff] Feb 13 19:18:29.142256 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Feb 13 19:18:29.142274 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Feb 13 19:18:29.142293 kernel: vgaarb: loaded Feb 13 19:18:29.142312 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 19:18:29.142336 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:18:29.142356 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:18:29.142375 kernel: pnp: PnP ACPI init Feb 13 19:18:29.142394 kernel: pnp: PnP ACPI: found 7 devices Feb 13 19:18:29.142413 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:18:29.142432 kernel: NET: Registered PF_INET protocol family Feb 13 19:18:29.142451 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 19:18:29.142470 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 13 19:18:29.142490 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:18:29.142514 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:18:29.142533 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Feb 13 19:18:29.143051 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 13 19:18:29.143071 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 19:18:29.143089 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 13 19:18:29.143105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:18:29.143125 kernel: NET: Registered PF_XDP protocol family Feb 13 19:18:29.143338 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:18:29.143532 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:18:29.144769 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:18:29.144946 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Feb 13 19:18:29.145143 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 19:18:29.145180 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:18:29.145201 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 19:18:29.145221 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Feb 13 19:18:29.145241 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 19:18:29.145268 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Feb 13 19:18:29.145287 kernel: clocksource: Switched to clocksource tsc Feb 
13 19:18:29.145306 kernel: Initialise system trusted keyrings Feb 13 19:18:29.145325 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 13 19:18:29.145345 kernel: Key type asymmetric registered Feb 13 19:18:29.145363 kernel: Asymmetric key parser 'x509' registered Feb 13 19:18:29.145382 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:18:29.145401 kernel: io scheduler mq-deadline registered Feb 13 19:18:29.145424 kernel: io scheduler kyber registered Feb 13 19:18:29.145444 kernel: io scheduler bfq registered Feb 13 19:18:29.145463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:18:29.145483 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 19:18:29.146785 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 13 19:18:29.146820 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 13 19:18:29.147014 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 13 19:18:29.147040 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 19:18:29.147235 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 13 19:18:29.147267 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:18:29.147288 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:18:29.147308 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 19:18:29.147327 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 13 19:18:29.147346 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 13 19:18:29.147537 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 13 19:18:29.149650 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 19:18:29.149672 kernel: i8042: Warning: Keylock active Feb 13 19:18:29.149700 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 19:18:29.149719 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 19:18:29.149947 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 13 19:18:29.150129 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 19:18:29.150322 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T19:18:28 UTC (1739474308) Feb 13 19:18:29.150505 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 19:18:29.150530 kernel: intel_pstate: CPU model not supported Feb 13 19:18:29.150593 kernel: pstore: Using crash dump compression: deflate Feb 13 19:18:29.150619 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 19:18:29.150639 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:18:29.150659 kernel: Segment Routing with IPv6 Feb 13 19:18:29.150678 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:18:29.150698 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:18:29.150717 kernel: Key type dns_resolver registered Feb 13 19:18:29.150737 kernel: IPI shorthand broadcast: enabled Feb 13 19:18:29.150756 kernel: sched_clock: Marking stable (868004262, 131792483)->(1034138965, -34342220) Feb 13 19:18:29.150775 kernel: registered taskstats version 1 Feb 13 19:18:29.150799 kernel: Loading compiled-in X.509 certificates Feb 13 19:18:29.150819 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53' Feb 13 19:18:29.150837 kernel: Key type .fscrypt registered Feb 13 19:18:29.150856 kernel: Key type fscrypt-provisioning registered Feb 13 19:18:29.150876 kernel: ima: Allocated hash algorithm: 
sha1 Feb 13 19:18:29.150896 kernel: ima: No architecture policies found Feb 13 19:18:29.150915 kernel: clk: Disabling unused clocks Feb 13 19:18:29.150934 kernel: Freeing unused kernel image (initmem) memory: 43476K Feb 13 19:18:29.150952 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:18:29.150976 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K Feb 13 19:18:29.150995 kernel: Run /init as init process Feb 13 19:18:29.151014 kernel: with arguments: Feb 13 19:18:29.151034 kernel: /init Feb 13 19:18:29.151053 kernel: with environment: Feb 13 19:18:29.151071 kernel: HOME=/ Feb 13 19:18:29.151090 kernel: TERM=linux Feb 13 19:18:29.151109 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:18:29.151129 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 19:18:29.151154 systemd[1]: Successfully made /usr/ read-only. Feb 13 19:18:29.151187 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:18:29.151209 systemd[1]: Detected virtualization google. Feb 13 19:18:29.151228 systemd[1]: Detected architecture x86-64. Feb 13 19:18:29.151248 systemd[1]: Running in initrd. Feb 13 19:18:29.151268 systemd[1]: No hostname configured, using default hostname. Feb 13 19:18:29.151289 systemd[1]: Hostname set to . Feb 13 19:18:29.151312 systemd[1]: Initializing machine ID from random generator. Feb 13 19:18:29.151332 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:18:29.151353 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:18:29.151373 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:18:29.151395 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:18:29.151415 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:18:29.151435 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:18:29.151462 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:18:29.151504 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:18:29.151529 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:18:29.151565 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:29.151586 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:18:29.151608 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:18:29.151632 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:18:29.151654 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:18:29.151675 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:18:29.151696 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:18:29.151717 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Feb 13 19:18:29.151738 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:18:29.151760 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Feb 13 19:18:29.151781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:18:29.151806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:18:29.151827 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:18:29.151849 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:18:29.151870 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:18:29.151891 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:18:29.151911 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:18:29.151930 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:18:29.151950 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:18:29.151971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:18:29.151998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:29.152018 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:18:29.152083 systemd-journald[184]: Collecting audit messages is disabled. Feb 13 19:18:29.152131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:29.152158 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:18:29.152188 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:18:29.152211 systemd-journald[184]: Journal started Feb 13 19:18:29.152264 systemd-journald[184]: Runtime Journal (/run/log/journal/c3ceba5027c04b47b7db5fa748b9eb07) is 8M, max 148.6M, 140.6M free. Feb 13 19:18:29.154883 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:18:29.121476 systemd-modules-load[185]: Inserted module 'overlay' Feb 13 19:18:29.162773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:29.171246 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:18:29.182564 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:18:29.184411 systemd-modules-load[185]: Inserted module 'br_netfilter' Feb 13 19:18:29.187722 kernel: Bridge firewalling registered Feb 13 19:18:29.189891 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:29.194778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:18:29.197764 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:18:29.199945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:29.212847 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:29.224626 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:29.230864 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:18:29.241334 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 19:18:29.242029 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:29.253777 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:18:29.260767 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:18:29.276751 dracut-cmdline[218]: dracut-dracut-053 Feb 13 19:18:29.280919 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416 Feb 13 19:18:29.336511 systemd-resolved[219]: Positive Trust Anchors: Feb 13 19:18:29.336532 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:18:29.336633 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:18:29.342953 systemd-resolved[219]: Defaulting to hostname 'linux'. Feb 13 19:18:29.345721 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:18:29.353207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:18:29.393594 kernel: SCSI subsystem initialized Feb 13 19:18:29.403569 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:18:29.415578 kernel: iscsi: registered transport (tcp) Feb 13 19:18:29.439592 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:18:29.439693 kernel: QLogic iSCSI HBA Driver Feb 13 19:18:29.491708 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:18:29.499771 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:18:29.540581 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:18:29.540675 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:18:29.542572 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:18:29.587590 kernel: raid6: avx2x4 gen() 17753 MB/s Feb 13 19:18:29.604581 kernel: raid6: avx2x2 gen() 17931 MB/s Feb 13 19:18:29.622084 kernel: raid6: avx2x1 gen() 13625 MB/s Feb 13 19:18:29.622148 kernel: raid6: using algorithm avx2x2 gen() 17931 MB/s Feb 13 19:18:29.640179 kernel: raid6: .... xor() 18563 MB/s, rmw enabled Feb 13 19:18:29.640256 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:18:29.663590 kernel: xor: automatically using best checksumming function avx Feb 13 19:18:29.831586 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:18:29.845766 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:18:29.859841 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:18:29.879246 systemd-udevd[402]: Using default interface naming scheme 'v255'. 
Feb 13 19:18:29.888476 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:18:29.904144 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:18:29.933046 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Feb 13 19:18:29.971505 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:18:29.976769 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:18:30.074166 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:18:30.094788 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:18:30.144838 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:18:30.168322 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:18:30.192028 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:30.221713 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:18:30.210384 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:18:30.239570 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:18:30.239672 kernel: AES CTR mode by8 optimization enabled Feb 13 19:18:30.255588 kernel: scsi host0: Virtio SCSI HBA Feb 13 19:18:30.268929 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:18:30.306769 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Feb 13 19:18:30.327901 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:18:30.328086 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:30.361669 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:30.458726 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Feb 13 19:18:30.459022 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Feb 13 19:18:30.459285 kernel: sd 0:0:1:0: [sda] Write Protect is off Feb 13 19:18:30.459515 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Feb 13 19:18:30.459782 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Feb 13 19:18:30.460030 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:18:30.460060 kernel: GPT:17805311 != 25165823 Feb 13 19:18:30.460085 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:18:30.460120 kernel: GPT:17805311 != 25165823 Feb 13 19:18:30.460145 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:18:30.460169 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:18:30.460195 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Feb 13 19:18:30.372892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:18:30.372999 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:30.411961 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:30.487798 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:30.521586 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (448) Feb 13 19:18:30.526615 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Feb 13 19:18:30.550756 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (466) Feb 13 19:18:30.527575 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:18:30.582532 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:30.613562 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Feb 13 19:18:30.635533 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Feb 13 19:18:30.666659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 19:18:30.677434 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Feb 13 19:18:30.686925 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Feb 13 19:18:30.714958 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:18:30.749455 disk-uuid[542]: Primary Header is updated. Feb 13 19:18:30.749455 disk-uuid[542]: Secondary Entries is updated. Feb 13 19:18:30.749455 disk-uuid[542]: Secondary Header is updated. Feb 13 19:18:30.773939 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:18:30.767720 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:18:30.824036 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:31.799367 disk-uuid[543]: The operation has completed successfully. Feb 13 19:18:31.808795 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 13 19:18:31.877465 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:18:31.877650 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:18:31.945800 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:18:31.975961 sh[566]: Success Feb 13 19:18:32.000610 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 13 19:18:32.095472 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:18:32.103729 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:18:32.130206 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:18:32.185764 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b Feb 13 19:18:32.185858 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:18:32.185884 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:18:32.195195 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:18:32.207716 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:18:32.229599 kernel: BTRFS info (device dm-0): enabling ssd optimizations Feb 13 19:18:32.237007 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:18:32.237973 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:18:32.244744 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Feb 13 19:18:32.320007 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:32.320054 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:18:32.320089 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:18:32.320113 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:18:32.320136 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:18:32.297868 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:18:32.341710 kernel: BTRFS info (device sda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:32.354299 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:18:32.368879 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:18:32.497597 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:18:32.540680 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:18:32.566349 ignition[646]: Ignition 2.20.0 Feb 13 19:18:32.566367 ignition[646]: Stage: fetch-offline Feb 13 19:18:32.569130 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:18:32.566425 ignition[646]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:32.606474 systemd-networkd[750]: lo: Link UP Feb 13 19:18:32.566441 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:18:32.606480 systemd-networkd[750]: lo: Gained carrier Feb 13 19:18:32.566803 ignition[646]: parsed url from cmdline: "" Feb 13 19:18:32.608338 systemd-networkd[750]: Enumeration completed Feb 13 19:18:32.566811 ignition[646]: no config URL provided Feb 13 19:18:32.608916 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:32.566821 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:18:32.608924 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:18:32.566845 ignition[646]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:18:32.609298 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:18:32.566857 ignition[646]: failed to fetch config: resource requires networking Feb 13 19:18:32.610705 systemd-networkd[750]: eth0: Link UP Feb 13 19:18:32.567155 ignition[646]: Ignition finished successfully Feb 13 19:18:32.610712 systemd-networkd[750]: eth0: Gained carrier Feb 13 19:18:32.684360 ignition[760]: Ignition 2.20.0 Feb 13 19:18:32.610743 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:32.684370 ignition[760]: Stage: fetch Feb 13 19:18:32.619660 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.48/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 19:18:32.684591 ignition[760]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:32.622807 systemd[1]: Reached target network.target - Network. Feb 13 19:18:32.684609 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:18:32.636804 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Feb 13 19:18:32.684720 ignition[760]: parsed url from cmdline: "" Feb 13 19:18:32.694683 unknown[760]: fetched base config from "system" Feb 13 19:18:32.684743 ignition[760]: no config URL provided Feb 13 19:18:32.694696 unknown[760]: fetched base config from "system" Feb 13 19:18:32.684752 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:18:32.694707 unknown[760]: fetched user config from "gcp" Feb 13 19:18:32.684764 ignition[760]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:18:32.697227 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 19:18:32.684790 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Feb 13 19:18:32.721792 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:18:32.688853 ignition[760]: GET result: OK Feb 13 19:18:32.749892 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:18:32.688929 ignition[760]: parsing config with SHA512: d973a0e938617c4358b153f13084fde2b8c41d59192e06c32d1209de43092b72026e7ee7153316693e1e95450672c6b26001e211f8c4f2f1fbbc58248230098c Feb 13 19:18:32.779744 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:18:32.695228 ignition[760]: fetch: fetch complete Feb 13 19:18:32.804162 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:18:32.695235 ignition[760]: fetch: fetch passed Feb 13 19:18:32.810578 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:18:32.695292 ignition[760]: Ignition finished successfully Feb 13 19:18:32.845866 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:18:32.747388 ignition[767]: Ignition 2.20.0 Feb 13 19:18:32.851877 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:18:32.747399 ignition[767]: Stage: kargs Feb 13 19:18:32.874022 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:18:32.747649 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:32.898873 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:18:32.747663 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:18:32.913913 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:18:32.748762 ignition[767]: kargs: kargs passed Feb 13 19:18:32.748823 ignition[767]: Ignition finished successfully Feb 13 19:18:32.799281 ignition[773]: Ignition 2.20.0 Feb 13 19:18:32.799293 ignition[773]: Stage: disks Feb 13 19:18:32.799500 ignition[773]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:32.799513 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:18:32.800803 ignition[773]: disks: disks passed Feb 13 19:18:32.800869 ignition[773]: Ignition finished successfully Feb 13 19:18:32.957712 systemd-fsck[782]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 19:18:33.160529 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:18:33.191758 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:18:33.309589 kernel: EXT4-fs (sda9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none. Feb 13 19:18:33.310441 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:18:33.311367 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Feb 13 19:18:33.334762 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:18:33.372455 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:18:33.403717 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (790) Feb 13 19:18:33.403754 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:33.403771 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:18:33.403786 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:18:33.423491 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:18:33.423609 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:18:33.423982 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:18:33.424075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:18:33.424119 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:18:33.449923 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:18:33.473955 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:18:33.497847 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:18:33.638892 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:18:33.648986 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:18:33.659704 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:18:33.669739 initrd-setup-root[835]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:18:33.813538 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:18:33.817792 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:18:33.846927 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:18:33.869780 kernel: BTRFS info (device sda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:33.879842 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:18:33.911568 ignition[902]: INFO : Ignition 2.20.0 Feb 13 19:18:33.911568 ignition[902]: INFO : Stage: mount Feb 13 19:18:33.911568 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:33.911568 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:18:33.964881 ignition[902]: INFO : mount: mount passed Feb 13 19:18:33.964881 ignition[902]: INFO : Ignition finished successfully Feb 13 19:18:33.914345 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:18:33.921246 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:18:33.947749 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:18:33.998514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:18:34.057642 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (914) Feb 13 19:18:34.057695 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e Feb 13 19:18:34.057722 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:18:34.057746 kernel: BTRFS info (device sda6): using free space tree Feb 13 19:18:33.998619 systemd-networkd[750]: eth0: Gained IPv6LL Feb 13 19:18:34.074835 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 19:18:34.074888 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 19:18:34.084515 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:18:34.124311 ignition[931]: INFO : Ignition 2.20.0 Feb 13 19:18:34.124311 ignition[931]: INFO : Stage: files Feb 13 19:18:34.138810 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:34.138810 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:18:34.138810 ignition[931]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:18:34.138810 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:18:34.138810 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:18:34.138810 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:18:34.138810 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:18:34.138810 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:18:34.137128 unknown[931]: wrote ssh authorized keys file for user: core Feb 13 19:18:34.239751 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:18:34.239751 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 19:18:34.273728 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:18:34.515750 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:18:34.532739 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:18:34.532739 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 13 19:18:34.864566 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 13 19:18:35.032896 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 19:18:35.048724 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 19:18:35.282906 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Feb 13 19:18:35.706308 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 19:18:35.724774 ignition[931]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Feb 13 19:18:35.724774 ignition[931]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:18:35.724774 ignition[931]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:18:35.724774 ignition[931]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Feb 13 19:18:35.724774 ignition[931]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:18:35.724774 ignition[931]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:18:35.724774 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:18:35.724774 ignition[931]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:18:35.724774 ignition[931]: INFO : files: files passed Feb 13 19:18:35.724774 ignition[931]: INFO : Ignition finished successfully Feb 13 19:18:35.711252 systemd[1]: Finished ignition-files.service - Ignition (files). 
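The Ignition "files" stage above downloads the helm and cilium-cli archives, writes several manifests under /home/core, links the kubernetes sysext image into /etc/extensions, and enables prepare-helm.service. The config that drives this is not part of the journal; the sketch below only reconstructs its general shape as an Ignition-style JSON document built in Python. URLs, file paths, and the symlink target are taken from the journal entries; the spec version, SSH key, and unit body are placeholders, not known values.

    import json

    # Hypothetical reconstruction of the provisioning config behind the log above.
    ignition_config = {
        "ignition": {"version": "3.4.0"},  # assumed; the actual spec version is not logged
        "passwd": {
            "users": [
                {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
            ]
        },
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
                 "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\n# placeholder unit body\n"},
            ],
        },
    }

    print(json.dumps(ignition_config, indent=2))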
Feb 13 19:18:35.732819 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:18:35.768865 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:18:35.820521 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:18:35.957749 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:35.957749 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:35.820748 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:18:36.024768 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:18:35.832721 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:18:35.845074 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:18:35.875838 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:18:35.955496 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:18:35.955655 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:18:35.969064 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:18:35.982921 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:18:36.014941 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:18:36.022778 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:18:36.073978 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:18:36.100811 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:18:36.135697 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:18:36.146934 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:36.168040 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:18:36.188968 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:18:36.189142 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:18:36.222030 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:18:36.243033 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:18:36.261905 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:18:36.281953 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:18:36.300879 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:18:36.320019 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:18:36.340893 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:18:36.361999 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:18:36.379930 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:18:36.399912 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:18:36.417953 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:18:36.418168 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 19:18:36.447996 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:18:36.468996 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:36.489872 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:18:36.490108 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:18:36.510924 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:18:36.658725 ignition[983]: INFO : Ignition 2.20.0 Feb 13 19:18:36.658725 ignition[983]: INFO : Stage: umount Feb 13 19:18:36.658725 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:18:36.658725 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 19:18:36.658725 ignition[983]: INFO : umount: umount passed Feb 13 19:18:36.658725 ignition[983]: INFO : Ignition finished successfully Feb 13 19:18:36.511135 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:18:36.535971 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:18:36.536207 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:18:36.557048 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:18:36.557275 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:18:36.583923 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:18:36.606880 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:18:36.625712 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:18:36.626072 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:18:36.637951 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:18:36.638162 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:18:36.663297 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:18:36.665295 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:18:36.665449 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:18:36.668392 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:18:36.668509 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:18:36.683856 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:18:36.684041 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:18:36.698964 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:18:36.699042 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:18:36.716969 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 19:18:36.717048 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 19:18:36.731965 systemd[1]: Stopped target network.target - Network. Feb 13 19:18:36.754818 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:18:36.754937 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:18:36.775899 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:18:36.793720 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:18:36.799650 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:18:36.804939 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:18:36.822939 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:18:36.837951 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:18:36.838021 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:18:36.854977 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:18:36.855035 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:18:36.881904 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:18:36.882000 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:18:36.892040 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:18:36.892120 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:18:36.919925 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:18:36.920012 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:18:36.940104 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:18:36.960875 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:18:36.980413 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:18:36.980612 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:18:36.991308 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Feb 13 19:18:36.991646 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:18:36.991790 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:18:37.017377 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Feb 13 19:18:37.017782 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:18:37.017891 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:18:37.039274 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:18:37.039358 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:18:37.047745 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:18:37.602722 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Feb 13 19:18:37.075698 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:18:37.075834 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:18:37.086910 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:18:37.086991 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:37.095131 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:18:37.095215 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:37.121917 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:18:37.122002 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:18:37.141061 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:18:37.169413 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 13 19:18:37.169514 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Feb 13 19:18:37.170068 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:18:37.170227 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:18:37.211923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:18:37.212002 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:18:37.227973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:18:37.228027 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:18:37.254887 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:18:37.254979 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:18:37.283075 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:18:37.283168 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:18:37.323770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:18:37.324003 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:18:37.357788 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:18:37.371880 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:18:37.371966 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:18:37.401058 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 19:18:37.401133 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:18:37.421809 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:18:37.421909 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:37.442814 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:18:37.442916 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:37.463270 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 13 19:18:37.463358 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Feb 13 19:18:37.463933 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:18:37.464049 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:18:37.472174 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:18:37.472287 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:18:37.491280 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:18:37.513788 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:18:37.549490 systemd[1]: Switching root. 
Feb 13 19:18:38.021680 systemd-journald[184]: Journal stopped Feb 13 19:18:40.529748 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:18:40.529808 kernel: SELinux: policy capability open_perms=1 Feb 13 19:18:40.529834 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:18:40.529850 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:18:40.529867 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:18:40.529885 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:18:40.529908 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:18:40.529926 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:18:40.529949 kernel: audit: type=1403 audit(1739474318.265:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:18:40.529974 systemd[1]: Successfully loaded SELinux policy in 94.864ms. Feb 13 19:18:40.529997 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.718ms. Feb 13 19:18:40.530020 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Feb 13 19:18:40.530040 systemd[1]: Detected virtualization google. Feb 13 19:18:40.530061 systemd[1]: Detected architecture x86-64. Feb 13 19:18:40.530090 systemd[1]: Detected first boot. Feb 13 19:18:40.530130 systemd[1]: Initializing machine ID from random generator. Feb 13 19:18:40.530153 zram_generator::config[1026]: No configuration found. Feb 13 19:18:40.530175 kernel: Guest personality initialized and is inactive Feb 13 19:18:40.530196 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Feb 13 19:18:40.530222 kernel: Initialized host personality Feb 13 19:18:40.530242 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:18:40.530261 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:18:40.530284 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:18:40.530304 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:18:40.530325 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:18:40.530345 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:18:40.530367 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:18:40.530389 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:18:40.530416 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:18:40.530437 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:18:40.530462 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:18:40.530483 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:18:40.530505 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:18:40.530532 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:18:40.530577 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Feb 13 19:18:40.530605 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:18:40.530627 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:18:40.530648 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:18:40.530670 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:18:40.530695 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:18:40.530726 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:18:40.530749 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:18:40.530772 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:18:40.530800 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:18:40.530824 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:18:40.530848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:18:40.530873 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:18:40.530898 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:18:40.530920 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:18:40.530943 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:18:40.530966 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:18:40.530996 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:18:40.531019 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:18:40.531042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:18:40.531066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:18:40.531104 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:18:40.531130 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:18:40.531155 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:18:40.531180 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:18:40.531204 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:18:40.531226 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:18:40.531248 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:18:40.531268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:18:40.531295 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:18:40.531319 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:18:40.531340 systemd[1]: Reached target machines.target - Containers. Feb 13 19:18:40.531361 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:18:40.531384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 19:18:40.531406 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:18:40.531427 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:18:40.531447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:18:40.531468 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:18:40.531497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:18:40.531519 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:18:40.531537 kernel: ACPI: bus type drm_connector registered Feb 13 19:18:40.531589 kernel: fuse: init (API version 7.39) Feb 13 19:18:40.531612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:18:40.531627 kernel: loop: module loaded Feb 13 19:18:40.531641 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:18:40.531659 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:18:40.531673 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:18:40.531687 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:18:40.531701 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:18:40.531716 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:18:40.531730 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:18:40.531744 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:18:40.531789 systemd-journald[1114]: Collecting audit messages is disabled. Feb 13 19:18:40.531823 systemd-journald[1114]: Journal started Feb 13 19:18:40.531907 systemd-journald[1114]: Runtime Journal (/run/log/journal/97fc90bfe84948d8af03e70d1f4c1292) is 8M, max 148.6M, 140.6M free. Feb 13 19:18:39.273968 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:18:39.286312 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Feb 13 19:18:39.286991 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:18:40.550609 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:18:40.565599 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:18:40.600580 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:18:40.626614 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:18:40.649895 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:18:40.649999 systemd[1]: Stopped verity-setup.service. Feb 13 19:18:40.687401 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:18:40.687523 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:18:40.699291 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:18:40.709992 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Feb 13 19:18:40.721067 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:18:40.730951 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:18:40.740920 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:18:40.750920 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:18:40.762138 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:18:40.774165 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:18:40.786086 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:18:40.786394 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:18:40.798130 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:18:40.798435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:18:40.810105 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:18:40.810413 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:18:40.821084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:18:40.821385 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:18:40.833082 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:18:40.833376 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:18:40.844120 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:18:40.844428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:18:40.855143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:18:40.865150 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:18:40.877156 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:18:40.889143 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:18:40.901139 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:18:40.926969 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:18:40.948785 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:18:40.963724 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:18:40.973814 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:18:40.974134 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:18:40.985104 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:18:41.001819 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:18:41.019635 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:18:41.029909 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:18:41.036959 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:18:41.054154 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Feb 13 19:18:41.063075 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:18:41.066215 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:18:41.076206 systemd-journald[1114]: Time spent on flushing to /var/log/journal/97fc90bfe84948d8af03e70d1f4c1292 is 79.268ms for 948 entries. Feb 13 19:18:41.076206 systemd-journald[1114]: System Journal (/var/log/journal/97fc90bfe84948d8af03e70d1f4c1292) is 8M, max 584.8M, 576.8M free. Feb 13 19:18:41.180785 systemd-journald[1114]: Received client request to flush runtime journal. Feb 13 19:18:41.091714 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:18:41.107794 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:18:41.126821 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:18:41.145827 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:18:41.164251 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:18:41.169213 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:18:41.191944 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:18:41.206891 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:18:41.207521 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:18:41.208663 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:18:41.228573 kernel: loop0: detected capacity change from 0 to 147912 Feb 13 19:18:41.246458 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:18:41.258230 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Feb 13 19:18:41.258266 systemd-tmpfiles[1151]: ACLs are not supported, ignoring. Feb 13 19:18:41.281333 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:18:41.305065 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:18:41.317753 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:18:41.340748 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:18:41.344384 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:18:41.358007 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:18:41.359467 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:18:41.372696 udevadm[1152]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:18:41.382248 kernel: loop1: detected capacity change from 0 to 52152 Feb 13 19:18:41.431205 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:18:41.450812 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:18:41.459768 kernel: loop2: detected capacity change from 0 to 205544 Feb 13 19:18:41.514776 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. 
Feb 13 19:18:41.515296 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Feb 13 19:18:41.533030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:18:41.595602 kernel: loop3: detected capacity change from 0 to 138176 Feb 13 19:18:41.687233 kernel: loop4: detected capacity change from 0 to 147912 Feb 13 19:18:41.746678 kernel: loop5: detected capacity change from 0 to 52152 Feb 13 19:18:41.786565 kernel: loop6: detected capacity change from 0 to 205544 Feb 13 19:18:41.842270 kernel: loop7: detected capacity change from 0 to 138176 Feb 13 19:18:41.895353 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Feb 13 19:18:41.897560 (sd-merge)[1178]: Merged extensions into '/usr'. Feb 13 19:18:41.907126 systemd[1]: Reload requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:18:41.907156 systemd[1]: Reloading... Feb 13 19:18:42.086664 zram_generator::config[1206]: No configuration found. Feb 13 19:18:42.322633 ldconfig[1145]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:18:42.374092 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:18:42.510017 systemd[1]: Reloading finished in 600 ms. Feb 13 19:18:42.528900 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:18:42.539466 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:18:42.553537 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:18:42.577241 systemd[1]: Starting ensure-sysext.service... Feb 13 19:18:42.586824 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:18:42.607862 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:18:42.638584 systemd[1]: Reload requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:18:42.638622 systemd[1]: Reloading... Feb 13 19:18:42.642489 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:18:42.643194 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:18:42.650306 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:18:42.654947 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Feb 13 19:18:42.655088 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Feb 13 19:18:42.667086 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:18:42.667112 systemd-tmpfiles[1248]: Skipping /boot Feb 13 19:18:42.686249 systemd-udevd[1249]: Using default interface naming scheme 'v255'. Feb 13 19:18:42.710205 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:18:42.710234 systemd-tmpfiles[1248]: Skipping /boot Feb 13 19:18:42.828579 zram_generator::config[1285]: No configuration found. 
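The (sd-merge) entries below this point show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-gce extension images onto /usr. As a rough illustration of where such images are discovered, the following sketch scans the usual sysext directories; the exact search path on this image (in particular how the oem-gce extension is provided) is an assumption, not something the log states.

    import pathlib

    # Common systemd-sysext image locations; treated here as an assumption rather
    # than the authoritative list used by this Flatcar build.
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions",
                   "/var/lib/extensions", "/usr/lib/extensions")

    def list_extension_images():
        found = []
        for d in SEARCH_DIRS:
            p = pathlib.Path(d)
            if not p.is_dir():
                continue
            # Plain .raw images and symlinks both count, e.g. the
            # /etc/extensions/kubernetes.raw link written by Ignition earlier.
            for img in sorted(p.glob("*.raw")):
                found.append((str(img), str(img.resolve())))
        return found

    if __name__ == "__main__":
        for link, target in list_extension_images():
            print(f"{link} -> {target}")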
Feb 13 19:18:43.117613 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1339) Feb 13 19:18:43.131574 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 13 19:18:43.176380 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Feb 13 19:18:43.153699 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:18:43.223572 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:18:43.346822 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:18:43.347778 systemd[1]: Reloading finished in 708 ms. Feb 13 19:18:43.359926 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:18:43.360042 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:18:43.369347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:18:43.389849 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Feb 13 19:18:43.389981 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 19:18:43.397599 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:18:43.422917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:18:43.454798 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:18:43.479612 systemd[1]: Finished ensure-sysext.service. Feb 13 19:18:43.516701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 19:18:43.527911 systemd[1]: Reached target tpm2.target - Trusted Platform Module. Feb 13 19:18:43.537761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:18:43.549898 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:18:43.568555 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:18:43.580061 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:18:43.588816 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:18:43.607977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:18:43.628825 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:18:43.631156 augenrules[1376]: No rules Feb 13 19:18:43.636982 lvm[1369]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:18:43.646449 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:18:43.665894 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:18:43.677736 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 19:18:43.686920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:18:43.693083 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Feb 13 19:18:43.704710 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:18:43.712106 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:18:43.734785 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:18:43.757822 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:18:43.767731 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:18:43.785815 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:18:43.803848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:18:43.813694 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:18:43.819084 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:18:43.819659 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:18:43.830315 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:18:43.831126 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:18:43.831493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:18:43.831993 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:18:43.832502 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:18:43.832831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:18:43.833233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:18:43.833762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:18:43.834287 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:18:43.834687 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:18:43.839438 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:18:43.840205 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:18:43.849045 systemd[1]: Finished setup-oem.service - Setup OEM. Feb 13 19:18:43.858809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:18:43.863794 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:18:43.866495 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Feb 13 19:18:43.868689 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:18:43.868818 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:18:43.870709 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:18:43.876757 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Feb 13 19:18:43.876865 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:18:43.878444 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:18:43.884838 lvm[1406]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:18:43.930286 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:18:43.950428 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:18:43.974776 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Feb 13 19:18:43.987072 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:18:43.999702 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:18:44.115402 systemd-networkd[1388]: lo: Link UP Feb 13 19:18:44.115422 systemd-networkd[1388]: lo: Gained carrier Feb 13 19:18:44.118201 systemd-networkd[1388]: Enumeration completed Feb 13 19:18:44.118386 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:18:44.119196 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:44.119214 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:18:44.119872 systemd-networkd[1388]: eth0: Link UP Feb 13 19:18:44.119889 systemd-networkd[1388]: eth0: Gained carrier Feb 13 19:18:44.119916 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:18:44.126329 systemd-resolved[1389]: Positive Trust Anchors: Feb 13 19:18:44.126350 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:18:44.126410 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:18:44.134652 systemd-networkd[1388]: eth0: DHCPv4 address 10.128.0.48/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 19:18:44.134974 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:18:44.137513 systemd-resolved[1389]: Defaulting to hostname 'linux'. Feb 13 19:18:44.153861 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:18:44.165947 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:18:44.176022 systemd[1]: Reached target network.target - Network. Feb 13 19:18:44.184720 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:18:44.195727 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:18:44.205847 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Feb 13 19:18:44.217760 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:18:44.228943 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:18:44.238870 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:18:44.249728 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:18:44.260704 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:18:44.260771 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:18:44.269737 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:18:44.280280 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:18:44.291501 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:18:44.304022 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:18:44.315954 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:18:44.326719 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:18:44.347492 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:18:44.358297 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:18:44.371073 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:18:44.383020 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:18:44.393637 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:18:44.403708 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:18:44.412813 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:18:44.412871 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:18:44.424732 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:18:44.439824 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 19:18:44.461064 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:18:44.499665 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:18:44.510586 jq[1442]: false Feb 13 19:18:44.521396 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:18:44.533218 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:18:44.540852 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:18:44.557930 systemd[1]: Started ntpd.service - Network Time Service. 
Feb 13 19:18:44.563576 coreos-metadata[1440]: Feb 13 19:18:44.561 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 13 19:18:44.565012 coreos-metadata[1440]: Feb 13 19:18:44.564 INFO Fetch successful Feb 13 19:18:44.565198 coreos-metadata[1440]: Feb 13 19:18:44.565 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 13 19:18:44.570419 extend-filesystems[1445]: Found loop4 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found loop5 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found loop6 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found loop7 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found sda Feb 13 19:18:44.578768 extend-filesystems[1445]: Found sda1 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found sda2 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found sda3 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found usr Feb 13 19:18:44.578768 extend-filesystems[1445]: Found sda4 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found sda6 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found sda7 Feb 13 19:18:44.578768 extend-filesystems[1445]: Found sda9 Feb 13 19:18:44.578768 extend-filesystems[1445]: Checking size of /dev/sda9 Feb 13 19:18:44.751732 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 13 19:18:44.751827 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 13 19:18:44.751865 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1288) Feb 13 19:18:44.751930 coreos-metadata[1440]: Feb 13 19:18:44.572 INFO Fetch successful Feb 13 19:18:44.751930 coreos-metadata[1440]: Feb 13 19:18:44.572 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 13 19:18:44.751930 coreos-metadata[1440]: Feb 13 19:18:44.576 INFO Fetch successful Feb 13 19:18:44.751930 coreos-metadata[1440]: Feb 13 19:18:44.576 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 13 19:18:44.751930 coreos-metadata[1440]: Feb 13 19:18:44.576 INFO Fetch successful Feb 13 19:18:44.575790 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:18:44.752298 extend-filesystems[1445]: Resized partition /dev/sda9 Feb 13 19:18:44.588358 dbus-daemon[1441]: [system] SELinux support is enabled Feb 13 19:18:44.621861 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:18:44.767567 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:18:44.767567 extend-filesystems[1455]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 19:18:44.767567 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 13 19:18:44.767567 extend-filesystems[1455]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 13 19:18:44.593727 dbus-daemon[1441]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1388 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 19:18:44.646289 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
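The coreos-metadata agent above queries the GCE metadata server at 169.254.169.254 for the instance hostname, external IP, internal IP, and machine type. A minimal stand-alone sketch of the same kind of query is shown below; it assumes only the standard GCE requirement that requests carry a Metadata-Flavor: Google header, and it will of course only succeed when run on a GCE instance.

    import urllib.request

    METADATA_BASE = "http://169.254.169.254/computeMetadata/v1"

    def gce_metadata(path, timeout=5):
        # GCE's metadata server rejects requests that lack this header.
        req = urllib.request.Request(f"{METADATA_BASE}/{path}",
                                     headers={"Metadata-Flavor": "Google"})
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        # A subset of the endpoints the agent fetched in the journal above.
        for path in ("instance/hostname",
                     "instance/network-interfaces/0/ip",
                     "instance/machine-type"):
            print(path, "=", gce_metadata(path))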
Feb 13 19:18:44.860088 extend-filesystems[1445]: Resized filesystem in /dev/sda9 Feb 13 19:18:44.739796 ntpd[1448]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting Feb 13 19:18:44.681753 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:04:11 UTC 2025 (1): Starting Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: ---------------------------------------------------- Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: corporation. Support and training for ntp-4 are Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: available at https://www.nwtime.org/support Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: ---------------------------------------------------- Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: proto: precision = 0.102 usec (-23) Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: basedate set to 2025-02-01 Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: gps base set to 2025-02-02 (week 2352) Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: Listen normally on 3 eth0 10.128.0.48:123 Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: Listen normally on 4 lo [::1]:123 Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: bind(21) AF_INET6 fe80::4001:aff:fe80:30%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:30%2#123 Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: failed to init interface for address fe80::4001:aff:fe80:30%2 Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: Listening on routing socket on fd #21 for interface updates Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:18:44.870464 ntpd[1448]: 13 Feb 19:18:44 ntpd[1448]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:18:44.739830 ntpd[1448]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 19:18:44.698500 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 13 19:18:44.739845 ntpd[1448]: ---------------------------------------------------- Feb 13 19:18:44.701149 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:18:44.739859 ntpd[1448]: ntp-4 is maintained by Network Time Foundation, Feb 13 19:18:44.705825 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:18:44.739872 ntpd[1448]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 19:18:44.738701 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:18:44.876241 update_engine[1468]: I20250213 19:18:44.839072 1468 main.cc:92] Flatcar Update Engine starting Feb 13 19:18:44.876241 update_engine[1468]: I20250213 19:18:44.842034 1468 update_check_scheduler.cc:74] Next update check in 2m17s Feb 13 19:18:44.739886 ntpd[1448]: corporation. Support and training for ntp-4 are Feb 13 19:18:44.779933 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:18:44.877897 jq[1472]: true Feb 13 19:18:44.739902 ntpd[1448]: available at https://www.nwtime.org/support Feb 13 19:18:44.809094 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:18:44.739916 ntpd[1448]: ---------------------------------------------------- Feb 13 19:18:44.810731 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:18:44.746188 ntpd[1448]: proto: precision = 0.102 usec (-23) Feb 13 19:18:44.811262 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:18:44.747239 ntpd[1448]: basedate set to 2025-02-01 Feb 13 19:18:44.812651 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:18:44.747266 ntpd[1448]: gps base set to 2025-02-02 (week 2352) Feb 13 19:18:44.837371 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:18:44.751235 ntpd[1448]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 19:18:44.837691 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:18:44.751298 ntpd[1448]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 19:18:44.857250 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:18:44.751533 ntpd[1448]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 19:18:44.858669 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:18:44.752532 ntpd[1448]: Listen normally on 3 eth0 10.128.0.48:123 Feb 13 19:18:44.752649 ntpd[1448]: Listen normally on 4 lo [::1]:123 Feb 13 19:18:44.752716 ntpd[1448]: bind(21) AF_INET6 fe80::4001:aff:fe80:30%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 19:18:44.752746 ntpd[1448]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:30%2#123 Feb 13 19:18:44.752768 ntpd[1448]: failed to init interface for address fe80::4001:aff:fe80:30%2 Feb 13 19:18:44.752815 ntpd[1448]: Listening on routing socket on fd #21 for interface updates Feb 13 19:18:44.756438 ntpd[1448]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:18:44.756473 ntpd[1448]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 19:18:44.904485 systemd-logind[1466]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 19:18:44.907612 jq[1478]: true Feb 13 19:18:44.916671 systemd-logind[1466]: Watching system buttons on /dev/input/event3 (Sleep Button) Feb 13 19:18:44.916715 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:18:44.919921 systemd-logind[1466]: New seat seat0. Feb 13 19:18:44.938762 systemd[1]: Started systemd-logind.service - User Login Management. 
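
The bind failure on fe80::4001:aff:fe80:30%2 above happens because the link-local address was not yet usable when ntpd scanned the interfaces; once eth0 reports "Gained IPv6LL" later in the boot, ntpd picks the address up via its routing socket (see the "Listen normally on 6 eth0" line further down). A hedged sketch of the same kind of bind, showing that link-local sockets need the %scope qualifier (the interface name is an assumption for illustration):

    # Sketch: binding a link-local IPv6 address, as ntpd does per interface.
    # Link-local addresses are only valid with a scope id ("%eth0" / "%2"); the bind
    # also fails with "Cannot assign requested address" until the NIC actually has it.
    import socket

    addr = "fe80::4001:aff:fe80:30%eth0"    # interface name assumed for illustration
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    try:
        sock.bind((addr, 123))               # NTP listens on UDP/123 (needs root)
    except OSError as exc:
        print(f"bind failed (address not ready yet?): {exc}")
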
Feb 13 19:18:44.947399 dbus-daemon[1441]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:18:44.969877 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:18:44.990354 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 19:18:45.007624 sshd_keygen[1473]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:18:45.033843 tar[1477]: linux-amd64/helm Feb 13 19:18:45.059761 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:18:45.069638 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:18:45.072719 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:18:45.088277 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:18:45.103168 systemd[1]: Starting sshkeys.service... Feb 13 19:18:45.110790 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:18:45.111105 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:18:45.132659 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 19:18:45.142797 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:18:45.143090 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:18:45.166022 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:18:45.190919 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:18:45.224824 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:18:45.252736 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 19:18:45.279083 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 19:18:45.327346 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:18:45.327761 systemd-networkd[1388]: eth0: Gained IPv6LL Feb 13 19:18:45.329653 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:18:45.350061 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:18:45.361622 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
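
The "Updated /home/core/.ssh/authorized_keys" line above is the update-ssh-keys step rewriting the core user's key file. Not Flatcar's actual implementation, but a sketch of the same operation done safely, with the permissions sshd expects and an atomic swap so a partially written file is never visible:

    # Sketch (not the update-ssh-keys tool itself): rewrite ~/.ssh/authorized_keys
    # atomically with 0700/0600 permissions; the key string below is a placeholder.
    import os, tempfile

    def write_authorized_keys(home: str, keys: list[str]) -> None:
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        fd, tmp = tempfile.mkstemp(dir=ssh_dir)
        with os.fdopen(fd, "w") as f:
            f.write("\n".join(keys) + "\n")
        os.chmod(tmp, 0o600)
        os.replace(tmp, os.path.join(ssh_dir, "authorized_keys"))  # atomic swap

    write_authorized_keys("/home/core", ["ssh-ed25519 AAAA... core@example"])
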
Feb 13 19:18:45.369286 coreos-metadata[1522]: Feb 13 19:18:45.367 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 13 19:18:45.369286 coreos-metadata[1522]: Feb 13 19:18:45.369 INFO Fetch failed with 404: resource not found Feb 13 19:18:45.369286 coreos-metadata[1522]: Feb 13 19:18:45.369 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 13 19:18:45.378901 coreos-metadata[1522]: Feb 13 19:18:45.375 INFO Fetch successful Feb 13 19:18:45.378901 coreos-metadata[1522]: Feb 13 19:18:45.375 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 13 19:18:45.378901 coreos-metadata[1522]: Feb 13 19:18:45.377 INFO Fetch failed with 404: resource not found Feb 13 19:18:45.378901 coreos-metadata[1522]: Feb 13 19:18:45.377 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 13 19:18:45.378305 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:18:45.382761 coreos-metadata[1522]: Feb 13 19:18:45.380 INFO Fetch failed with 404: resource not found Feb 13 19:18:45.382761 coreos-metadata[1522]: Feb 13 19:18:45.380 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 13 19:18:45.382761 coreos-metadata[1522]: Feb 13 19:18:45.380 INFO Fetch successful Feb 13 19:18:45.386249 unknown[1522]: wrote ssh authorized keys file for user: core Feb 13 19:18:45.403475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:18:45.422326 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:18:45.442180 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Feb 13 19:18:45.469767 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:18:45.508689 init.sh[1541]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 13 19:18:45.512951 init.sh[1541]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 13 19:18:45.512951 init.sh[1541]: + /usr/bin/google_instance_setup Feb 13 19:18:45.510211 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:18:45.515076 dbus-daemon[1441]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 19:18:45.525855 dbus-daemon[1441]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1518 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 19:18:45.529132 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:18:45.537998 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:18:45.540611 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:18:45.551692 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 19:18:45.555621 update-ssh-keys[1542]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:18:45.565069 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 19:18:45.587719 systemd[1]: Finished sshkeys.service. Feb 13 19:18:45.597243 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:18:45.623063 systemd[1]: Starting polkit.service - Authorization Manager... 
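
The 404/success pattern above is the SSH-key agent walking the GCE key locations in order: the legacy per-instance sshKeys attribute, then instance ssh-keys, then project-level keys unless block-project-ssh-keys is set. A self-contained sketch reproducing the lookup order seen in the log (a 404 just means the attribute is unset):

    # Sketch of the key lookup order coreos-metadata follows above.
    import urllib.error, urllib.request

    def fetch(path: str) -> str | None:
        req = urllib.request.Request(
            "http://169.254.169.254/computeMetadata/v1/" + path,
            headers={"Metadata-Flavor": "Google"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as exc:
            if exc.code == 404:
                return None          # "Fetch failed with 404" lines above
            raise

    keys: list[str] = []
    for path in ("instance/attributes/sshKeys", "instance/attributes/ssh-keys"):
        if (v := fetch(path)):
            keys.extend(v.splitlines())
    # Project-wide keys are only consulted when the instance does not block them.
    if (fetch("instance/attributes/block-project-ssh-keys") or "false").lower() != "true":
        for path in ("project/attributes/sshKeys", "project/attributes/ssh-keys"):
            if (v := fetch(path)):
                keys.extend(v.splitlines())
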
Feb 13 19:18:45.688370 polkitd[1558]: Started polkitd version 121 Feb 13 19:18:45.720131 polkitd[1558]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 19:18:45.720235 polkitd[1558]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 19:18:45.720974 polkitd[1558]: Finished loading, compiling and executing 2 rules Feb 13 19:18:45.724177 dbus-daemon[1441]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 19:18:45.724440 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 19:18:45.726338 polkitd[1558]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 19:18:45.791191 systemd-hostnamed[1518]: Hostname set to (transient) Feb 13 19:18:45.794424 systemd-resolved[1389]: System hostname changed to 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal'. Feb 13 19:18:45.873579 containerd[1489]: time="2025-02-13T19:18:45.870285522Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:18:45.961143 containerd[1489]: time="2025-02-13T19:18:45.961036604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:45.965244 containerd[1489]: time="2025-02-13T19:18:45.965181444Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:45.965244 containerd[1489]: time="2025-02-13T19:18:45.965239217Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:18:45.965437 containerd[1489]: time="2025-02-13T19:18:45.965266299Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:18:45.967137 containerd[1489]: time="2025-02-13T19:18:45.967094266Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:18:45.967257 containerd[1489]: time="2025-02-13T19:18:45.967142773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:45.967257 containerd[1489]: time="2025-02-13T19:18:45.967234719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:45.967345 containerd[1489]: time="2025-02-13T19:18:45.967258004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:45.967640 containerd[1489]: time="2025-02-13T19:18:45.967603485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:45.967719 containerd[1489]: time="2025-02-13T19:18:45.967641690Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:45.967719 containerd[1489]: time="2025-02-13T19:18:45.967665708Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:45.967719 containerd[1489]: time="2025-02-13T19:18:45.967690726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:45.967873 containerd[1489]: time="2025-02-13T19:18:45.967841225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:45.969017 containerd[1489]: time="2025-02-13T19:18:45.968176803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:18:45.969017 containerd[1489]: time="2025-02-13T19:18:45.968412540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:18:45.969017 containerd[1489]: time="2025-02-13T19:18:45.968436693Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:18:45.969017 containerd[1489]: time="2025-02-13T19:18:45.968587976Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:18:45.969017 containerd[1489]: time="2025-02-13T19:18:45.968659044Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:18:45.984325 containerd[1489]: time="2025-02-13T19:18:45.983696930Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:18:45.984325 containerd[1489]: time="2025-02-13T19:18:45.983781911Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:18:45.984325 containerd[1489]: time="2025-02-13T19:18:45.983813826Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:18:45.984325 containerd[1489]: time="2025-02-13T19:18:45.983841166Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:18:45.984325 containerd[1489]: time="2025-02-13T19:18:45.983865958Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:18:45.984325 containerd[1489]: time="2025-02-13T19:18:45.984080331Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:18:45.984704 containerd[1489]: time="2025-02-13T19:18:45.984449960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:18:45.984704 containerd[1489]: time="2025-02-13T19:18:45.984650046Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:18:45.984704 containerd[1489]: time="2025-02-13T19:18:45.984683815Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:18:45.984842 containerd[1489]: time="2025-02-13T19:18:45.984707775Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:18:45.984842 containerd[1489]: time="2025-02-13T19:18:45.984732217Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 13 19:18:45.984842 containerd[1489]: time="2025-02-13T19:18:45.984756422Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:18:45.984842 containerd[1489]: time="2025-02-13T19:18:45.984778209Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:18:45.984842 containerd[1489]: time="2025-02-13T19:18:45.984802856Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:18:45.984842 containerd[1489]: time="2025-02-13T19:18:45.984827713Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.984850603Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.984871980Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.984892867Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.984924118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.984951132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.984972110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.984995040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.985015636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.985037480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.985081434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.985105502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.985129204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.985153662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.987117 containerd[1489]: time="2025-02-13T19:18:45.985173150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985193585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985214219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985258110Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985302191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985324772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985344011Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985406540Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985433919Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985451883Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985472787Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.985524320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.987466561Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.987616883Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:18:45.989090 containerd[1489]: time="2025-02-13T19:18:45.987643166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:18:45.991815 containerd[1489]: time="2025-02-13T19:18:45.989577011Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:18:45.991815 containerd[1489]: time="2025-02-13T19:18:45.989680978Z" level=info msg="Connect containerd service" Feb 13 19:18:45.991815 containerd[1489]: time="2025-02-13T19:18:45.989875519Z" level=info msg="using legacy CRI server" Feb 13 19:18:45.991815 containerd[1489]: time="2025-02-13T19:18:45.989893786Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:18:45.991815 containerd[1489]: time="2025-02-13T19:18:45.990698064Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:18:45.998632 containerd[1489]: time="2025-02-13T19:18:45.997209307Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:18:45.998632 
containerd[1489]: time="2025-02-13T19:18:45.997428329Z" level=info msg="Start subscribing containerd event" Feb 13 19:18:45.998632 containerd[1489]: time="2025-02-13T19:18:45.997495225Z" level=info msg="Start recovering state" Feb 13 19:18:45.998632 containerd[1489]: time="2025-02-13T19:18:45.997617183Z" level=info msg="Start event monitor" Feb 13 19:18:45.998632 containerd[1489]: time="2025-02-13T19:18:45.997642277Z" level=info msg="Start snapshots syncer" Feb 13 19:18:45.998632 containerd[1489]: time="2025-02-13T19:18:45.997658021Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:18:45.998632 containerd[1489]: time="2025-02-13T19:18:45.997670062Z" level=info msg="Start streaming server" Feb 13 19:18:45.998632 containerd[1489]: time="2025-02-13T19:18:45.998470110Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:18:46.001571 containerd[1489]: time="2025-02-13T19:18:45.999064448Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:18:46.001571 containerd[1489]: time="2025-02-13T19:18:45.999696254Z" level=info msg="containerd successfully booted in 0.131738s" Feb 13 19:18:45.999851 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:18:46.199570 tar[1477]: linux-amd64/LICENSE Feb 13 19:18:46.200079 tar[1477]: linux-amd64/README.md Feb 13 19:18:46.218850 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:18:46.400708 instance-setup[1549]: INFO Running google_set_multiqueue. Feb 13 19:18:46.420958 instance-setup[1549]: INFO Set channels for eth0 to 2. Feb 13 19:18:46.426193 instance-setup[1549]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 13 19:18:46.428174 instance-setup[1549]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 13 19:18:46.428752 instance-setup[1549]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Feb 13 19:18:46.430518 instance-setup[1549]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 13 19:18:46.430920 instance-setup[1549]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 13 19:18:46.432988 instance-setup[1549]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 13 19:18:46.433316 instance-setup[1549]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 13 19:18:46.436086 instance-setup[1549]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 13 19:18:46.445996 instance-setup[1549]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 19:18:46.451105 instance-setup[1549]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 19:18:46.453527 instance-setup[1549]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 19:18:46.453625 instance-setup[1549]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 19:18:46.475646 init.sh[1541]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 19:18:46.637490 startup-script[1604]: INFO Starting startup scripts. Feb 13 19:18:46.644176 startup-script[1604]: INFO No startup scripts found in metadata. Feb 13 19:18:46.644260 startup-script[1604]: INFO Finished running startup scripts. Feb 13 19:18:46.651444 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
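
The containerd error above ("no network config found in /etc/cni/net.d") is expected at this point: the CRI plugin looks for a CNI config that the cluster's network add-on has not installed yet. Purely as an illustration of the file it is looking for, a sketch that writes a minimal .conflist (names, subnet and CNI version are assumptions; a real node gets this from its network plugin, not by hand):

    # Illustrative only: the shape of a CNI conflist under /etc/cni/net.d.
    import json, os

    conflist = {
        "cniVersion": "0.4.0",               # assumed version
        "name": "example-net",               # assumed name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {"type": "host-local", "subnet": "10.88.0.0/16"},
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }
    os.makedirs("/etc/cni/net.d", exist_ok=True)
    with open("/etc/cni/net.d/10-example.conflist", "w") as f:
        json.dump(conflist, f, indent=2)
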
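
The google_set_multiqueue output above is just writes into procfs and sysfs: one virtio-net IRQ pinned per CPU and one XPS mask per TX queue; the "Value too large for defined data type" lines are writes the kernel rejected. A minimal sketch of the same mechanism, using the IRQ numbers and 2-vCPU layout from the log:

    # Sketch: spread virtio-net IRQs and TX-queue XPS masks across CPUs,
    # mirroring the google_set_multiqueue lines above (2 vCPUs, 2 queues).
    def pin_irq(irq: int, cpu: int) -> None:
        with open(f"/proc/irq/{irq}/smp_affinity_list", "w") as f:
            f.write(str(cpu))                  # e.g. IRQ 31 -> CPU 0

    def set_xps(queue: int, cpu_mask: int) -> None:
        with open(f"/sys/class/net/eth0/queues/tx-{queue}/xps_cpus", "w") as f:
            f.write(format(cpu_mask, "x"))     # hex mask: 0x1 = CPU0, 0x2 = CPU1

    for irq, cpu in ((31, 0), (32, 0), (33, 1), (34, 1)):
        pin_irq(irq, cpu)                      # some writes may fail, as in the log
    set_xps(0, 0x1)                            # Queue 0 XPS=1
    set_xps(1, 0x2)                            # Queue 1 XPS=2
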
Feb 13 19:18:46.673285 init.sh[1541]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 19:18:46.675299 init.sh[1541]: + daemon_pids=() Feb 13 19:18:46.675299 init.sh[1541]: + for d in accounts clock_skew network Feb 13 19:18:46.675299 init.sh[1541]: + daemon_pids+=($!) Feb 13 19:18:46.675299 init.sh[1541]: + for d in accounts clock_skew network Feb 13 19:18:46.675299 init.sh[1541]: + daemon_pids+=($!) Feb 13 19:18:46.675299 init.sh[1541]: + for d in accounts clock_skew network Feb 13 19:18:46.675299 init.sh[1541]: + daemon_pids+=($!) Feb 13 19:18:46.675299 init.sh[1541]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 19:18:46.675299 init.sh[1541]: + /usr/bin/systemd-notify --ready Feb 13 19:18:46.675057 systemd[1]: Started sshd@0-10.128.0.48:22-147.75.109.163:52648.service - OpenSSH per-connection server daemon (147.75.109.163:52648). Feb 13 19:18:46.678287 init.sh[1609]: + /usr/bin/google_accounts_daemon Feb 13 19:18:46.678641 init.sh[1610]: + /usr/bin/google_clock_skew_daemon Feb 13 19:18:46.678918 init.sh[1611]: + /usr/bin/google_network_daemon Feb 13 19:18:46.712587 systemd[1]: Started oem-gce.service - GCE Linux Agent. Feb 13 19:18:46.732959 init.sh[1541]: + wait -n 1609 1610 1611 Feb 13 19:18:47.073068 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 52648 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:18:47.080052 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:47.104313 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:18:47.117707 google-networking[1611]: INFO Starting Google Networking daemon. Feb 13 19:18:47.123637 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:18:47.127158 google-clock-skew[1610]: INFO Starting Google Clock Skew daemon. Feb 13 19:18:47.138082 google-clock-skew[1610]: INFO Clock drift token has changed: 0. Feb 13 19:18:47.157026 systemd-logind[1466]: New session 1 of user core. Feb 13 19:18:47.174891 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:18:47.195062 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:18:47.227088 groupadd[1623]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 19:18:47.228088 (systemd)[1625]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:18:47.233312 groupadd[1623]: group added to /etc/gshadow: name=google-sudoers Feb 13 19:18:47.233447 systemd-logind[1466]: New session c1 of user core. Feb 13 19:18:47.297886 groupadd[1623]: new group: name=google-sudoers, GID=1000 Feb 13 19:18:47.351253 google-accounts[1609]: INFO Starting Google Accounts daemon. Feb 13 19:18:47.376786 google-accounts[1609]: WARNING OS Login not installed. Feb 13 19:18:47.380255 google-accounts[1609]: INFO Creating a new user account for 0. Feb 13 19:18:47.388439 init.sh[1638]: useradd: invalid user name '0': use --badname to ignore Feb 13 19:18:47.388946 google-accounts[1609]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Feb 13 19:18:47.000595 google-clock-skew[1610]: INFO Synced system time with hardware clock. Feb 13 19:18:47.018287 systemd-journald[1114]: Time jumped backwards, rotating. Feb 13 19:18:47.002142 systemd-resolved[1389]: Clock change detected. Flushing caches. 
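
In the init.sh trace above, the GCE guest daemons are forked and then /usr/bin/systemd-notify --ready tells systemd that this Type=notify unit is up, using the NOTIFY_SOCKET=/run/systemd/notify path exported just before it. The readiness protocol is a single datagram; a sketch of sending it without the helper binary:

    # Sketch: the sd_notify readiness message that "systemd-notify --ready" sends above.
    # systemd exports $NOTIFY_SOCKET to the service; a leading "@" marks an abstract socket.
    import os, socket

    def notify_ready() -> None:
        target = os.environ.get("NOTIFY_SOCKET")   # "/run/systemd/notify" in the log
        if not target:
            return                                  # not running under systemd
        if target.startswith("@"):
            target = "\0" + target[1:]              # abstract namespace socket
        with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as s:
            s.connect(target)
            s.sendall(b"READY=1")

    notify_ready()
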
Feb 13 19:18:47.006947 systemd[1625]: Queued start job for default target default.target. Feb 13 19:18:47.012722 systemd[1625]: Created slice app.slice - User Application Slice. Feb 13 19:18:47.012767 systemd[1625]: Reached target paths.target - Paths. Feb 13 19:18:47.012845 systemd[1625]: Reached target timers.target - Timers. Feb 13 19:18:47.023245 systemd[1625]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:18:47.036101 systemd[1625]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:18:47.038172 systemd[1625]: Reached target sockets.target - Sockets. Feb 13 19:18:47.038282 systemd[1625]: Reached target basic.target - Basic System. Feb 13 19:18:47.038362 systemd[1625]: Reached target default.target - Main User Target. Feb 13 19:18:47.038422 systemd[1625]: Startup finished in 284ms. Feb 13 19:18:47.038717 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:18:47.055400 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:18:47.111811 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:47.124122 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:18:47.127696 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:18:47.135624 systemd[1]: Startup finished in 1.049s (kernel) + 9.476s (initrd) + 9.447s (userspace) = 19.973s. Feb 13 19:18:47.246935 ntpd[1448]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:30%2]:123 Feb 13 19:18:47.247405 ntpd[1448]: 13 Feb 19:18:47 ntpd[1448]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:30%2]:123 Feb 13 19:18:47.298550 systemd[1]: Started sshd@1-10.128.0.48:22-147.75.109.163:52656.service - OpenSSH per-connection server daemon (147.75.109.163:52656). Feb 13 19:18:47.613818 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 52656 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:18:47.616311 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:47.626068 systemd-logind[1466]: New session 2 of user core. Feb 13 19:18:47.629295 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:18:47.832484 sshd[1663]: Connection closed by 147.75.109.163 port 52656 Feb 13 19:18:47.834098 sshd-session[1656]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:47.839559 systemd[1]: sshd@1-10.128.0.48:22-147.75.109.163:52656.service: Deactivated successfully. Feb 13 19:18:47.842391 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:18:47.844842 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:18:47.846841 systemd-logind[1466]: Removed session 2. Feb 13 19:18:47.891982 systemd[1]: Started sshd@2-10.128.0.48:22-147.75.109.163:52660.service - OpenSSH per-connection server daemon (147.75.109.163:52660). Feb 13 19:18:48.057550 kubelet[1649]: E0213 19:18:48.057472 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:18:48.060771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:18:48.061020 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
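
The kubelet failure above (and its later retries) is simply /var/lib/kubelet/config.yaml not existing yet; on a kubeadm-managed node that file is written during kubeadm init/join, so the real remedy is bootstrapping the node rather than hand-writing the file. Purely to show the shape of what the kubelet is looking for, a sketch with assumed field values:

    # Illustrative only: the file the kubelet wants at /var/lib/kubelet/config.yaml.
    # On a real node kubeadm (or the provisioning tool) generates it; values are assumed.
    config_lines = [
        "apiVersion: kubelet.config.k8s.io/v1beta1",
        "kind: KubeletConfiguration",
        "cgroupDriver: systemd",
    ]
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        f.write("\n".join(config_lines) + "\n")
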
Feb 13 19:18:48.061610 systemd[1]: kubelet.service: Consumed 1.188s CPU time, 237.7M memory peak. Feb 13 19:18:48.195952 sshd[1669]: Accepted publickey for core from 147.75.109.163 port 52660 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:18:48.197795 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:48.205597 systemd-logind[1466]: New session 3 of user core. Feb 13 19:18:48.214943 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:18:48.409335 sshd[1672]: Connection closed by 147.75.109.163 port 52660 Feb 13 19:18:48.410190 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:48.415410 systemd[1]: sshd@2-10.128.0.48:22-147.75.109.163:52660.service: Deactivated successfully. Feb 13 19:18:48.417782 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:18:48.418801 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:18:48.420316 systemd-logind[1466]: Removed session 3. Feb 13 19:18:48.466455 systemd[1]: Started sshd@3-10.128.0.48:22-147.75.109.163:52672.service - OpenSSH per-connection server daemon (147.75.109.163:52672). Feb 13 19:18:48.758082 sshd[1678]: Accepted publickey for core from 147.75.109.163 port 52672 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:18:48.759821 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:48.766677 systemd-logind[1466]: New session 4 of user core. Feb 13 19:18:48.774323 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:18:48.973877 sshd[1680]: Connection closed by 147.75.109.163 port 52672 Feb 13 19:18:48.974745 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:48.979160 systemd[1]: sshd@3-10.128.0.48:22-147.75.109.163:52672.service: Deactivated successfully. Feb 13 19:18:48.981548 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:18:48.983439 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:18:48.984871 systemd-logind[1466]: Removed session 4. Feb 13 19:18:49.040535 systemd[1]: Started sshd@4-10.128.0.48:22-147.75.109.163:52686.service - OpenSSH per-connection server daemon (147.75.109.163:52686). Feb 13 19:18:49.331153 sshd[1686]: Accepted publickey for core from 147.75.109.163 port 52686 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:18:49.332915 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:49.338715 systemd-logind[1466]: New session 5 of user core. Feb 13 19:18:49.350287 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:18:49.525672 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:18:49.526210 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:49.544183 sudo[1689]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:49.586837 sshd[1688]: Connection closed by 147.75.109.163 port 52686 Feb 13 19:18:49.588368 sshd-session[1686]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:49.594086 systemd[1]: sshd@4-10.128.0.48:22-147.75.109.163:52686.service: Deactivated successfully. Feb 13 19:18:49.596578 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:18:49.597734 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. 
Feb 13 19:18:49.599240 systemd-logind[1466]: Removed session 5. Feb 13 19:18:49.651537 systemd[1]: Started sshd@5-10.128.0.48:22-147.75.109.163:51890.service - OpenSSH per-connection server daemon (147.75.109.163:51890). Feb 13 19:18:49.942861 sshd[1695]: Accepted publickey for core from 147.75.109.163 port 51890 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:18:49.944911 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:49.950656 systemd-logind[1466]: New session 6 of user core. Feb 13 19:18:49.962277 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:18:50.123530 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:18:50.124065 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:50.128887 sudo[1699]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:50.142899 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:18:50.143414 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:50.159557 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:18:50.199611 augenrules[1721]: No rules Feb 13 19:18:50.201621 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:18:50.201954 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:18:50.203846 sudo[1698]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:50.246832 sshd[1697]: Connection closed by 147.75.109.163 port 51890 Feb 13 19:18:50.247646 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:50.253464 systemd[1]: sshd@5-10.128.0.48:22-147.75.109.163:51890.service: Deactivated successfully. Feb 13 19:18:50.255935 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:18:50.256945 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:18:50.258355 systemd-logind[1466]: Removed session 6. Feb 13 19:18:50.304500 systemd[1]: Started sshd@6-10.128.0.48:22-147.75.109.163:51896.service - OpenSSH per-connection server daemon (147.75.109.163:51896). Feb 13 19:18:50.594321 sshd[1730]: Accepted publickey for core from 147.75.109.163 port 51896 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:18:50.596131 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:50.601931 systemd-logind[1466]: New session 7 of user core. Feb 13 19:18:50.609340 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:18:50.771949 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:18:50.772474 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:51.239496 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:18:51.241289 (dockerd)[1750]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:18:51.676710 dockerd[1750]: time="2025-02-13T19:18:51.676538983Z" level=info msg="Starting up" Feb 13 19:18:51.878225 dockerd[1750]: time="2025-02-13T19:18:51.878165391Z" level=info msg="Loading containers: start." 
Feb 13 19:18:52.095081 kernel: Initializing XFRM netlink socket Feb 13 19:18:52.211466 systemd-networkd[1388]: docker0: Link UP Feb 13 19:18:52.241815 dockerd[1750]: time="2025-02-13T19:18:52.241756769Z" level=info msg="Loading containers: done." Feb 13 19:18:52.261858 dockerd[1750]: time="2025-02-13T19:18:52.261790684Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:18:52.262061 dockerd[1750]: time="2025-02-13T19:18:52.261938262Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:18:52.262196 dockerd[1750]: time="2025-02-13T19:18:52.262155570Z" level=info msg="Daemon has completed initialization" Feb 13 19:18:52.304378 dockerd[1750]: time="2025-02-13T19:18:52.304309653Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:18:52.304825 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:18:53.218702 containerd[1489]: time="2025-02-13T19:18:53.218650809Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 19:18:53.731954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760275206.mount: Deactivated successfully. Feb 13 19:18:55.254076 containerd[1489]: time="2025-02-13T19:18:55.253979385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:55.256180 containerd[1489]: time="2025-02-13T19:18:55.256110829Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27983216" Feb 13 19:18:55.257391 containerd[1489]: time="2025-02-13T19:18:55.257262265Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:55.261719 containerd[1489]: time="2025-02-13T19:18:55.261630693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:55.264023 containerd[1489]: time="2025-02-13T19:18:55.263321693Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 2.044607803s" Feb 13 19:18:55.264023 containerd[1489]: time="2025-02-13T19:18:55.263384385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 19:18:55.266223 containerd[1489]: time="2025-02-13T19:18:55.266166677Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 19:18:56.693525 containerd[1489]: time="2025-02-13T19:18:56.693447436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:56.695128 containerd[1489]: time="2025-02-13T19:18:56.695038973Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24710127" Feb 13 19:18:56.696524 containerd[1489]: time="2025-02-13T19:18:56.696444441Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:56.700198 containerd[1489]: time="2025-02-13T19:18:56.700132633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:56.701897 containerd[1489]: time="2025-02-13T19:18:56.701719694Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.43550868s" Feb 13 19:18:56.701897 containerd[1489]: time="2025-02-13T19:18:56.701769321Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 19:18:56.702752 containerd[1489]: time="2025-02-13T19:18:56.702722650Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 19:18:57.958432 containerd[1489]: time="2025-02-13T19:18:57.958366424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:57.960366 containerd[1489]: time="2025-02-13T19:18:57.960295661Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18654341" Feb 13 19:18:57.961792 containerd[1489]: time="2025-02-13T19:18:57.961724098Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:57.965402 containerd[1489]: time="2025-02-13T19:18:57.965328212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:57.967000 containerd[1489]: time="2025-02-13T19:18:57.966833853Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.263888678s" Feb 13 19:18:57.967000 containerd[1489]: time="2025-02-13T19:18:57.966882864Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 19:18:57.967794 containerd[1489]: time="2025-02-13T19:18:57.967630673Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 19:18:58.213519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:18:58.222371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:18:58.495403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:58.508686 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:18:58.561728 kubelet[2007]: E0213 19:18:58.561605 2007 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:18:58.565310 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:18:58.565482 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:18:58.565923 systemd[1]: kubelet.service: Consumed 173ms CPU time, 97.5M memory peak. Feb 13 19:18:59.473601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4055456467.mount: Deactivated successfully. Feb 13 19:19:00.104081 containerd[1489]: time="2025-02-13T19:19:00.103994487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:00.105429 containerd[1489]: time="2025-02-13T19:19:00.105355534Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30231003" Feb 13 19:19:00.107186 containerd[1489]: time="2025-02-13T19:19:00.107110664Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:00.110450 containerd[1489]: time="2025-02-13T19:19:00.110362142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:00.111451 containerd[1489]: time="2025-02-13T19:19:00.111239382Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.14331779s" Feb 13 19:19:00.111451 containerd[1489]: time="2025-02-13T19:19:00.111288966Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 19:19:00.112432 containerd[1489]: time="2025-02-13T19:19:00.112330097Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:19:00.568852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161767798.mount: Deactivated successfully. 
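
The pull lines above report both the image size and the wall time, so the effective download rate falls straight out of the logged numbers; a quick check for the kube-proxy pull:

    # Rough throughput for the kube-proxy pull above, from the logged size and duration.
    size_bytes = 30_228_127        # reported size of kube-proxy v1.31.6
    duration_s = 2.14331779        # "in 2.14331779s"
    print(f"{size_bytes / duration_s / 1e6:.1f} MB/s")   # ≈ 14.1 MB/s
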
Feb 13 19:19:01.656215 containerd[1489]: time="2025-02-13T19:19:01.656130989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:01.660579 containerd[1489]: time="2025-02-13T19:19:01.660200631Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Feb 13 19:19:01.660579 containerd[1489]: time="2025-02-13T19:19:01.660381408Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:01.668192 containerd[1489]: time="2025-02-13T19:19:01.668103033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:01.669854 containerd[1489]: time="2025-02-13T19:19:01.669581768Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.557209499s" Feb 13 19:19:01.669854 containerd[1489]: time="2025-02-13T19:19:01.669635250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:19:01.670549 containerd[1489]: time="2025-02-13T19:19:01.670492915Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:19:02.079524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2370850364.mount: Deactivated successfully. 
Feb 13 19:19:02.087463 containerd[1489]: time="2025-02-13T19:19:02.087387629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:02.088736 containerd[1489]: time="2025-02-13T19:19:02.088656409Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Feb 13 19:19:02.090408 containerd[1489]: time="2025-02-13T19:19:02.090328182Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:02.093539 containerd[1489]: time="2025-02-13T19:19:02.093461451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:02.094840 containerd[1489]: time="2025-02-13T19:19:02.094653646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 423.973483ms" Feb 13 19:19:02.094840 containerd[1489]: time="2025-02-13T19:19:02.094707140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:19:02.095850 containerd[1489]: time="2025-02-13T19:19:02.095540389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 19:19:02.532423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3574647566.mount: Deactivated successfully. Feb 13 19:19:05.042063 containerd[1489]: time="2025-02-13T19:19:05.041980606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:05.043865 containerd[1489]: time="2025-02-13T19:19:05.043784660Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556" Feb 13 19:19:05.045085 containerd[1489]: time="2025-02-13T19:19:05.045008378Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:05.049196 containerd[1489]: time="2025-02-13T19:19:05.049111002Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:05.051281 containerd[1489]: time="2025-02-13T19:19:05.050733485Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.955152387s" Feb 13 19:19:05.051281 containerd[1489]: time="2025-02-13T19:19:05.050782305Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 19:19:08.713705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:19:08.723415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:09.008354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:09.012448 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:19:09.088626 kubelet[2151]: E0213 19:19:09.088526 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:19:09.093075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:19:09.093517 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:19:09.094339 systemd[1]: kubelet.service: Consumed 198ms CPU time, 94M memory peak. Feb 13 19:19:09.111971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:09.112601 systemd[1]: kubelet.service: Consumed 198ms CPU time, 94M memory peak. Feb 13 19:19:09.122525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:09.174118 systemd[1]: Reload requested from client PID 2165 ('systemctl') (unit session-7.scope)... Feb 13 19:19:09.174141 systemd[1]: Reloading... Feb 13 19:19:09.317240 zram_generator::config[2211]: No configuration found. Feb 13 19:19:09.491770 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:19:09.634273 systemd[1]: Reloading finished in 459 ms. Feb 13 19:19:09.697077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:09.708921 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:19:09.714723 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:09.715425 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:19:09.715784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:09.715859 systemd[1]: kubelet.service: Consumed 133ms CPU time, 84.5M memory peak. Feb 13 19:19:09.722647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:09.990552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:09.998350 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:19:10.054747 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:19:10.054747 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:19:10.054747 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:19:10.056545 kubelet[2265]: I0213 19:19:10.054827 2265 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:19:10.768085 kubelet[2265]: I0213 19:19:10.768010 2265 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:19:10.768085 kubelet[2265]: I0213 19:19:10.768064 2265 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:19:10.768516 kubelet[2265]: I0213 19:19:10.768473 2265 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:19:10.812526 kubelet[2265]: I0213 19:19:10.811430 2265 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:19:10.812526 kubelet[2265]: E0213 19:19:10.812465 2265 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:10.824544 kubelet[2265]: E0213 19:19:10.824481 2265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:19:10.824731 kubelet[2265]: I0213 19:19:10.824578 2265 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:19:10.831128 kubelet[2265]: I0213 19:19:10.831075 2265 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:19:10.831306 kubelet[2265]: I0213 19:19:10.831253 2265 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:19:10.831533 kubelet[2265]: I0213 19:19:10.831492 2265 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:19:10.831812 kubelet[2265]: I0213 19:19:10.831533 2265 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:19:10.831812 kubelet[2265]: I0213 19:19:10.831808 2265 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:19:10.832014 kubelet[2265]: I0213 19:19:10.831828 2265 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:19:10.832014 kubelet[2265]: I0213 19:19:10.831995 2265 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:19:10.836033 kubelet[2265]: I0213 19:19:10.835964 2265 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:19:10.836033 kubelet[2265]: I0213 19:19:10.836015 2265 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:19:10.836209 kubelet[2265]: I0213 19:19:10.836094 2265 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:19:10.836209 kubelet[2265]: I0213 19:19:10.836117 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:19:10.845613 kubelet[2265]: W0213 19:19:10.845531 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Feb 13 19:19:10.845773 kubelet[2265]: E0213 19:19:10.845627 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:10.846974 kubelet[2265]: W0213 19:19:10.846608 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Feb 13 19:19:10.846974 kubelet[2265]: E0213 19:19:10.846681 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:10.846974 kubelet[2265]: I0213 19:19:10.846804 2265 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:19:10.849619 kubelet[2265]: I0213 19:19:10.849571 2265 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:19:10.851075 kubelet[2265]: W0213 19:19:10.850919 2265 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:19:10.854373 kubelet[2265]: I0213 19:19:10.854330 2265 server.go:1269] "Started kubelet" Feb 13 19:19:10.855682 kubelet[2265]: I0213 19:19:10.855615 2265 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:19:10.857159 kubelet[2265]: I0213 19:19:10.856878 2265 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:19:10.863525 kubelet[2265]: I0213 19:19:10.863445 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:19:10.864203 kubelet[2265]: I0213 19:19:10.863855 2265 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:19:10.868071 kubelet[2265]: I0213 19:19:10.868025 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:19:10.872071 kubelet[2265]: E0213 19:19:10.867809 2265 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal.1823dab4ba92e702 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,UID:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 19:19:10.854301442 +0000 UTC m=+0.849122289,LastTimestamp:2025-02-13 19:19:10.854301442 +0000 UTC m=+0.849122289,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,}" Feb 13 19:19:10.873114 kubelet[2265]: I0213 19:19:10.873086 2265 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:19:10.874028 kubelet[2265]: I0213 19:19:10.874004 2265 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:19:10.874400 kubelet[2265]: E0213 19:19:10.874374 2265 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" not found" Feb 13 19:19:10.878946 kubelet[2265]: I0213 19:19:10.878919 2265 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:19:10.881435 kubelet[2265]: I0213 19:19:10.880066 2265 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:19:10.881802 kubelet[2265]: W0213 19:19:10.881741 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Feb 13 19:19:10.881954 kubelet[2265]: E0213 19:19:10.881930 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:10.882224 kubelet[2265]: E0213 19:19:10.882167 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="200ms" Feb 13 19:19:10.884630 kubelet[2265]: I0213 19:19:10.884604 2265 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:19:10.884757 kubelet[2265]: I0213 19:19:10.884743 2265 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:19:10.884936 kubelet[2265]: I0213 19:19:10.884913 2265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:19:10.899622 kubelet[2265]: I0213 19:19:10.899553 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:19:10.901874 kubelet[2265]: I0213 19:19:10.901819 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:19:10.901874 kubelet[2265]: I0213 19:19:10.901868 2265 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:19:10.902066 kubelet[2265]: I0213 19:19:10.901902 2265 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:19:10.902066 kubelet[2265]: E0213 19:19:10.901967 2265 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:19:10.913473 kubelet[2265]: E0213 19:19:10.913144 2265 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:19:10.913473 kubelet[2265]: W0213 19:19:10.913302 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Feb 13 19:19:10.913473 kubelet[2265]: E0213 19:19:10.913356 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:10.930484 kubelet[2265]: I0213 19:19:10.930337 2265 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:19:10.930484 kubelet[2265]: I0213 19:19:10.930366 2265 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:19:10.930484 kubelet[2265]: I0213 19:19:10.930397 2265 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:19:10.934209 kubelet[2265]: I0213 19:19:10.934168 2265 policy_none.go:49] "None policy: Start" Feb 13 19:19:10.935077 kubelet[2265]: I0213 19:19:10.935037 2265 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:19:10.935231 kubelet[2265]: I0213 19:19:10.935148 2265 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:19:10.944291 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:19:10.955643 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:19:10.960327 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:19:10.970596 kubelet[2265]: I0213 19:19:10.970547 2265 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:19:10.971694 kubelet[2265]: I0213 19:19:10.970841 2265 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:19:10.971694 kubelet[2265]: I0213 19:19:10.970862 2265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:19:10.971694 kubelet[2265]: I0213 19:19:10.971477 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:19:10.973817 kubelet[2265]: E0213 19:19:10.973791 2265 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" not found" Feb 13 19:19:11.038510 systemd[1]: Created slice kubepods-burstable-pod4e8abdcb056e0b2b721da0d5282b6b7f.slice - libcontainer container kubepods-burstable-pod4e8abdcb056e0b2b721da0d5282b6b7f.slice. Feb 13 19:19:11.053100 systemd[1]: Created slice kubepods-burstable-pod807d8b0cf5038412cca0c90fcdab4bd5.slice - libcontainer container kubepods-burstable-pod807d8b0cf5038412cca0c90fcdab4bd5.slice. Feb 13 19:19:11.069099 systemd[1]: Created slice kubepods-burstable-pod990fb438efefbd17b6d9c3055357366e.slice - libcontainer container kubepods-burstable-pod990fb438efefbd17b6d9c3055357366e.slice. 
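The repeated "dial tcp 10.128.0.48:6443: connect: connection refused" reflector errors above are expected at this stage: the kube-apiserver static pod has not been created yet, and the slice creation here is the first step toward starting the control-plane pods. A minimal reachability probe for that endpoint; the address is copied from the log, the probe itself is illustrative:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Illustrative TCP probe of the endpoint the kubelet's reflectors keep
    // failing against; the address is copied from the log entries above.
    func main() {
        conn, err := net.DialTimeout("tcp", "10.128.0.48:6443", 2*time.Second)
        if err != nil {
            fmt.Printf("apiserver not reachable yet: %v\n", err)
            return
        }
        defer conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }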
Feb 13 19:19:11.077202 kubelet[2265]: I0213 19:19:11.077160 2265 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.077757 kubelet[2265]: E0213 19:19:11.077576 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.080834 kubelet[2265]: I0213 19:19:11.080788 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e8abdcb056e0b2b721da0d5282b6b7f-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"4e8abdcb056e0b2b721da0d5282b6b7f\") " pod="kube-system/kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.080985 kubelet[2265]: I0213 19:19:11.080868 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e8abdcb056e0b2b721da0d5282b6b7f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"4e8abdcb056e0b2b721da0d5282b6b7f\") " pod="kube-system/kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.080985 kubelet[2265]: I0213 19:19:11.080907 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.080985 kubelet[2265]: I0213 19:19:11.080935 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.080985 kubelet[2265]: I0213 19:19:11.080965 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.081227 kubelet[2265]: I0213 19:19:11.080993 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e8abdcb056e0b2b721da0d5282b6b7f-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"4e8abdcb056e0b2b721da0d5282b6b7f\") " pod="kube-system/kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.081227 kubelet[2265]: I0213 19:19:11.081024 2265 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.081227 kubelet[2265]: I0213 19:19:11.081189 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.081227 kubelet[2265]: I0213 19:19:11.081223 2265 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/990fb438efefbd17b6d9c3055357366e-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"990fb438efefbd17b6d9c3055357366e\") " pod="kube-system/kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.083773 kubelet[2265]: E0213 19:19:11.083715 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="400ms" Feb 13 19:19:11.282597 kubelet[2265]: I0213 19:19:11.282547 2265 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.283123 kubelet[2265]: E0213 19:19:11.283068 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.347800 containerd[1489]: time="2025-02-13T19:19:11.347642962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,Uid:4e8abdcb056e0b2b721da0d5282b6b7f,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:11.366267 containerd[1489]: time="2025-02-13T19:19:11.366132503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,Uid:807d8b0cf5038412cca0c90fcdab4bd5,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:11.375419 containerd[1489]: time="2025-02-13T19:19:11.374987318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,Uid:990fb438efefbd17b6d9c3055357366e,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:11.484374 kubelet[2265]: E0213 19:19:11.484299 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="800ms" Feb 13 19:19:11.689505 
kubelet[2265]: I0213 19:19:11.689371 2265 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.689911 kubelet[2265]: E0213 19:19:11.689830 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:11.704624 kubelet[2265]: W0213 19:19:11.704528 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Feb 13 19:19:11.704767 kubelet[2265]: E0213 19:19:11.704631 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.48:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:11.747289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505497732.mount: Deactivated successfully. Feb 13 19:19:11.756930 containerd[1489]: time="2025-02-13T19:19:11.756853014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:11.759776 containerd[1489]: time="2025-02-13T19:19:11.759576482Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:11.759899 kubelet[2265]: W0213 19:19:11.759600 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Feb 13 19:19:11.759899 kubelet[2265]: E0213 19:19:11.759691 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:11.761963 containerd[1489]: time="2025-02-13T19:19:11.761902260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 19:19:11.763031 containerd[1489]: time="2025-02-13T19:19:11.762963723Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:19:11.765589 containerd[1489]: time="2025-02-13T19:19:11.765509918Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:11.767904 containerd[1489]: time="2025-02-13T19:19:11.767688888Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:19:11.767904 containerd[1489]: 
time="2025-02-13T19:19:11.767813835Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:11.768744 kubelet[2265]: W0213 19:19:11.768598 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Feb 13 19:19:11.768744 kubelet[2265]: E0213 19:19:11.768660 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:11.771018 containerd[1489]: time="2025-02-13T19:19:11.770976375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:19:11.774093 containerd[1489]: time="2025-02-13T19:19:11.773408669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 425.624424ms" Feb 13 19:19:11.776059 containerd[1489]: time="2025-02-13T19:19:11.775984976Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 409.712815ms" Feb 13 19:19:11.786006 containerd[1489]: time="2025-02-13T19:19:11.785917749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 410.792232ms" Feb 13 19:19:11.991130 containerd[1489]: time="2025-02-13T19:19:11.990982292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:11.991130 containerd[1489]: time="2025-02-13T19:19:11.991081217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:11.991691 containerd[1489]: time="2025-02-13T19:19:11.991108846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:11.991691 containerd[1489]: time="2025-02-13T19:19:11.991480366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:11.992379 containerd[1489]: time="2025-02-13T19:19:11.992282413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:11.992549 containerd[1489]: time="2025-02-13T19:19:11.992509334Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:11.992715 containerd[1489]: time="2025-02-13T19:19:11.992668197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:11.993104 containerd[1489]: time="2025-02-13T19:19:11.992962367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:11.995363 containerd[1489]: time="2025-02-13T19:19:11.993761787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:11.995559 containerd[1489]: time="2025-02-13T19:19:11.993841483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:11.996956 containerd[1489]: time="2025-02-13T19:19:11.996690903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:11.996956 containerd[1489]: time="2025-02-13T19:19:11.996832069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:12.039342 systemd[1]: Started cri-containerd-fb07a2f27e87a0b514511ebb28e5302d92ed025ca22d493738f927e2351bcba1.scope - libcontainer container fb07a2f27e87a0b514511ebb28e5302d92ed025ca22d493738f927e2351bcba1. Feb 13 19:19:12.048744 systemd[1]: Started cri-containerd-012eb1ed589f1f51b573b2ea8b5cdf5de97aa3ba274487e3452dc257b9b39120.scope - libcontainer container 012eb1ed589f1f51b573b2ea8b5cdf5de97aa3ba274487e3452dc257b9b39120. Feb 13 19:19:12.051499 systemd[1]: Started cri-containerd-ee9fdaaaec86e8c7a858fca213a609fb4d4478d9427616724339f9d1ba18af2a.scope - libcontainer container ee9fdaaaec86e8c7a858fca213a609fb4d4478d9427616724339f9d1ba18af2a. 
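The lease controller's retry interval doubles on each failure: 200ms, then 400ms, then 800ms in the entries above, and 1.6s later in the log. An illustrative doubling-backoff sketch that reproduces those intervals; it is not the kubelet's actual retry code:

    package main

    import (
        "fmt"
        "time"
    )

    // Illustrative doubling backoff matching the intervals logged by the
    // lease controller (200ms, 400ms, 800ms, 1.6s).
    func main() {
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 4; attempt++ {
            fmt.Printf("retry %d after %v\n", attempt, interval)
            interval *= 2
        }
    }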
Feb 13 19:19:12.154975 containerd[1489]: time="2025-02-13T19:19:12.154897995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,Uid:807d8b0cf5038412cca0c90fcdab4bd5,Namespace:kube-system,Attempt:0,} returns sandbox id \"012eb1ed589f1f51b573b2ea8b5cdf5de97aa3ba274487e3452dc257b9b39120\"" Feb 13 19:19:12.167903 kubelet[2265]: E0213 19:19:12.167849 2265 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flat" Feb 13 19:19:12.173150 containerd[1489]: time="2025-02-13T19:19:12.172518565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,Uid:4e8abdcb056e0b2b721da0d5282b6b7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb07a2f27e87a0b514511ebb28e5302d92ed025ca22d493738f927e2351bcba1\"" Feb 13 19:19:12.173150 containerd[1489]: time="2025-02-13T19:19:12.172753114Z" level=info msg="CreateContainer within sandbox \"012eb1ed589f1f51b573b2ea8b5cdf5de97aa3ba274487e3452dc257b9b39120\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:19:12.177153 kubelet[2265]: E0213 19:19:12.176699 2265 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-21291" Feb 13 19:19:12.179913 containerd[1489]: time="2025-02-13T19:19:12.179113927Z" level=info msg="CreateContainer within sandbox \"fb07a2f27e87a0b514511ebb28e5302d92ed025ca22d493738f927e2351bcba1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:19:12.189473 containerd[1489]: time="2025-02-13T19:19:12.189380101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal,Uid:990fb438efefbd17b6d9c3055357366e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee9fdaaaec86e8c7a858fca213a609fb4d4478d9427616724339f9d1ba18af2a\"" Feb 13 19:19:12.191417 kubelet[2265]: E0213 19:19:12.191353 2265 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-21291" Feb 13 19:19:12.193481 containerd[1489]: time="2025-02-13T19:19:12.193438591Z" level=info msg="CreateContainer within sandbox \"ee9fdaaaec86e8c7a858fca213a609fb4d4478d9427616724339f9d1ba18af2a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:19:12.202876 containerd[1489]: time="2025-02-13T19:19:12.202812762Z" level=info msg="CreateContainer within sandbox \"012eb1ed589f1f51b573b2ea8b5cdf5de97aa3ba274487e3452dc257b9b39120\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7f2cda28f6dc62f7dadd022b9094ca539fc30bd6b4e17971d4fb65de3e17ba2\"" Feb 13 19:19:12.204028 containerd[1489]: time="2025-02-13T19:19:12.203939758Z" level=info msg="StartContainer for \"c7f2cda28f6dc62f7dadd022b9094ca539fc30bd6b4e17971d4fb65de3e17ba2\"" Feb 13 19:19:12.211207 containerd[1489]: time="2025-02-13T19:19:12.211033574Z" level=info msg="CreateContainer within sandbox 
\"fb07a2f27e87a0b514511ebb28e5302d92ed025ca22d493738f927e2351bcba1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7756a6cce5fe06c56010e8449feda04b8a9caef2353bafaa92e490e74dd5467a\"" Feb 13 19:19:12.211938 containerd[1489]: time="2025-02-13T19:19:12.211901790Z" level=info msg="StartContainer for \"7756a6cce5fe06c56010e8449feda04b8a9caef2353bafaa92e490e74dd5467a\"" Feb 13 19:19:12.224007 containerd[1489]: time="2025-02-13T19:19:12.223949506Z" level=info msg="CreateContainer within sandbox \"ee9fdaaaec86e8c7a858fca213a609fb4d4478d9427616724339f9d1ba18af2a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"015b8dc227ebdb973aa187c901593a2366fc6774a10e62e3cfb4890039bb9c1b\"" Feb 13 19:19:12.227084 containerd[1489]: time="2025-02-13T19:19:12.226998879Z" level=info msg="StartContainer for \"015b8dc227ebdb973aa187c901593a2366fc6774a10e62e3cfb4890039bb9c1b\"" Feb 13 19:19:12.228487 kubelet[2265]: W0213 19:19:12.228397 2265 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.48:6443: connect: connection refused Feb 13 19:19:12.228631 kubelet[2265]: E0213 19:19:12.228499 2265 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.48:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:19:12.265410 systemd[1]: Started cri-containerd-c7f2cda28f6dc62f7dadd022b9094ca539fc30bd6b4e17971d4fb65de3e17ba2.scope - libcontainer container c7f2cda28f6dc62f7dadd022b9094ca539fc30bd6b4e17971d4fb65de3e17ba2. Feb 13 19:19:12.283335 systemd[1]: Started cri-containerd-7756a6cce5fe06c56010e8449feda04b8a9caef2353bafaa92e490e74dd5467a.scope - libcontainer container 7756a6cce5fe06c56010e8449feda04b8a9caef2353bafaa92e490e74dd5467a. Feb 13 19:19:12.287813 kubelet[2265]: E0213 19:19:12.287133 2265 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.48:6443: connect: connection refused" interval="1.6s" Feb 13 19:19:12.311353 systemd[1]: Started cri-containerd-015b8dc227ebdb973aa187c901593a2366fc6774a10e62e3cfb4890039bb9c1b.scope - libcontainer container 015b8dc227ebdb973aa187c901593a2366fc6774a10e62e3cfb4890039bb9c1b. 
Feb 13 19:19:12.407871 containerd[1489]: time="2025-02-13T19:19:12.407542591Z" level=info msg="StartContainer for \"c7f2cda28f6dc62f7dadd022b9094ca539fc30bd6b4e17971d4fb65de3e17ba2\" returns successfully" Feb 13 19:19:12.419211 containerd[1489]: time="2025-02-13T19:19:12.416287634Z" level=info msg="StartContainer for \"7756a6cce5fe06c56010e8449feda04b8a9caef2353bafaa92e490e74dd5467a\" returns successfully" Feb 13 19:19:12.428391 containerd[1489]: time="2025-02-13T19:19:12.428120846Z" level=info msg="StartContainer for \"015b8dc227ebdb973aa187c901593a2366fc6774a10e62e3cfb4890039bb9c1b\" returns successfully" Feb 13 19:19:12.498174 kubelet[2265]: I0213 19:19:12.497630 2265 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:12.498540 kubelet[2265]: E0213 19:19:12.498497 2265 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.48:6443/api/v1/nodes\": dial tcp 10.128.0.48:6443: connect: connection refused" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:14.107195 kubelet[2265]: I0213 19:19:14.104788 2265 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:15.287711 kubelet[2265]: E0213 19:19:15.287643 2265 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" not found" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:15.333831 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 19:19:15.357161 kubelet[2265]: I0213 19:19:15.357114 2265 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:15.516368 kubelet[2265]: E0213 19:19:15.516301 2265 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:15.849749 kubelet[2265]: I0213 19:19:15.849426 2265 apiserver.go:52] "Watching apiserver" Feb 13 19:19:15.874409 kubelet[2265]: I0213 19:19:15.874359 2265 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:19:17.495567 systemd[1]: Reload requested from client PID 2539 ('systemctl') (unit session-7.scope)... Feb 13 19:19:17.496119 systemd[1]: Reloading... Feb 13 19:19:17.660084 zram_generator::config[2580]: No configuration found. Feb 13 19:19:17.823680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:19:17.996260 systemd[1]: Reloading finished in 499 ms. Feb 13 19:19:18.039094 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:19:18.058838 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:19:18.059194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:18.059280 systemd[1]: kubelet.service: Consumed 1.359s CPU time, 117.7M memory peak. Feb 13 19:19:18.066859 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
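After the reload, systemd starts a fresh kubelet (PID 2632 below), which logs the same single-line Container Manager nodeConfig JSON seen earlier. A small illustrative way to make such a blob readable is to re-indent it; the fragment here is one eviction threshold copied from that line and shortened:

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
    )

    // Re-indents a fragment of the single-line nodeConfig JSON the kubelet
    // logs; the fragment is copied from the log above.
    func main() {
        raw := []byte(`{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}`)
        var out bytes.Buffer
        if err := json.Indent(&out, raw, "", "  "); err != nil {
            panic(err)
        }
        fmt.Println(out.String())
    }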
Feb 13 19:19:18.353200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:19:18.366644 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:19:18.432664 kubelet[2632]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:19:18.432664 kubelet[2632]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:19:18.432664 kubelet[2632]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:19:18.433269 kubelet[2632]: I0213 19:19:18.432817 2632 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:19:18.445038 kubelet[2632]: I0213 19:19:18.444989 2632 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:19:18.445038 kubelet[2632]: I0213 19:19:18.445030 2632 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:19:18.445519 kubelet[2632]: I0213 19:19:18.445480 2632 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:19:18.448316 kubelet[2632]: I0213 19:19:18.448247 2632 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:19:18.451693 kubelet[2632]: I0213 19:19:18.451656 2632 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:19:18.458932 kubelet[2632]: E0213 19:19:18.458843 2632 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:19:18.459174 kubelet[2632]: I0213 19:19:18.459021 2632 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:19:18.465982 kubelet[2632]: I0213 19:19:18.465935 2632 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:19:18.466988 kubelet[2632]: I0213 19:19:18.466151 2632 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:19:18.466988 kubelet[2632]: I0213 19:19:18.466354 2632 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:19:18.466988 kubelet[2632]: I0213 19:19:18.466393 2632 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:19:18.467331 kubelet[2632]: I0213 19:19:18.466813 2632 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:19:18.467331 kubelet[2632]: I0213 19:19:18.466838 2632 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:19:18.467331 kubelet[2632]: I0213 19:19:18.466886 2632 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:19:18.471079 kubelet[2632]: I0213 19:19:18.470109 2632 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:19:18.471079 kubelet[2632]: I0213 19:19:18.470158 2632 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:19:18.471079 kubelet[2632]: I0213 19:19:18.470207 2632 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:19:18.471079 kubelet[2632]: I0213 19:19:18.470231 2632 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:19:18.481486 kubelet[2632]: I0213 19:19:18.481447 2632 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:19:18.482863 kubelet[2632]: I0213 19:19:18.482123 2632 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:19:18.482863 kubelet[2632]: I0213 19:19:18.482773 2632 server.go:1269] "Started kubelet" Feb 13 19:19:18.489070 kubelet[2632]: I0213 19:19:18.488468 2632 server.go:163] "Starting to listen" address="0.0.0.0" 
port=10250 Feb 13 19:19:18.493113 kubelet[2632]: I0213 19:19:18.493065 2632 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:19:18.497184 sudo[2645]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 13 19:19:18.498188 sudo[2645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Feb 13 19:19:18.500317 kubelet[2632]: I0213 19:19:18.499962 2632 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:19:18.500671 kubelet[2632]: I0213 19:19:18.500653 2632 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:19:18.502210 kubelet[2632]: I0213 19:19:18.502191 2632 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:19:18.520567 kubelet[2632]: I0213 19:19:18.520361 2632 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:19:18.530151 kubelet[2632]: I0213 19:19:18.529329 2632 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:19:18.530151 kubelet[2632]: E0213 19:19:18.529667 2632 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" not found" Feb 13 19:19:18.532957 kubelet[2632]: I0213 19:19:18.530506 2632 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:19:18.532957 kubelet[2632]: I0213 19:19:18.530714 2632 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:19:18.545074 kubelet[2632]: I0213 19:19:18.544341 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:19:18.547206 kubelet[2632]: I0213 19:19:18.547171 2632 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:19:18.547334 kubelet[2632]: I0213 19:19:18.547220 2632 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:19:18.547334 kubelet[2632]: I0213 19:19:18.547244 2632 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:19:18.547334 kubelet[2632]: E0213 19:19:18.547306 2632 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:19:18.561784 kubelet[2632]: I0213 19:19:18.560088 2632 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:19:18.561784 kubelet[2632]: I0213 19:19:18.560224 2632 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:19:18.569657 kubelet[2632]: E0213 19:19:18.569619 2632 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:19:18.570732 kubelet[2632]: I0213 19:19:18.570700 2632 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:19:18.649145 kubelet[2632]: E0213 19:19:18.647395 2632 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:19:18.650981 kubelet[2632]: I0213 19:19:18.650956 2632 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:19:18.651732 kubelet[2632]: I0213 19:19:18.651165 2632 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:19:18.651732 kubelet[2632]: I0213 19:19:18.651215 2632 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:19:18.651732 kubelet[2632]: I0213 19:19:18.651512 2632 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:19:18.651732 kubelet[2632]: I0213 19:19:18.651581 2632 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:19:18.651732 kubelet[2632]: I0213 19:19:18.651610 2632 policy_none.go:49] "None policy: Start" Feb 13 19:19:18.654746 kubelet[2632]: I0213 19:19:18.654724 2632 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:19:18.654895 kubelet[2632]: I0213 19:19:18.654884 2632 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:19:18.655777 kubelet[2632]: I0213 19:19:18.655756 2632 state_mem.go:75] "Updated machine memory state" Feb 13 19:19:18.673603 kubelet[2632]: I0213 19:19:18.672003 2632 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:19:18.673603 kubelet[2632]: I0213 19:19:18.672252 2632 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:19:18.673603 kubelet[2632]: I0213 19:19:18.672267 2632 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:19:18.673603 kubelet[2632]: I0213 19:19:18.673002 2632 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:19:18.797406 kubelet[2632]: I0213 19:19:18.797230 2632 kubelet_node_status.go:72] "Attempting to register node" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.810071 kubelet[2632]: I0213 19:19:18.810015 2632 kubelet_node_status.go:111] "Node was previously registered" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.810383 kubelet[2632]: I0213 19:19:18.810360 2632 kubelet_node_status.go:75] "Successfully registered node" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.862664 kubelet[2632]: W0213 19:19:18.862620 2632 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:19:18.864420 kubelet[2632]: W0213 19:19:18.863961 2632 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:19:18.865485 kubelet[2632]: W0213 19:19:18.865121 2632 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Feb 13 19:19:18.933573 kubelet[2632]: I0213 19:19:18.933028 2632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.933573 kubelet[2632]: I0213 19:19:18.933266 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-ca-certs\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.933573 kubelet[2632]: I0213 19:19:18.933363 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-flexvolume-dir\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.935098 kubelet[2632]: I0213 19:19:18.933394 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-k8s-certs\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.935098 kubelet[2632]: I0213 19:19:18.934005 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/807d8b0cf5038412cca0c90fcdab4bd5-kubeconfig\") pod \"kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"807d8b0cf5038412cca0c90fcdab4bd5\") " pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.935098 kubelet[2632]: I0213 19:19:18.934103 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/990fb438efefbd17b6d9c3055357366e-kubeconfig\") pod \"kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"990fb438efefbd17b6d9c3055357366e\") " pod="kube-system/kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.935688 kubelet[2632]: I0213 19:19:18.934134 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e8abdcb056e0b2b721da0d5282b6b7f-ca-certs\") pod \"kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"4e8abdcb056e0b2b721da0d5282b6b7f\") " pod="kube-system/kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.935852 kubelet[2632]: I0213 19:19:18.935826 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e8abdcb056e0b2b721da0d5282b6b7f-k8s-certs\") pod \"kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"4e8abdcb056e0b2b721da0d5282b6b7f\") " pod="kube-system/kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:18.936000 kubelet[2632]: I0213 19:19:18.935927 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e8abdcb056e0b2b721da0d5282b6b7f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" (UID: \"4e8abdcb056e0b2b721da0d5282b6b7f\") " pod="kube-system/kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" Feb 13 19:19:19.317976 sudo[2645]: pam_unix(sudo:session): session closed for user root Feb 13 19:19:19.475988 kubelet[2632]: I0213 19:19:19.475889 2632 apiserver.go:52] "Watching apiserver" Feb 13 19:19:19.531630 kubelet[2632]: I0213 19:19:19.531568 2632 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:19:19.684909 kubelet[2632]: I0213 19:19:19.684728 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" podStartSLOduration=1.684343116 podStartE2EDuration="1.684343116s" podCreationTimestamp="2025-02-13 19:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:19.661690391 +0000 UTC m=+1.289083830" watchObservedRunningTime="2025-02-13 19:19:19.684343116 +0000 UTC m=+1.311736551" Feb 13 19:19:19.685719 kubelet[2632]: I0213 19:19:19.685302 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" podStartSLOduration=1.685286492 podStartE2EDuration="1.685286492s" podCreationTimestamp="2025-02-13 19:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:19.683954824 +0000 UTC m=+1.311348254" watchObservedRunningTime="2025-02-13 19:19:19.685286492 +0000 UTC m=+1.312679932" Feb 13 19:19:19.723947 kubelet[2632]: I0213 19:19:19.723169 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" podStartSLOduration=1.723143008 podStartE2EDuration="1.723143008s" podCreationTimestamp="2025-02-13 19:19:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:19.701192684 +0000 UTC m=+1.328586127" watchObservedRunningTime="2025-02-13 19:19:19.723143008 +0000 UTC m=+1.350536441" Feb 13 19:19:21.602806 sudo[1733]: pam_unix(sudo:session): session closed for user root Feb 13 19:19:21.644915 sshd[1732]: Connection closed by 147.75.109.163 port 51896 Feb 13 19:19:21.645865 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:21.650625 systemd[1]: sshd@6-10.128.0.48:22-147.75.109.163:51896.service: Deactivated successfully. Feb 13 19:19:21.653598 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 19:19:21.653926 systemd[1]: session-7.scope: Consumed 7.201s CPU time, 260.7M memory peak. Feb 13 19:19:21.657162 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:19:21.659828 systemd-logind[1466]: Removed session 7. Feb 13 19:19:23.825645 kubelet[2632]: I0213 19:19:23.825605 2632 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:19:23.826262 containerd[1489]: time="2025-02-13T19:19:23.826029607Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:19:23.826687 kubelet[2632]: I0213 19:19:23.826628 2632 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:19:24.804526 systemd[1]: Created slice kubepods-besteffort-pod4bd29a55_d63d_4b88_8021_876c04a98261.slice - libcontainer container kubepods-besteffort-pod4bd29a55_d63d_4b88_8021_876c04a98261.slice. Feb 13 19:19:24.838104 systemd[1]: Created slice kubepods-burstable-pod8264ec80_d112_4d33_adb6_113c4e34b0b7.slice - libcontainer container kubepods-burstable-pod8264ec80_d112_4d33_adb6_113c4e34b0b7.slice. Feb 13 19:19:24.875506 kubelet[2632]: I0213 19:19:24.875453 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-config-path\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.876814 kubelet[2632]: I0213 19:19:24.876128 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-host-proc-sys-net\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.876814 kubelet[2632]: I0213 19:19:24.876179 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghg4d\" (UniqueName: \"kubernetes.io/projected/4bd29a55-d63d-4b88-8021-876c04a98261-kube-api-access-ghg4d\") pod \"kube-proxy-rgwfn\" (UID: \"4bd29a55-d63d-4b88-8021-876c04a98261\") " pod="kube-system/kube-proxy-rgwfn" Feb 13 19:19:24.876814 kubelet[2632]: I0213 19:19:24.876213 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-bpf-maps\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.876814 kubelet[2632]: I0213 19:19:24.876242 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8whvd\" (UniqueName: \"kubernetes.io/projected/8264ec80-d112-4d33-adb6-113c4e34b0b7-kube-api-access-8whvd\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.876814 kubelet[2632]: I0213 19:19:24.876270 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-run\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.876814 kubelet[2632]: I0213 19:19:24.876297 2632 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cni-path\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.877203 kubelet[2632]: I0213 19:19:24.876322 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-xtables-lock\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.877203 kubelet[2632]: I0213 19:19:24.876351 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8264ec80-d112-4d33-adb6-113c4e34b0b7-clustermesh-secrets\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.877203 kubelet[2632]: I0213 19:19:24.876378 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-etc-cni-netd\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.877203 kubelet[2632]: I0213 19:19:24.876408 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8264ec80-d112-4d33-adb6-113c4e34b0b7-hubble-tls\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.877203 kubelet[2632]: I0213 19:19:24.876436 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-hostproc\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.877203 kubelet[2632]: I0213 19:19:24.876464 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4bd29a55-d63d-4b88-8021-876c04a98261-kube-proxy\") pod \"kube-proxy-rgwfn\" (UID: \"4bd29a55-d63d-4b88-8021-876c04a98261\") " pod="kube-system/kube-proxy-rgwfn" Feb 13 19:19:24.877497 kubelet[2632]: I0213 19:19:24.876491 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bd29a55-d63d-4b88-8021-876c04a98261-xtables-lock\") pod \"kube-proxy-rgwfn\" (UID: \"4bd29a55-d63d-4b88-8021-876c04a98261\") " pod="kube-system/kube-proxy-rgwfn" Feb 13 19:19:24.877497 kubelet[2632]: I0213 19:19:24.876517 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bd29a55-d63d-4b88-8021-876c04a98261-lib-modules\") pod \"kube-proxy-rgwfn\" (UID: \"4bd29a55-d63d-4b88-8021-876c04a98261\") " pod="kube-system/kube-proxy-rgwfn" Feb 13 19:19:24.877497 kubelet[2632]: I0213 19:19:24.876543 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-cgroup\") pod \"cilium-xd6r5\" (UID: 
\"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.877497 kubelet[2632]: I0213 19:19:24.876570 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-lib-modules\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.877497 kubelet[2632]: I0213 19:19:24.876598 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-host-proc-sys-kernel\") pod \"cilium-xd6r5\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " pod="kube-system/cilium-xd6r5" Feb 13 19:19:24.973705 systemd[1]: Created slice kubepods-besteffort-pod185e0745_ec44_435f_88b2_242986d1231b.slice - libcontainer container kubepods-besteffort-pod185e0745_ec44_435f_88b2_242986d1231b.slice. Feb 13 19:19:24.979120 kubelet[2632]: I0213 19:19:24.978192 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7cp9\" (UniqueName: \"kubernetes.io/projected/185e0745-ec44-435f-88b2-242986d1231b-kube-api-access-b7cp9\") pod \"cilium-operator-5d85765b45-p6gh2\" (UID: \"185e0745-ec44-435f-88b2-242986d1231b\") " pod="kube-system/cilium-operator-5d85765b45-p6gh2" Feb 13 19:19:24.979120 kubelet[2632]: I0213 19:19:24.978316 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/185e0745-ec44-435f-88b2-242986d1231b-cilium-config-path\") pod \"cilium-operator-5d85765b45-p6gh2\" (UID: \"185e0745-ec44-435f-88b2-242986d1231b\") " pod="kube-system/cilium-operator-5d85765b45-p6gh2" Feb 13 19:19:25.129136 containerd[1489]: time="2025-02-13T19:19:25.128940621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgwfn,Uid:4bd29a55-d63d-4b88-8021-876c04a98261,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:25.146087 containerd[1489]: time="2025-02-13T19:19:25.145088812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xd6r5,Uid:8264ec80-d112-4d33-adb6-113c4e34b0b7,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:25.176280 containerd[1489]: time="2025-02-13T19:19:25.175094643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:25.176280 containerd[1489]: time="2025-02-13T19:19:25.176211033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:25.177158 containerd[1489]: time="2025-02-13T19:19:25.176870658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:25.177158 containerd[1489]: time="2025-02-13T19:19:25.177013992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:25.211487 systemd[1]: Started cri-containerd-805ceb46f2595d4874ab695bbabd80cc3abc2e0480be5208cd62835175484c19.scope - libcontainer container 805ceb46f2595d4874ab695bbabd80cc3abc2e0480be5208cd62835175484c19. Feb 13 19:19:25.214068 containerd[1489]: time="2025-02-13T19:19:25.210950020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:25.214068 containerd[1489]: time="2025-02-13T19:19:25.212308824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:25.214068 containerd[1489]: time="2025-02-13T19:19:25.212337207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:25.214068 containerd[1489]: time="2025-02-13T19:19:25.212451787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:25.245365 systemd[1]: Started cri-containerd-dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080.scope - libcontainer container dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080. Feb 13 19:19:25.264204 containerd[1489]: time="2025-02-13T19:19:25.264150404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rgwfn,Uid:4bd29a55-d63d-4b88-8021-876c04a98261,Namespace:kube-system,Attempt:0,} returns sandbox id \"805ceb46f2595d4874ab695bbabd80cc3abc2e0480be5208cd62835175484c19\"" Feb 13 19:19:25.274624 containerd[1489]: time="2025-02-13T19:19:25.274418548Z" level=info msg="CreateContainer within sandbox \"805ceb46f2595d4874ab695bbabd80cc3abc2e0480be5208cd62835175484c19\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:19:25.281750 containerd[1489]: time="2025-02-13T19:19:25.281706793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-p6gh2,Uid:185e0745-ec44-435f-88b2-242986d1231b,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:25.299430 containerd[1489]: time="2025-02-13T19:19:25.299376391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xd6r5,Uid:8264ec80-d112-4d33-adb6-113c4e34b0b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\"" Feb 13 19:19:25.302629 containerd[1489]: time="2025-02-13T19:19:25.302580717Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 19:19:25.320579 containerd[1489]: time="2025-02-13T19:19:25.320514693Z" level=info msg="CreateContainer within sandbox \"805ceb46f2595d4874ab695bbabd80cc3abc2e0480be5208cd62835175484c19\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bfbfbfef14fdefc9679eb04523702f8b83c5cf17076512a820c2c583352f0eee\"" Feb 13 19:19:25.323237 containerd[1489]: time="2025-02-13T19:19:25.322968064Z" level=info msg="StartContainer for \"bfbfbfef14fdefc9679eb04523702f8b83c5cf17076512a820c2c583352f0eee\"" Feb 13 19:19:25.341329 containerd[1489]: time="2025-02-13T19:19:25.341186457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:25.341561 containerd[1489]: time="2025-02-13T19:19:25.341398783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:25.342020 containerd[1489]: time="2025-02-13T19:19:25.341762320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:25.343021 containerd[1489]: time="2025-02-13T19:19:25.342771080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:25.376319 systemd[1]: Started cri-containerd-87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e.scope - libcontainer container 87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e. Feb 13 19:19:25.379093 systemd[1]: Started cri-containerd-bfbfbfef14fdefc9679eb04523702f8b83c5cf17076512a820c2c583352f0eee.scope - libcontainer container bfbfbfef14fdefc9679eb04523702f8b83c5cf17076512a820c2c583352f0eee. Feb 13 19:19:25.457421 containerd[1489]: time="2025-02-13T19:19:25.457283620Z" level=info msg="StartContainer for \"bfbfbfef14fdefc9679eb04523702f8b83c5cf17076512a820c2c583352f0eee\" returns successfully" Feb 13 19:19:25.467342 containerd[1489]: time="2025-02-13T19:19:25.467025759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-p6gh2,Uid:185e0745-ec44-435f-88b2-242986d1231b,Namespace:kube-system,Attempt:0,} returns sandbox id \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\"" Feb 13 19:19:25.655328 kubelet[2632]: I0213 19:19:25.655152 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rgwfn" podStartSLOduration=1.655125209 podStartE2EDuration="1.655125209s" podCreationTimestamp="2025-02-13 19:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:25.654452489 +0000 UTC m=+7.281845930" watchObservedRunningTime="2025-02-13 19:19:25.655125209 +0000 UTC m=+7.282518652" Feb 13 19:19:29.165950 update_engine[1468]: I20250213 19:19:29.165704 1468 update_attempter.cc:509] Updating boot flags... Feb 13 19:19:29.288245 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (3014) Feb 13 19:19:29.524078 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (3016) Feb 13 19:19:31.134024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294023546.mount: Deactivated successfully. 
Feb 13 19:19:33.827681 containerd[1489]: time="2025-02-13T19:19:33.827615831Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:33.829235 containerd[1489]: time="2025-02-13T19:19:33.829160261Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 19:19:33.830892 containerd[1489]: time="2025-02-13T19:19:33.830817567Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:33.833415 containerd[1489]: time="2025-02-13T19:19:33.832880871Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.529732159s" Feb 13 19:19:33.833415 containerd[1489]: time="2025-02-13T19:19:33.832930428Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 19:19:33.835287 containerd[1489]: time="2025-02-13T19:19:33.835247127Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:19:33.837075 containerd[1489]: time="2025-02-13T19:19:33.837014795Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:19:33.859489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2469842337.mount: Deactivated successfully. Feb 13 19:19:33.861272 containerd[1489]: time="2025-02-13T19:19:33.861217839Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\"" Feb 13 19:19:33.862396 containerd[1489]: time="2025-02-13T19:19:33.862328657Z" level=info msg="StartContainer for \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\"" Feb 13 19:19:33.912554 systemd[1]: Started cri-containerd-8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28.scope - libcontainer container 8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28. Feb 13 19:19:33.958095 containerd[1489]: time="2025-02-13T19:19:33.958014361Z" level=info msg="StartContainer for \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\" returns successfully" Feb 13 19:19:33.973161 systemd[1]: cri-containerd-8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28.scope: Deactivated successfully. Feb 13 19:19:34.851633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28-rootfs.mount: Deactivated successfully. 
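The containerd entries at 19:19:33 above report both the bytes read for the cilium image (166730503) and the pull duration (8.529732159s). A rough throughput check as a Go sketch; this only covers registry transfer of the compressed layers, so the figure is approximate:

```go
package main

import "fmt"

func main() {
	// Values copied from the "stop pulling image" and "Pulled image" entries above.
	const bytesRead = 166730503.0 // compressed bytes fetched from quay.io
	const seconds = 8.529732159   // reported pull duration
	fmt.Printf("~%.1f MB/s\n", bytesRead/seconds/1e6) // ≈ 19.5 MB/s
}
```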
Feb 13 19:19:35.795246 containerd[1489]: time="2025-02-13T19:19:35.795142849Z" level=info msg="shim disconnected" id=8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28 namespace=k8s.io Feb 13 19:19:35.795246 containerd[1489]: time="2025-02-13T19:19:35.795223036Z" level=warning msg="cleaning up after shim disconnected" id=8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28 namespace=k8s.io Feb 13 19:19:35.795246 containerd[1489]: time="2025-02-13T19:19:35.795239150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:35.813956 containerd[1489]: time="2025-02-13T19:19:35.813880058Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:19:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:19:36.691466 containerd[1489]: time="2025-02-13T19:19:36.691223899Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:19:36.751091 containerd[1489]: time="2025-02-13T19:19:36.750998001Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\"" Feb 13 19:19:36.753575 containerd[1489]: time="2025-02-13T19:19:36.751981545Z" level=info msg="StartContainer for \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\"" Feb 13 19:19:36.804679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162061059.mount: Deactivated successfully. Feb 13 19:19:36.819345 systemd[1]: Started cri-containerd-86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea.scope - libcontainer container 86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea. Feb 13 19:19:36.880471 containerd[1489]: time="2025-02-13T19:19:36.878149104Z" level=info msg="StartContainer for \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\" returns successfully" Feb 13 19:19:36.905871 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 19:19:36.906913 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:19:36.907335 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:19:36.915013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:19:36.915753 systemd[1]: cri-containerd-86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea.scope: Deactivated successfully. Feb 13 19:19:36.957189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 19:19:36.990480 containerd[1489]: time="2025-02-13T19:19:36.990403703Z" level=info msg="shim disconnected" id=86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea namespace=k8s.io Feb 13 19:19:36.991652 containerd[1489]: time="2025-02-13T19:19:36.991614744Z" level=warning msg="cleaning up after shim disconnected" id=86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea namespace=k8s.io Feb 13 19:19:36.991832 containerd[1489]: time="2025-02-13T19:19:36.991809742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:37.571913 containerd[1489]: time="2025-02-13T19:19:37.571836624Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:37.573158 containerd[1489]: time="2025-02-13T19:19:37.573086184Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 19:19:37.574401 containerd[1489]: time="2025-02-13T19:19:37.574331672Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:37.576290 containerd[1489]: time="2025-02-13T19:19:37.576249843Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.740952641s" Feb 13 19:19:37.576557 containerd[1489]: time="2025-02-13T19:19:37.576438736Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 19:19:37.579741 containerd[1489]: time="2025-02-13T19:19:37.579633329Z" level=info msg="CreateContainer within sandbox \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 19:19:37.599683 containerd[1489]: time="2025-02-13T19:19:37.599588304Z" level=info msg="CreateContainer within sandbox \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\"" Feb 13 19:19:37.600635 containerd[1489]: time="2025-02-13T19:19:37.600521715Z" level=info msg="StartContainer for \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\"" Feb 13 19:19:37.641368 systemd[1]: Started cri-containerd-2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37.scope - libcontainer container 2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37. 
Feb 13 19:19:37.682321 containerd[1489]: time="2025-02-13T19:19:37.682251679Z" level=info msg="StartContainer for \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\" returns successfully" Feb 13 19:19:37.698971 containerd[1489]: time="2025-02-13T19:19:37.698324890Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:19:37.751670 containerd[1489]: time="2025-02-13T19:19:37.749518111Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\"" Feb 13 19:19:37.753077 containerd[1489]: time="2025-02-13T19:19:37.751836507Z" level=info msg="StartContainer for \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\"" Feb 13 19:19:37.753962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea-rootfs.mount: Deactivated successfully. Feb 13 19:19:37.853182 systemd[1]: run-containerd-runc-k8s.io-92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f-runc.QUCio7.mount: Deactivated successfully. Feb 13 19:19:37.864279 systemd[1]: Started cri-containerd-92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f.scope - libcontainer container 92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f. Feb 13 19:19:37.945740 containerd[1489]: time="2025-02-13T19:19:37.945663852Z" level=info msg="StartContainer for \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\" returns successfully" Feb 13 19:19:37.949492 systemd[1]: cri-containerd-92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f.scope: Deactivated successfully. Feb 13 19:19:38.005425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f-rootfs.mount: Deactivated successfully. 
Feb 13 19:19:38.156805 containerd[1489]: time="2025-02-13T19:19:38.156619354Z" level=info msg="shim disconnected" id=92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f namespace=k8s.io Feb 13 19:19:38.156805 containerd[1489]: time="2025-02-13T19:19:38.156705100Z" level=warning msg="cleaning up after shim disconnected" id=92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f namespace=k8s.io Feb 13 19:19:38.156805 containerd[1489]: time="2025-02-13T19:19:38.156719597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:38.711657 containerd[1489]: time="2025-02-13T19:19:38.711078586Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:19:38.732834 containerd[1489]: time="2025-02-13T19:19:38.732770308Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\"" Feb 13 19:19:38.733894 containerd[1489]: time="2025-02-13T19:19:38.733547385Z" level=info msg="StartContainer for \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\"" Feb 13 19:19:38.845992 systemd[1]: Started cri-containerd-885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36.scope - libcontainer container 885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36. Feb 13 19:19:38.870321 kubelet[2632]: I0213 19:19:38.870227 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-p6gh2" podStartSLOduration=2.7621251559999997 podStartE2EDuration="14.870199865s" podCreationTimestamp="2025-02-13 19:19:24 +0000 UTC" firstStartedPulling="2025-02-13 19:19:25.469426612 +0000 UTC m=+7.096820044" lastFinishedPulling="2025-02-13 19:19:37.577501316 +0000 UTC m=+19.204894753" observedRunningTime="2025-02-13 19:19:37.815625396 +0000 UTC m=+19.443018859" watchObservedRunningTime="2025-02-13 19:19:38.870199865 +0000 UTC m=+20.497593308" Feb 13 19:19:38.974712 systemd[1]: cri-containerd-885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36.scope: Deactivated successfully. Feb 13 19:19:38.980242 containerd[1489]: time="2025-02-13T19:19:38.980192251Z" level=info msg="StartContainer for \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\" returns successfully" Feb 13 19:19:39.047648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36-rootfs.mount: Deactivated successfully. 
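The pod_startup_latency_tracker entry for cilium-operator above contains enough fields to relate its two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that value minus the image-pull window (lastFinishedPulling - firstStartedPulling). A short Go sketch using the monotonic (m=+) offsets from that entry; the "minus pull time" reading is an inference that happens to reproduce the logged number:

```go
package main

import "fmt"

func main() {
	// Monotonic offsets copied from the cilium-operator entry above.
	const firstStartedPulling = 7.096820044
	const lastFinishedPulling = 19.204894753
	const e2e = 14.870199865 // podStartE2EDuration in seconds

	pull := lastFinishedPulling - firstStartedPulling // ≈ 12.108074709
	fmt.Printf("SLO duration ≈ %.9f s\n", e2e-pull)   // ≈ 2.762125156, matching the log
}
```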
Feb 13 19:19:39.052516 containerd[1489]: time="2025-02-13T19:19:39.048027903Z" level=info msg="shim disconnected" id=885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36 namespace=k8s.io Feb 13 19:19:39.052516 containerd[1489]: time="2025-02-13T19:19:39.048128588Z" level=warning msg="cleaning up after shim disconnected" id=885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36 namespace=k8s.io Feb 13 19:19:39.052516 containerd[1489]: time="2025-02-13T19:19:39.048144679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:19:39.723737 containerd[1489]: time="2025-02-13T19:19:39.723643252Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:19:39.756652 containerd[1489]: time="2025-02-13T19:19:39.756568286Z" level=info msg="CreateContainer within sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\"" Feb 13 19:19:39.757725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1804949270.mount: Deactivated successfully. Feb 13 19:19:39.757904 containerd[1489]: time="2025-02-13T19:19:39.757719050Z" level=info msg="StartContainer for \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\"" Feb 13 19:19:39.806345 systemd[1]: Started cri-containerd-38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994.scope - libcontainer container 38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994. Feb 13 19:19:39.852575 containerd[1489]: time="2025-02-13T19:19:39.852490321Z" level=info msg="StartContainer for \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\" returns successfully" Feb 13 19:19:39.986652 kubelet[2632]: I0213 19:19:39.984723 2632 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 19:19:40.040460 systemd[1]: Created slice kubepods-burstable-podf562332d_f54e_4875_b6dd_a75973f3a384.slice - libcontainer container kubepods-burstable-podf562332d_f54e_4875_b6dd_a75973f3a384.slice. Feb 13 19:19:40.057470 systemd[1]: Created slice kubepods-burstable-poda1b16578_9ee5_4500_b773_9525576c5c5f.slice - libcontainer container kubepods-burstable-poda1b16578_9ee5_4500_b773_9525576c5c5f.slice. 
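The "Created slice" entries above follow one naming pattern for the systemd cgroup driver: kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID rewritten to underscores. A minimal helper reproducing that mapping, derived from the names in this log rather than from kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible in the "Created slice" entries:
// kubepods-<qos>-pod<uid>.slice, with "-" in the UID replaced by "_".
func podSliceName(qosClass, podUID string) string {
	return "kubepods-" + qosClass + "-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(podSliceName("burstable", "a1b16578-9ee5-4500-b773-9525576c5c5f"))
	// kubepods-burstable-poda1b16578_9ee5_4500_b773_9525576c5c5f.slice
}
```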
Feb 13 19:19:40.105116 kubelet[2632]: I0213 19:19:40.105066 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1b16578-9ee5-4500-b773-9525576c5c5f-config-volume\") pod \"coredns-6f6b679f8f-8g24w\" (UID: \"a1b16578-9ee5-4500-b773-9525576c5c5f\") " pod="kube-system/coredns-6f6b679f8f-8g24w" Feb 13 19:19:40.105325 kubelet[2632]: I0213 19:19:40.105138 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f562332d-f54e-4875-b6dd-a75973f3a384-config-volume\") pod \"coredns-6f6b679f8f-mg7xx\" (UID: \"f562332d-f54e-4875-b6dd-a75973f3a384\") " pod="kube-system/coredns-6f6b679f8f-mg7xx" Feb 13 19:19:40.105325 kubelet[2632]: I0213 19:19:40.105173 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjv6m\" (UniqueName: \"kubernetes.io/projected/f562332d-f54e-4875-b6dd-a75973f3a384-kube-api-access-qjv6m\") pod \"coredns-6f6b679f8f-mg7xx\" (UID: \"f562332d-f54e-4875-b6dd-a75973f3a384\") " pod="kube-system/coredns-6f6b679f8f-mg7xx" Feb 13 19:19:40.105325 kubelet[2632]: I0213 19:19:40.105205 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4wn\" (UniqueName: \"kubernetes.io/projected/a1b16578-9ee5-4500-b773-9525576c5c5f-kube-api-access-7q4wn\") pod \"coredns-6f6b679f8f-8g24w\" (UID: \"a1b16578-9ee5-4500-b773-9525576c5c5f\") " pod="kube-system/coredns-6f6b679f8f-8g24w" Feb 13 19:19:40.351133 containerd[1489]: time="2025-02-13T19:19:40.350533636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mg7xx,Uid:f562332d-f54e-4875-b6dd-a75973f3a384,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:40.365098 containerd[1489]: time="2025-02-13T19:19:40.364512469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8g24w,Uid:a1b16578-9ee5-4500-b773-9525576c5c5f,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:42.405279 systemd-networkd[1388]: cilium_host: Link UP Feb 13 19:19:42.406222 systemd-networkd[1388]: cilium_net: Link UP Feb 13 19:19:42.407497 systemd-networkd[1388]: cilium_net: Gained carrier Feb 13 19:19:42.407832 systemd-networkd[1388]: cilium_host: Gained carrier Feb 13 19:19:42.408069 systemd-networkd[1388]: cilium_net: Gained IPv6LL Feb 13 19:19:42.408334 systemd-networkd[1388]: cilium_host: Gained IPv6LL Feb 13 19:19:42.556558 systemd-networkd[1388]: cilium_vxlan: Link UP Feb 13 19:19:42.556570 systemd-networkd[1388]: cilium_vxlan: Gained carrier Feb 13 19:19:42.838108 kernel: NET: Registered PF_ALG protocol family Feb 13 19:19:43.702671 systemd-networkd[1388]: lxc_health: Link UP Feb 13 19:19:43.705335 systemd-networkd[1388]: lxc_health: Gained carrier Feb 13 19:19:43.963257 kernel: eth0: renamed from tmp520d8 Feb 13 19:19:43.956484 systemd-networkd[1388]: lxcbabb9b08955d: Link UP Feb 13 19:19:43.978718 systemd-networkd[1388]: lxcbabb9b08955d: Gained carrier Feb 13 19:19:44.288820 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL Feb 13 19:19:44.431636 systemd-networkd[1388]: lxc075c119c3ade: Link UP Feb 13 19:19:44.439666 kernel: eth0: renamed from tmp8a2bd Feb 13 19:19:44.451100 systemd-networkd[1388]: lxc075c119c3ade: Gained carrier Feb 13 19:19:45.192788 kubelet[2632]: I0213 19:19:45.192234 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xd6r5" 
podStartSLOduration=12.658888491999999 podStartE2EDuration="21.19220967s" podCreationTimestamp="2025-02-13 19:19:24 +0000 UTC" firstStartedPulling="2025-02-13 19:19:25.301187802 +0000 UTC m=+6.928581227" lastFinishedPulling="2025-02-13 19:19:33.834508982 +0000 UTC m=+15.461902405" observedRunningTime="2025-02-13 19:19:40.76540417 +0000 UTC m=+22.392797635" watchObservedRunningTime="2025-02-13 19:19:45.19220967 +0000 UTC m=+26.819603113" Feb 13 19:19:45.506225 systemd-networkd[1388]: lxc_health: Gained IPv6LL Feb 13 19:19:45.888638 systemd-networkd[1388]: lxcbabb9b08955d: Gained IPv6LL Feb 13 19:19:46.208753 systemd-networkd[1388]: lxc075c119c3ade: Gained IPv6LL Feb 13 19:19:46.577028 kubelet[2632]: I0213 19:19:46.576096 2632 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:19:48.247100 ntpd[1448]: Listen normally on 7 cilium_host 192.168.0.82:123 Feb 13 19:19:48.248326 ntpd[1448]: 13 Feb 19:19:48 ntpd[1448]: Listen normally on 7 cilium_host 192.168.0.82:123 Feb 13 19:19:48.248326 ntpd[1448]: 13 Feb 19:19:48 ntpd[1448]: Listen normally on 8 cilium_net [fe80::1ce9:d0ff:fe22:3d11%4]:123 Feb 13 19:19:48.248326 ntpd[1448]: 13 Feb 19:19:48 ntpd[1448]: Listen normally on 9 cilium_host [fe80::cce8:c3ff:feeb:eca%5]:123 Feb 13 19:19:48.248326 ntpd[1448]: 13 Feb 19:19:48 ntpd[1448]: Listen normally on 10 cilium_vxlan [fe80::3855:e1ff:fe76:3ecb%6]:123 Feb 13 19:19:48.248326 ntpd[1448]: 13 Feb 19:19:48 ntpd[1448]: Listen normally on 11 lxc_health [fe80::5484:e0ff:fec8:5f88%8]:123 Feb 13 19:19:48.248326 ntpd[1448]: 13 Feb 19:19:48 ntpd[1448]: Listen normally on 12 lxcbabb9b08955d [fe80::4f4:36ff:fe69:970e%10]:123 Feb 13 19:19:48.248326 ntpd[1448]: 13 Feb 19:19:48 ntpd[1448]: Listen normally on 13 lxc075c119c3ade [fe80::68b6:7ff:fe6a:ed21%12]:123 Feb 13 19:19:48.247245 ntpd[1448]: Listen normally on 8 cilium_net [fe80::1ce9:d0ff:fe22:3d11%4]:123 Feb 13 19:19:48.247324 ntpd[1448]: Listen normally on 9 cilium_host [fe80::cce8:c3ff:feeb:eca%5]:123 Feb 13 19:19:48.247385 ntpd[1448]: Listen normally on 10 cilium_vxlan [fe80::3855:e1ff:fe76:3ecb%6]:123 Feb 13 19:19:48.247440 ntpd[1448]: Listen normally on 11 lxc_health [fe80::5484:e0ff:fec8:5f88%8]:123 Feb 13 19:19:48.247495 ntpd[1448]: Listen normally on 12 lxcbabb9b08955d [fe80::4f4:36ff:fe69:970e%10]:123 Feb 13 19:19:48.247565 ntpd[1448]: Listen normally on 13 lxc075c119c3ade [fe80::68b6:7ff:fe6a:ed21%12]:123 Feb 13 19:19:49.333758 containerd[1489]: time="2025-02-13T19:19:49.333371113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:49.333758 containerd[1489]: time="2025-02-13T19:19:49.333479030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:49.333758 containerd[1489]: time="2025-02-13T19:19:49.333507857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:49.333758 containerd[1489]: time="2025-02-13T19:19:49.333633996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:49.365843 containerd[1489]: time="2025-02-13T19:19:49.364101576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:49.365843 containerd[1489]: time="2025-02-13T19:19:49.364484919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:49.365843 containerd[1489]: time="2025-02-13T19:19:49.364969342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:49.367210 containerd[1489]: time="2025-02-13T19:19:49.367097458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:49.407600 systemd[1]: Started cri-containerd-520d86f0466ed8f297bac1ef781f484622bd6953d3c78c6a95bc3805c824b896.scope - libcontainer container 520d86f0466ed8f297bac1ef781f484622bd6953d3c78c6a95bc3805c824b896. Feb 13 19:19:49.445924 systemd[1]: Started cri-containerd-8a2bd3f574d4d09e039052c014544cca629df5e2b67acf4e442fd408594f1b2a.scope - libcontainer container 8a2bd3f574d4d09e039052c014544cca629df5e2b67acf4e442fd408594f1b2a. Feb 13 19:19:49.543557 containerd[1489]: time="2025-02-13T19:19:49.543492456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-8g24w,Uid:a1b16578-9ee5-4500-b773-9525576c5c5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"520d86f0466ed8f297bac1ef781f484622bd6953d3c78c6a95bc3805c824b896\"" Feb 13 19:19:49.550580 containerd[1489]: time="2025-02-13T19:19:49.550359942Z" level=info msg="CreateContainer within sandbox \"520d86f0466ed8f297bac1ef781f484622bd6953d3c78c6a95bc3805c824b896\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:19:49.580069 containerd[1489]: time="2025-02-13T19:19:49.578814147Z" level=info msg="CreateContainer within sandbox \"520d86f0466ed8f297bac1ef781f484622bd6953d3c78c6a95bc3805c824b896\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"86033f04f1acfe07299bc23f6ed9bc45c02163f21479c6dd90170852f683b8ff\"" Feb 13 19:19:49.581326 containerd[1489]: time="2025-02-13T19:19:49.581213146Z" level=info msg="StartContainer for \"86033f04f1acfe07299bc23f6ed9bc45c02163f21479c6dd90170852f683b8ff\"" Feb 13 19:19:49.651113 containerd[1489]: time="2025-02-13T19:19:49.649384125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-mg7xx,Uid:f562332d-f54e-4875-b6dd-a75973f3a384,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a2bd3f574d4d09e039052c014544cca629df5e2b67acf4e442fd408594f1b2a\"" Feb 13 19:19:49.658424 systemd[1]: Started cri-containerd-86033f04f1acfe07299bc23f6ed9bc45c02163f21479c6dd90170852f683b8ff.scope - libcontainer container 86033f04f1acfe07299bc23f6ed9bc45c02163f21479c6dd90170852f683b8ff. 
Feb 13 19:19:49.665931 containerd[1489]: time="2025-02-13T19:19:49.665716311Z" level=info msg="CreateContainer within sandbox \"8a2bd3f574d4d09e039052c014544cca629df5e2b67acf4e442fd408594f1b2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:19:49.697502 containerd[1489]: time="2025-02-13T19:19:49.697423140Z" level=info msg="CreateContainer within sandbox \"8a2bd3f574d4d09e039052c014544cca629df5e2b67acf4e442fd408594f1b2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4886888cbc71f023ea7ea8a56f383df377dccd94c4bb02e3df5b6d2b2ae7d2ee\"" Feb 13 19:19:49.699845 containerd[1489]: time="2025-02-13T19:19:49.699517587Z" level=info msg="StartContainer for \"4886888cbc71f023ea7ea8a56f383df377dccd94c4bb02e3df5b6d2b2ae7d2ee\"" Feb 13 19:19:49.745129 containerd[1489]: time="2025-02-13T19:19:49.743890867Z" level=info msg="StartContainer for \"86033f04f1acfe07299bc23f6ed9bc45c02163f21479c6dd90170852f683b8ff\" returns successfully" Feb 13 19:19:49.784013 systemd[1]: Started cri-containerd-4886888cbc71f023ea7ea8a56f383df377dccd94c4bb02e3df5b6d2b2ae7d2ee.scope - libcontainer container 4886888cbc71f023ea7ea8a56f383df377dccd94c4bb02e3df5b6d2b2ae7d2ee. Feb 13 19:19:49.884078 containerd[1489]: time="2025-02-13T19:19:49.882382046Z" level=info msg="StartContainer for \"4886888cbc71f023ea7ea8a56f383df377dccd94c4bb02e3df5b6d2b2ae7d2ee\" returns successfully" Feb 13 19:19:50.350700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount546881579.mount: Deactivated successfully. Feb 13 19:19:50.387193 kubelet[2632]: I0213 19:19:50.386657 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-8g24w" podStartSLOduration=26.386629698 podStartE2EDuration="26.386629698s" podCreationTimestamp="2025-02-13 19:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:49.812758079 +0000 UTC m=+31.440151527" watchObservedRunningTime="2025-02-13 19:19:50.386629698 +0000 UTC m=+32.014023144" Feb 13 19:19:50.805120 kubelet[2632]: I0213 19:19:50.805023 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-mg7xx" podStartSLOduration=26.804995207 podStartE2EDuration="26.804995207s" podCreationTimestamp="2025-02-13 19:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:19:50.803934444 +0000 UTC m=+32.431327888" watchObservedRunningTime="2025-02-13 19:19:50.804995207 +0000 UTC m=+32.432388649" Feb 13 19:20:07.075534 systemd[1]: Started sshd@7-10.128.0.48:22-147.75.109.163:46186.service - OpenSSH per-connection server daemon (147.75.109.163:46186). Feb 13 19:20:07.373953 sshd[4026]: Accepted publickey for core from 147.75.109.163 port 46186 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:07.375912 sshd-session[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:07.382323 systemd-logind[1466]: New session 8 of user core. Feb 13 19:20:07.388307 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:20:07.694103 sshd[4028]: Connection closed by 147.75.109.163 port 46186 Feb 13 19:20:07.695382 sshd-session[4026]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:07.701415 systemd[1]: sshd@7-10.128.0.48:22-147.75.109.163:46186.service: Deactivated successfully. 
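For the coredns pods above, firstStartedPulling and lastFinishedPulling are the zero time, so podStartSLOduration equals podStartE2EDuration; the E2E figure itself can be reproduced by subtracting podCreationTimestamp from watchObservedRunningTime. A small Go check parsing the timestamps exactly as they appear in the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the coredns-6f6b679f8f-8g24w entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-02-13 19:19:24 +0000 UTC")
	running, _ := time.Parse(layout, "2025-02-13 19:19:50.386629698 +0000 UTC")
	fmt.Println(running.Sub(created)) // 26.386629698s, matching podStartE2EDuration
}
```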
Feb 13 19:20:07.704878 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:20:07.706419 systemd-logind[1466]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:20:07.708314 systemd-logind[1466]: Removed session 8. Feb 13 19:20:12.749523 systemd[1]: Started sshd@8-10.128.0.48:22-147.75.109.163:35688.service - OpenSSH per-connection server daemon (147.75.109.163:35688). Feb 13 19:20:13.051838 sshd[4044]: Accepted publickey for core from 147.75.109.163 port 35688 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:13.053690 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:13.060314 systemd-logind[1466]: New session 9 of user core. Feb 13 19:20:13.070370 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:20:13.344859 sshd[4046]: Connection closed by 147.75.109.163 port 35688 Feb 13 19:20:13.346222 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:13.352389 systemd[1]: sshd@8-10.128.0.48:22-147.75.109.163:35688.service: Deactivated successfully. Feb 13 19:20:13.355349 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:20:13.356768 systemd-logind[1466]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:20:13.358513 systemd-logind[1466]: Removed session 9. Feb 13 19:20:18.402509 systemd[1]: Started sshd@9-10.128.0.48:22-147.75.109.163:35700.service - OpenSSH per-connection server daemon (147.75.109.163:35700). Feb 13 19:20:18.701628 sshd[4059]: Accepted publickey for core from 147.75.109.163 port 35700 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:18.703589 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:18.709986 systemd-logind[1466]: New session 10 of user core. Feb 13 19:20:18.716371 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:20:18.993539 sshd[4063]: Connection closed by 147.75.109.163 port 35700 Feb 13 19:20:18.994470 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:19.000189 systemd[1]: sshd@9-10.128.0.48:22-147.75.109.163:35700.service: Deactivated successfully. Feb 13 19:20:19.003670 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:20:19.004975 systemd-logind[1466]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:20:19.006702 systemd-logind[1466]: Removed session 10. Feb 13 19:20:24.052577 systemd[1]: Started sshd@10-10.128.0.48:22-147.75.109.163:50964.service - OpenSSH per-connection server daemon (147.75.109.163:50964). Feb 13 19:20:24.352857 sshd[4077]: Accepted publickey for core from 147.75.109.163 port 50964 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:24.354851 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:24.361200 systemd-logind[1466]: New session 11 of user core. Feb 13 19:20:24.368347 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:20:24.654463 sshd[4079]: Connection closed by 147.75.109.163 port 50964 Feb 13 19:20:24.655845 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:24.661420 systemd[1]: sshd@10-10.128.0.48:22-147.75.109.163:50964.service: Deactivated successfully. Feb 13 19:20:24.664873 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:20:24.666262 systemd-logind[1466]: Session 11 logged out. Waiting for processes to exit. 
Feb 13 19:20:24.667883 systemd-logind[1466]: Removed session 11. Feb 13 19:20:24.715535 systemd[1]: Started sshd@11-10.128.0.48:22-147.75.109.163:50976.service - OpenSSH per-connection server daemon (147.75.109.163:50976). Feb 13 19:20:25.008984 sshd[4092]: Accepted publickey for core from 147.75.109.163 port 50976 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:25.011011 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:25.018507 systemd-logind[1466]: New session 12 of user core. Feb 13 19:20:25.029375 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:20:25.357879 sshd[4094]: Connection closed by 147.75.109.163 port 50976 Feb 13 19:20:25.359280 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:25.364610 systemd[1]: sshd@11-10.128.0.48:22-147.75.109.163:50976.service: Deactivated successfully. Feb 13 19:20:25.368260 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:20:25.370913 systemd-logind[1466]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:20:25.372923 systemd-logind[1466]: Removed session 12. Feb 13 19:20:25.417948 systemd[1]: Started sshd@12-10.128.0.48:22-147.75.109.163:50984.service - OpenSSH per-connection server daemon (147.75.109.163:50984). Feb 13 19:20:25.715169 sshd[4104]: Accepted publickey for core from 147.75.109.163 port 50984 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:25.717095 sshd-session[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:25.723465 systemd-logind[1466]: New session 13 of user core. Feb 13 19:20:25.732380 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:20:26.011767 sshd[4108]: Connection closed by 147.75.109.163 port 50984 Feb 13 19:20:26.012434 sshd-session[4104]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:26.017572 systemd[1]: sshd@12-10.128.0.48:22-147.75.109.163:50984.service: Deactivated successfully. Feb 13 19:20:26.021179 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:20:26.023920 systemd-logind[1466]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:20:26.025964 systemd-logind[1466]: Removed session 13. Feb 13 19:20:31.074613 systemd[1]: Started sshd@13-10.128.0.48:22-147.75.109.163:52668.service - OpenSSH per-connection server daemon (147.75.109.163:52668). Feb 13 19:20:31.367988 sshd[4120]: Accepted publickey for core from 147.75.109.163 port 52668 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:31.369979 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:31.377508 systemd-logind[1466]: New session 14 of user core. Feb 13 19:20:31.382328 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:20:31.662480 sshd[4122]: Connection closed by 147.75.109.163 port 52668 Feb 13 19:20:31.663415 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:31.667828 systemd[1]: sshd@13-10.128.0.48:22-147.75.109.163:52668.service: Deactivated successfully. Feb 13 19:20:31.670782 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:20:31.673105 systemd-logind[1466]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:20:31.674942 systemd-logind[1466]: Removed session 14. 
Feb 13 19:20:36.724513 systemd[1]: Started sshd@14-10.128.0.48:22-147.75.109.163:52672.service - OpenSSH per-connection server daemon (147.75.109.163:52672). Feb 13 19:20:37.029413 sshd[4133]: Accepted publickey for core from 147.75.109.163 port 52672 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:37.031388 sshd-session[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:37.037812 systemd-logind[1466]: New session 15 of user core. Feb 13 19:20:37.047369 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:20:37.334936 sshd[4135]: Connection closed by 147.75.109.163 port 52672 Feb 13 19:20:37.336258 sshd-session[4133]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:37.341082 systemd[1]: sshd@14-10.128.0.48:22-147.75.109.163:52672.service: Deactivated successfully. Feb 13 19:20:37.343975 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:20:37.346400 systemd-logind[1466]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:20:37.348109 systemd-logind[1466]: Removed session 15. Feb 13 19:20:37.398500 systemd[1]: Started sshd@15-10.128.0.48:22-147.75.109.163:52674.service - OpenSSH per-connection server daemon (147.75.109.163:52674). Feb 13 19:20:37.696626 sshd[4147]: Accepted publickey for core from 147.75.109.163 port 52674 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:37.698691 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:37.704883 systemd-logind[1466]: New session 16 of user core. Feb 13 19:20:37.710330 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:20:38.064349 sshd[4149]: Connection closed by 147.75.109.163 port 52674 Feb 13 19:20:38.065266 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:38.071241 systemd[1]: sshd@15-10.128.0.48:22-147.75.109.163:52674.service: Deactivated successfully. Feb 13 19:20:38.074734 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:20:38.076076 systemd-logind[1466]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:20:38.077728 systemd-logind[1466]: Removed session 16. Feb 13 19:20:38.122498 systemd[1]: Started sshd@16-10.128.0.48:22-147.75.109.163:52684.service - OpenSSH per-connection server daemon (147.75.109.163:52684). Feb 13 19:20:38.422198 sshd[4159]: Accepted publickey for core from 147.75.109.163 port 52684 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:38.424427 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:38.434444 systemd-logind[1466]: New session 17 of user core. Feb 13 19:20:38.439311 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:20:40.271295 sshd[4161]: Connection closed by 147.75.109.163 port 52684 Feb 13 19:20:40.272321 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:40.278862 systemd[1]: sshd@16-10.128.0.48:22-147.75.109.163:52684.service: Deactivated successfully. Feb 13 19:20:40.282548 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:20:40.283925 systemd-logind[1466]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:20:40.285796 systemd-logind[1466]: Removed session 17. Feb 13 19:20:40.327997 systemd[1]: Started sshd@17-10.128.0.48:22-147.75.109.163:45904.service - OpenSSH per-connection server daemon (147.75.109.163:45904). 
Feb 13 19:20:40.623987 sshd[4178]: Accepted publickey for core from 147.75.109.163 port 45904 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:40.626141 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:40.634327 systemd-logind[1466]: New session 18 of user core. Feb 13 19:20:40.641342 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:20:41.055754 sshd[4180]: Connection closed by 147.75.109.163 port 45904 Feb 13 19:20:41.056402 sshd-session[4178]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:41.063788 systemd[1]: sshd@17-10.128.0.48:22-147.75.109.163:45904.service: Deactivated successfully. Feb 13 19:20:41.066925 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:20:41.068584 systemd-logind[1466]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:20:41.070473 systemd-logind[1466]: Removed session 18. Feb 13 19:20:41.119489 systemd[1]: Started sshd@18-10.128.0.48:22-147.75.109.163:45908.service - OpenSSH per-connection server daemon (147.75.109.163:45908). Feb 13 19:20:41.409918 sshd[4189]: Accepted publickey for core from 147.75.109.163 port 45908 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:41.411770 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:41.417588 systemd-logind[1466]: New session 19 of user core. Feb 13 19:20:41.424328 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:20:41.694347 sshd[4191]: Connection closed by 147.75.109.163 port 45908 Feb 13 19:20:41.695594 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:41.699994 systemd[1]: sshd@18-10.128.0.48:22-147.75.109.163:45908.service: Deactivated successfully. Feb 13 19:20:41.702724 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:20:41.705497 systemd-logind[1466]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:20:41.707260 systemd-logind[1466]: Removed session 19. Feb 13 19:20:46.750550 systemd[1]: Started sshd@19-10.128.0.48:22-147.75.109.163:45920.service - OpenSSH per-connection server daemon (147.75.109.163:45920). Feb 13 19:20:47.052104 sshd[4202]: Accepted publickey for core from 147.75.109.163 port 45920 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:47.054595 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:47.064131 systemd-logind[1466]: New session 20 of user core. Feb 13 19:20:47.069324 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:20:47.342564 sshd[4204]: Connection closed by 147.75.109.163 port 45920 Feb 13 19:20:47.344029 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:47.348741 systemd[1]: sshd@19-10.128.0.48:22-147.75.109.163:45920.service: Deactivated successfully. Feb 13 19:20:47.352298 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:20:47.355106 systemd-logind[1466]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:20:47.357319 systemd-logind[1466]: Removed session 20. Feb 13 19:20:52.408604 systemd[1]: Started sshd@20-10.128.0.48:22-147.75.109.163:34528.service - OpenSSH per-connection server daemon (147.75.109.163:34528). 
Feb 13 19:20:52.702965 sshd[4219]: Accepted publickey for core from 147.75.109.163 port 34528 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:52.704727 sshd-session[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:52.711471 systemd-logind[1466]: New session 21 of user core. Feb 13 19:20:52.719438 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:20:52.996851 sshd[4221]: Connection closed by 147.75.109.163 port 34528 Feb 13 19:20:52.997866 sshd-session[4219]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:53.002932 systemd[1]: sshd@20-10.128.0.48:22-147.75.109.163:34528.service: Deactivated successfully. Feb 13 19:20:53.006235 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:20:53.008807 systemd-logind[1466]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:20:53.010783 systemd-logind[1466]: Removed session 21. Feb 13 19:20:58.052507 systemd[1]: Started sshd@21-10.128.0.48:22-147.75.109.163:34532.service - OpenSSH per-connection server daemon (147.75.109.163:34532). Feb 13 19:20:58.358517 sshd[4235]: Accepted publickey for core from 147.75.109.163 port 34532 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:20:58.360477 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:20:58.368096 systemd-logind[1466]: New session 22 of user core. Feb 13 19:20:58.377344 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:20:58.651982 sshd[4237]: Connection closed by 147.75.109.163 port 34532 Feb 13 19:20:58.653397 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Feb 13 19:20:58.658446 systemd[1]: sshd@21-10.128.0.48:22-147.75.109.163:34532.service: Deactivated successfully. Feb 13 19:20:58.662005 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:20:58.664330 systemd-logind[1466]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:20:58.665837 systemd-logind[1466]: Removed session 22. Feb 13 19:21:01.165610 update_engine[1468]: I20250213 19:21:01.165518 1468 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 13 19:21:01.165610 update_engine[1468]: I20250213 19:21:01.165591 1468 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 13 19:21:01.166292 update_engine[1468]: I20250213 19:21:01.165896 1468 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 13 19:21:01.166861 update_engine[1468]: I20250213 19:21:01.166671 1468 omaha_request_params.cc:62] Current group set to alpha Feb 13 19:21:01.166959 update_engine[1468]: I20250213 19:21:01.166867 1468 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 13 19:21:01.166959 update_engine[1468]: I20250213 19:21:01.166887 1468 update_attempter.cc:643] Scheduling an action processor start. 
Feb 13 19:21:01.166959 update_engine[1468]: I20250213 19:21:01.166913 1468 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 13 19:21:01.167131 update_engine[1468]: I20250213 19:21:01.166963 1468 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 13 19:21:01.168007 update_engine[1468]: I20250213 19:21:01.167253 1468 omaha_request_action.cc:271] Posting an Omaha request to disabled Feb 13 19:21:01.168007 update_engine[1468]: I20250213 19:21:01.167282 1468 omaha_request_action.cc:272] Request: Feb 13 19:21:01.168007 update_engine[1468]: Feb 13 19:21:01.168007 update_engine[1468]: Feb 13 19:21:01.168007 update_engine[1468]: Feb 13 19:21:01.168007 update_engine[1468]: Feb 13 19:21:01.168007 update_engine[1468]: Feb 13 19:21:01.168007 update_engine[1468]: Feb 13 19:21:01.168007 update_engine[1468]: Feb 13 19:21:01.168007 update_engine[1468]: Feb 13 19:21:01.168007 update_engine[1468]: I20250213 19:21:01.167295 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:21:01.168885 locksmithd[1519]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 13 19:21:01.169400 update_engine[1468]: I20250213 19:21:01.169363 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:21:01.169867 update_engine[1468]: I20250213 19:21:01.169800 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:21:01.220703 update_engine[1468]: E20250213 19:21:01.220599 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:21:01.220878 update_engine[1468]: I20250213 19:21:01.220757 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 13 19:21:03.712585 systemd[1]: Started sshd@22-10.128.0.48:22-147.75.109.163:48566.service - OpenSSH per-connection server daemon (147.75.109.163:48566). Feb 13 19:21:04.013913 sshd[4249]: Accepted publickey for core from 147.75.109.163 port 48566 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:21:04.015796 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:04.023259 systemd-logind[1466]: New session 23 of user core. Feb 13 19:21:04.028346 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:21:04.307481 sshd[4251]: Connection closed by 147.75.109.163 port 48566 Feb 13 19:21:04.308738 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:04.314717 systemd[1]: sshd@22-10.128.0.48:22-147.75.109.163:48566.service: Deactivated successfully. Feb 13 19:21:04.318227 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:21:04.319428 systemd-logind[1466]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:21:04.320820 systemd-logind[1466]: Removed session 23. Feb 13 19:21:04.368526 systemd[1]: Started sshd@23-10.128.0.48:22-147.75.109.163:48568.service - OpenSSH per-connection server daemon (147.75.109.163:48568). Feb 13 19:21:04.666402 sshd[4263]: Accepted publickey for core from 147.75.109.163 port 48568 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:21:04.668373 sshd-session[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:04.674519 systemd-logind[1466]: New session 24 of user core. Feb 13 19:21:04.680284 systemd[1]: Started session-24.scope - Session 24 of User core. 
Feb 13 19:21:06.378837 containerd[1489]: time="2025-02-13T19:21:06.378719830Z" level=info msg="StopContainer for \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\" with timeout 30 (s)" Feb 13 19:21:06.380948 containerd[1489]: time="2025-02-13T19:21:06.380185751Z" level=info msg="Stop container \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\" with signal terminated" Feb 13 19:21:06.413808 systemd[1]: cri-containerd-2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37.scope: Deactivated successfully. Feb 13 19:21:06.420258 containerd[1489]: time="2025-02-13T19:21:06.420171424Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:21:06.434355 containerd[1489]: time="2025-02-13T19:21:06.434306130Z" level=info msg="StopContainer for \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\" with timeout 2 (s)" Feb 13 19:21:06.435211 containerd[1489]: time="2025-02-13T19:21:06.435131765Z" level=info msg="Stop container \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\" with signal terminated" Feb 13 19:21:06.448210 systemd-networkd[1388]: lxc_health: Link DOWN Feb 13 19:21:06.448228 systemd-networkd[1388]: lxc_health: Lost carrier Feb 13 19:21:06.471634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37-rootfs.mount: Deactivated successfully. Feb 13 19:21:06.476211 systemd[1]: cri-containerd-38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994.scope: Deactivated successfully. Feb 13 19:21:06.477152 systemd[1]: cri-containerd-38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994.scope: Consumed 9.918s CPU time, 126.2M memory peak, 136K read from disk, 13.3M written to disk. Feb 13 19:21:06.510288 containerd[1489]: time="2025-02-13T19:21:06.510187494Z" level=info msg="shim disconnected" id=2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37 namespace=k8s.io Feb 13 19:21:06.510288 containerd[1489]: time="2025-02-13T19:21:06.510276821Z" level=warning msg="cleaning up after shim disconnected" id=2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37 namespace=k8s.io Feb 13 19:21:06.510288 containerd[1489]: time="2025-02-13T19:21:06.510291730Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:21:06.521356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994-rootfs.mount: Deactivated successfully. 
Feb 13 19:21:06.529175 containerd[1489]: time="2025-02-13T19:21:06.528660909Z" level=info msg="shim disconnected" id=38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994 namespace=k8s.io Feb 13 19:21:06.529646 containerd[1489]: time="2025-02-13T19:21:06.529474650Z" level=warning msg="cleaning up after shim disconnected" id=38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994 namespace=k8s.io Feb 13 19:21:06.529646 containerd[1489]: time="2025-02-13T19:21:06.529536438Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:21:06.550012 containerd[1489]: time="2025-02-13T19:21:06.549799159Z" level=info msg="StopContainer for \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\" returns successfully" Feb 13 19:21:06.550801 containerd[1489]: time="2025-02-13T19:21:06.550741384Z" level=info msg="StopPodSandbox for \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\"" Feb 13 19:21:06.551179 containerd[1489]: time="2025-02-13T19:21:06.551029591Z" level=info msg="Container to stop \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:21:06.558067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e-shm.mount: Deactivated successfully. Feb 13 19:21:06.567528 containerd[1489]: time="2025-02-13T19:21:06.567446418Z" level=warning msg="cleanup warnings time=\"2025-02-13T19:21:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 19:21:06.571741 systemd[1]: cri-containerd-87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e.scope: Deactivated successfully. 
Feb 13 19:21:06.572843 containerd[1489]: time="2025-02-13T19:21:06.572777762Z" level=info msg="StopContainer for \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\" returns successfully" Feb 13 19:21:06.573786 containerd[1489]: time="2025-02-13T19:21:06.573753305Z" level=info msg="StopPodSandbox for \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\"" Feb 13 19:21:06.574458 containerd[1489]: time="2025-02-13T19:21:06.574266952Z" level=info msg="Container to stop \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:21:06.574735 containerd[1489]: time="2025-02-13T19:21:06.574665047Z" level=info msg="Container to stop \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:21:06.574946 containerd[1489]: time="2025-02-13T19:21:06.574920767Z" level=info msg="Container to stop \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:21:06.575199 containerd[1489]: time="2025-02-13T19:21:06.575174482Z" level=info msg="Container to stop \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:21:06.575425 containerd[1489]: time="2025-02-13T19:21:06.575401010Z" level=info msg="Container to stop \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 19:21:06.580820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080-shm.mount: Deactivated successfully. Feb 13 19:21:06.590846 systemd[1]: cri-containerd-dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080.scope: Deactivated successfully. 
Feb 13 19:21:06.642291 containerd[1489]: time="2025-02-13T19:21:06.642119294Z" level=info msg="shim disconnected" id=87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e namespace=k8s.io Feb 13 19:21:06.643220 containerd[1489]: time="2025-02-13T19:21:06.643033985Z" level=warning msg="cleaning up after shim disconnected" id=87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e namespace=k8s.io Feb 13 19:21:06.643524 containerd[1489]: time="2025-02-13T19:21:06.642961404Z" level=info msg="shim disconnected" id=dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080 namespace=k8s.io Feb 13 19:21:06.643651 containerd[1489]: time="2025-02-13T19:21:06.643631089Z" level=warning msg="cleaning up after shim disconnected" id=dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080 namespace=k8s.io Feb 13 19:21:06.643726 containerd[1489]: time="2025-02-13T19:21:06.643711597Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:21:06.644094 containerd[1489]: time="2025-02-13T19:21:06.644067792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:21:06.687624 containerd[1489]: time="2025-02-13T19:21:06.687573510Z" level=info msg="TearDown network for sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" successfully" Feb 13 19:21:06.687624 containerd[1489]: time="2025-02-13T19:21:06.687618792Z" level=info msg="StopPodSandbox for \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" returns successfully" Feb 13 19:21:06.690729 containerd[1489]: time="2025-02-13T19:21:06.690679532Z" level=info msg="TearDown network for sandbox \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\" successfully" Feb 13 19:21:06.690949 containerd[1489]: time="2025-02-13T19:21:06.690920760Z" level=info msg="StopPodSandbox for \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\" returns successfully" Feb 13 19:21:06.832742 kubelet[2632]: I0213 19:21:06.832674 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-lib-modules\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.832742 kubelet[2632]: I0213 19:21:06.832746 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-host-proc-sys-net\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833635 kubelet[2632]: I0213 19:21:06.832779 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cni-path\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833635 kubelet[2632]: I0213 19:21:06.832828 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8264ec80-d112-4d33-adb6-113c4e34b0b7-hubble-tls\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833635 kubelet[2632]: I0213 19:21:06.832903 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/185e0745-ec44-435f-88b2-242986d1231b-cilium-config-path\") pod \"185e0745-ec44-435f-88b2-242986d1231b\" (UID: \"185e0745-ec44-435f-88b2-242986d1231b\") " Feb 13 19:21:06.833635 kubelet[2632]: I0213 19:21:06.832934 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-run\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833635 kubelet[2632]: I0213 19:21:06.832968 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8whvd\" (UniqueName: \"kubernetes.io/projected/8264ec80-d112-4d33-adb6-113c4e34b0b7-kube-api-access-8whvd\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833635 kubelet[2632]: I0213 19:21:06.832995 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8264ec80-d112-4d33-adb6-113c4e34b0b7-clustermesh-secrets\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833993 kubelet[2632]: I0213 19:21:06.833018 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-hostproc\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833993 kubelet[2632]: I0213 19:21:06.833063 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-config-path\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833993 kubelet[2632]: I0213 19:21:06.833088 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-xtables-lock\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833993 kubelet[2632]: I0213 19:21:06.833113 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-etc-cni-netd\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833993 kubelet[2632]: I0213 19:21:06.833227 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-cgroup\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.833993 kubelet[2632]: I0213 19:21:06.833294 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7cp9\" (UniqueName: \"kubernetes.io/projected/185e0745-ec44-435f-88b2-242986d1231b-kube-api-access-b7cp9\") pod \"185e0745-ec44-435f-88b2-242986d1231b\" (UID: \"185e0745-ec44-435f-88b2-242986d1231b\") " Feb 13 19:21:06.835190 kubelet[2632]: I0213 19:21:06.833359 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-host-proc-sys-kernel\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.835190 kubelet[2632]: I0213 19:21:06.833394 2632 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-bpf-maps\") pod \"8264ec80-d112-4d33-adb6-113c4e34b0b7\" (UID: \"8264ec80-d112-4d33-adb6-113c4e34b0b7\") " Feb 13 19:21:06.835190 kubelet[2632]: I0213 19:21:06.833571 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.835190 kubelet[2632]: I0213 19:21:06.833640 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.835190 kubelet[2632]: I0213 19:21:06.833673 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.835605 kubelet[2632]: I0213 19:21:06.833758 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.835605 kubelet[2632]: I0213 19:21:06.833808 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.835761 kubelet[2632]: I0213 19:21:06.835702 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.835848 kubelet[2632]: I0213 19:21:06.835759 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.835848 kubelet[2632]: I0213 19:21:06.835786 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.840015 kubelet[2632]: I0213 19:21:06.839966 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.843304 kubelet[2632]: I0213 19:21:06.840392 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 19:21:06.846305 kubelet[2632]: I0213 19:21:06.846256 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/185e0745-ec44-435f-88b2-242986d1231b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "185e0745-ec44-435f-88b2-242986d1231b" (UID: "185e0745-ec44-435f-88b2-242986d1231b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:21:06.846453 kubelet[2632]: I0213 19:21:06.846401 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185e0745-ec44-435f-88b2-242986d1231b-kube-api-access-b7cp9" (OuterVolumeSpecName: "kube-api-access-b7cp9") pod "185e0745-ec44-435f-88b2-242986d1231b" (UID: "185e0745-ec44-435f-88b2-242986d1231b"). InnerVolumeSpecName "kube-api-access-b7cp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:21:06.846517 kubelet[2632]: I0213 19:21:06.846486 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8264ec80-d112-4d33-adb6-113c4e34b0b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:21:06.848416 kubelet[2632]: I0213 19:21:06.848326 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 19:21:06.848718 kubelet[2632]: I0213 19:21:06.848694 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8264ec80-d112-4d33-adb6-113c4e34b0b7-kube-api-access-8whvd" (OuterVolumeSpecName: "kube-api-access-8whvd") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). 
InnerVolumeSpecName "kube-api-access-8whvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 19:21:06.853530 kubelet[2632]: I0213 19:21:06.853482 2632 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8264ec80-d112-4d33-adb6-113c4e34b0b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8264ec80-d112-4d33-adb6-113c4e34b0b7" (UID: "8264ec80-d112-4d33-adb6-113c4e34b0b7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 19:21:06.934482 kubelet[2632]: I0213 19:21:06.934270 2632 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-config-path\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.934482 kubelet[2632]: I0213 19:21:06.934322 2632 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-xtables-lock\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.934482 kubelet[2632]: I0213 19:21:06.934338 2632 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-etc-cni-netd\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.934482 kubelet[2632]: I0213 19:21:06.934354 2632 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-hostproc\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.934482 kubelet[2632]: I0213 19:21:06.934372 2632 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-host-proc-sys-kernel\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.934482 kubelet[2632]: I0213 19:21:06.934405 2632 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-bpf-maps\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.934482 kubelet[2632]: I0213 19:21:06.934421 2632 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-cgroup\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935072 kubelet[2632]: I0213 19:21:06.934436 2632 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b7cp9\" (UniqueName: \"kubernetes.io/projected/185e0745-ec44-435f-88b2-242986d1231b-kube-api-access-b7cp9\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935072 kubelet[2632]: I0213 19:21:06.934455 2632 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-lib-modules\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935072 kubelet[2632]: I0213 19:21:06.934470 2632 reconciler_common.go:288] "Volume detached for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-host-proc-sys-net\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935072 kubelet[2632]: I0213 19:21:06.934484 2632 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cni-path\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935072 kubelet[2632]: I0213 19:21:06.934502 2632 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8264ec80-d112-4d33-adb6-113c4e34b0b7-hubble-tls\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935072 kubelet[2632]: I0213 19:21:06.934533 2632 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8264ec80-d112-4d33-adb6-113c4e34b0b7-cilium-run\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935072 kubelet[2632]: I0213 19:21:06.934551 2632 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/185e0745-ec44-435f-88b2-242986d1231b-cilium-config-path\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935420 kubelet[2632]: I0213 19:21:06.934567 2632 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8whvd\" (UniqueName: \"kubernetes.io/projected/8264ec80-d112-4d33-adb6-113c4e34b0b7-kube-api-access-8whvd\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.935420 kubelet[2632]: I0213 19:21:06.934584 2632 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8264ec80-d112-4d33-adb6-113c4e34b0b7-clustermesh-secrets\") on node \"ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 19:21:06.973726 kubelet[2632]: I0213 19:21:06.973603 2632 scope.go:117] "RemoveContainer" containerID="38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994" Feb 13 19:21:06.978687 containerd[1489]: time="2025-02-13T19:21:06.978372739Z" level=info msg="RemoveContainer for \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\"" Feb 13 19:21:06.989297 containerd[1489]: time="2025-02-13T19:21:06.989245749Z" level=info msg="RemoveContainer for \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\" returns successfully" Feb 13 19:21:06.990241 kubelet[2632]: I0213 19:21:06.990103 2632 scope.go:117] "RemoveContainer" containerID="885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36" Feb 13 19:21:06.992651 systemd[1]: Removed slice kubepods-besteffort-pod185e0745_ec44_435f_88b2_242986d1231b.slice - libcontainer container kubepods-besteffort-pod185e0745_ec44_435f_88b2_242986d1231b.slice. Feb 13 19:21:06.993531 containerd[1489]: time="2025-02-13T19:21:06.992636523Z" level=info msg="RemoveContainer for \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\"" Feb 13 19:21:06.995129 systemd[1]: Removed slice kubepods-burstable-pod8264ec80_d112_4d33_adb6_113c4e34b0b7.slice - libcontainer container kubepods-burstable-pod8264ec80_d112_4d33_adb6_113c4e34b0b7.slice. 
Feb 13 19:21:06.995343 systemd[1]: kubepods-burstable-pod8264ec80_d112_4d33_adb6_113c4e34b0b7.slice: Consumed 10.054s CPU time, 126.7M memory peak, 136K read from disk, 13.3M written to disk. Feb 13 19:21:06.999927 containerd[1489]: time="2025-02-13T19:21:06.999394625Z" level=info msg="RemoveContainer for \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\" returns successfully" Feb 13 19:21:07.000199 kubelet[2632]: I0213 19:21:06.999811 2632 scope.go:117] "RemoveContainer" containerID="92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f" Feb 13 19:21:07.002681 containerd[1489]: time="2025-02-13T19:21:07.002638222Z" level=info msg="RemoveContainer for \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\"" Feb 13 19:21:07.007499 containerd[1489]: time="2025-02-13T19:21:07.007448273Z" level=info msg="RemoveContainer for \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\" returns successfully" Feb 13 19:21:07.007941 kubelet[2632]: I0213 19:21:07.007762 2632 scope.go:117] "RemoveContainer" containerID="86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea" Feb 13 19:21:07.012229 containerd[1489]: time="2025-02-13T19:21:07.011348897Z" level=info msg="RemoveContainer for \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\"" Feb 13 19:21:07.018096 containerd[1489]: time="2025-02-13T19:21:07.017839045Z" level=info msg="RemoveContainer for \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\" returns successfully" Feb 13 19:21:07.019631 kubelet[2632]: I0213 19:21:07.018572 2632 scope.go:117] "RemoveContainer" containerID="8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28" Feb 13 19:21:07.022570 containerd[1489]: time="2025-02-13T19:21:07.022508847Z" level=info msg="RemoveContainer for \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\"" Feb 13 19:21:07.030986 containerd[1489]: time="2025-02-13T19:21:07.030759533Z" level=info msg="RemoveContainer for \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\" returns successfully" Feb 13 19:21:07.033558 kubelet[2632]: I0213 19:21:07.033311 2632 scope.go:117] "RemoveContainer" containerID="38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994" Feb 13 19:21:07.036231 containerd[1489]: time="2025-02-13T19:21:07.033840377Z" level=error msg="ContainerStatus for \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\": not found" Feb 13 19:21:07.036231 containerd[1489]: time="2025-02-13T19:21:07.034803471Z" level=error msg="ContainerStatus for \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\": not found" Feb 13 19:21:07.036510 kubelet[2632]: E0213 19:21:07.034163 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\": not found" containerID="38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994" Feb 13 19:21:07.036510 kubelet[2632]: I0213 19:21:07.034213 2632 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994"} err="failed to get container status \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\": rpc error: code = NotFound desc = an error occurred when try to find container \"38a9ac5dfd264280342783f982bf6d21abf233d084662a046f3a9c962d25c994\": not found" Feb 13 19:21:07.036510 kubelet[2632]: I0213 19:21:07.034326 2632 scope.go:117] "RemoveContainer" containerID="885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36" Feb 13 19:21:07.036510 kubelet[2632]: E0213 19:21:07.035861 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\": not found" containerID="885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36" Feb 13 19:21:07.036510 kubelet[2632]: I0213 19:21:07.035907 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36"} err="failed to get container status \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\": rpc error: code = NotFound desc = an error occurred when try to find container \"885ef190e891430c5b3bb077e59b80eb0fdfd438084f74c2b00fd90efa431e36\": not found" Feb 13 19:21:07.036510 kubelet[2632]: I0213 19:21:07.035942 2632 scope.go:117] "RemoveContainer" containerID="92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f" Feb 13 19:21:07.037136 containerd[1489]: time="2025-02-13T19:21:07.036910445Z" level=error msg="ContainerStatus for \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\": not found" Feb 13 19:21:07.037838 kubelet[2632]: E0213 19:21:07.037465 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\": not found" containerID="92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f" Feb 13 19:21:07.037838 kubelet[2632]: I0213 19:21:07.037505 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f"} err="failed to get container status \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\": rpc error: code = NotFound desc = an error occurred when try to find container \"92b3e84484c6ba781457d8e01dac7c0b2fff64d1c54bc37d05c8e3154a55216f\": not found" Feb 13 19:21:07.037838 kubelet[2632]: I0213 19:21:07.037664 2632 scope.go:117] "RemoveContainer" containerID="86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea" Feb 13 19:21:07.039110 containerd[1489]: time="2025-02-13T19:21:07.038848149Z" level=error msg="ContainerStatus for \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\": not found" Feb 13 19:21:07.039467 kubelet[2632]: E0213 19:21:07.039225 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\": not found" containerID="86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea" Feb 13 19:21:07.039467 kubelet[2632]: I0213 19:21:07.039314 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea"} err="failed to get container status \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"86b05bb37e3d83750b507d69c26f1334af6f4c7ad6c0660d424d4483cb6af3ea\": not found" Feb 13 19:21:07.039467 kubelet[2632]: I0213 19:21:07.039347 2632 scope.go:117] "RemoveContainer" containerID="8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28" Feb 13 19:21:07.040099 containerd[1489]: time="2025-02-13T19:21:07.039858063Z" level=error msg="ContainerStatus for \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\": not found" Feb 13 19:21:07.040464 kubelet[2632]: E0213 19:21:07.040298 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\": not found" containerID="8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28" Feb 13 19:21:07.040464 kubelet[2632]: I0213 19:21:07.040339 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28"} err="failed to get container status \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\": rpc error: code = NotFound desc = an error occurred when try to find container \"8806cf40c92562bc6baa4af404ed9469a81042d2cb431059f64f3d1642bfdf28\": not found" Feb 13 19:21:07.040464 kubelet[2632]: I0213 19:21:07.040367 2632 scope.go:117] "RemoveContainer" containerID="2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37" Feb 13 19:21:07.042504 containerd[1489]: time="2025-02-13T19:21:07.042435073Z" level=info msg="RemoveContainer for \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\"" Feb 13 19:21:07.048401 containerd[1489]: time="2025-02-13T19:21:07.048347353Z" level=info msg="RemoveContainer for \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\" returns successfully" Feb 13 19:21:07.048960 kubelet[2632]: I0213 19:21:07.048824 2632 scope.go:117] "RemoveContainer" containerID="2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37" Feb 13 19:21:07.049332 containerd[1489]: time="2025-02-13T19:21:07.049246023Z" level=error msg="ContainerStatus for \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\": not found" Feb 13 19:21:07.049510 kubelet[2632]: E0213 19:21:07.049446 2632 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\": not found" containerID="2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37" Feb 13 
19:21:07.049639 kubelet[2632]: I0213 19:21:07.049497 2632 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37"} err="failed to get container status \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\": rpc error: code = NotFound desc = an error occurred when try to find container \"2583288ba978ff028ccce645e9a17bc320d6d96e19148e9f07073139bb79cd37\": not found" Feb 13 19:21:07.391586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e-rootfs.mount: Deactivated successfully. Feb 13 19:21:07.391756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080-rootfs.mount: Deactivated successfully. Feb 13 19:21:07.391866 systemd[1]: var-lib-kubelet-pods-185e0745\x2dec44\x2d435f\x2d88b2\x2d242986d1231b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db7cp9.mount: Deactivated successfully. Feb 13 19:21:07.391983 systemd[1]: var-lib-kubelet-pods-8264ec80\x2dd112\x2d4d33\x2dadb6\x2d113c4e34b0b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8whvd.mount: Deactivated successfully. Feb 13 19:21:07.392127 systemd[1]: var-lib-kubelet-pods-8264ec80\x2dd112\x2d4d33\x2dadb6\x2d113c4e34b0b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 19:21:07.392235 systemd[1]: var-lib-kubelet-pods-8264ec80\x2dd112\x2d4d33\x2dadb6\x2d113c4e34b0b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:21:08.348286 sshd[4265]: Connection closed by 147.75.109.163 port 48568 Feb 13 19:21:08.349258 sshd-session[4263]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:08.354289 systemd[1]: sshd@23-10.128.0.48:22-147.75.109.163:48568.service: Deactivated successfully. Feb 13 19:21:08.357740 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:21:08.360308 systemd-logind[1466]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:21:08.362114 systemd-logind[1466]: Removed session 24. Feb 13 19:21:08.409505 systemd[1]: Started sshd@24-10.128.0.48:22-147.75.109.163:48574.service - OpenSSH per-connection server daemon (147.75.109.163:48574). Feb 13 19:21:08.551882 kubelet[2632]: I0213 19:21:08.551812 2632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185e0745-ec44-435f-88b2-242986d1231b" path="/var/lib/kubelet/pods/185e0745-ec44-435f-88b2-242986d1231b/volumes" Feb 13 19:21:08.552712 kubelet[2632]: I0213 19:21:08.552626 2632 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8264ec80-d112-4d33-adb6-113c4e34b0b7" path="/var/lib/kubelet/pods/8264ec80-d112-4d33-adb6-113c4e34b0b7/volumes" Feb 13 19:21:08.702609 kubelet[2632]: E0213 19:21:08.702445 2632 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:21:08.706800 sshd[4426]: Accepted publickey for core from 147.75.109.163 port 48574 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:21:08.708641 sshd-session[4426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:08.715580 systemd-logind[1466]: New session 25 of user core. Feb 13 19:21:08.722316 systemd[1]: Started session-25.scope - Session 25 of User core. 
Feb 13 19:21:09.247121 ntpd[1448]: Deleting interface #11 lxc_health, fe80::5484:e0ff:fec8:5f88%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Feb 13 19:21:09.247725 ntpd[1448]: 13 Feb 19:21:09 ntpd[1448]: Deleting interface #11 lxc_health, fe80::5484:e0ff:fec8:5f88%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Feb 13 19:21:10.067084 sshd[4428]: Connection closed by 147.75.109.163 port 48574 Feb 13 19:21:10.068023 sshd-session[4426]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:10.079079 kubelet[2632]: E0213 19:21:10.076288 2632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="185e0745-ec44-435f-88b2-242986d1231b" containerName="cilium-operator" Feb 13 19:21:10.079079 kubelet[2632]: E0213 19:21:10.076331 2632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8264ec80-d112-4d33-adb6-113c4e34b0b7" containerName="clean-cilium-state" Feb 13 19:21:10.079079 kubelet[2632]: E0213 19:21:10.076344 2632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8264ec80-d112-4d33-adb6-113c4e34b0b7" containerName="mount-cgroup" Feb 13 19:21:10.079079 kubelet[2632]: E0213 19:21:10.076354 2632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8264ec80-d112-4d33-adb6-113c4e34b0b7" containerName="apply-sysctl-overwrites" Feb 13 19:21:10.079079 kubelet[2632]: E0213 19:21:10.076363 2632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8264ec80-d112-4d33-adb6-113c4e34b0b7" containerName="mount-bpf-fs" Feb 13 19:21:10.079079 kubelet[2632]: E0213 19:21:10.076373 2632 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8264ec80-d112-4d33-adb6-113c4e34b0b7" containerName="cilium-agent" Feb 13 19:21:10.079079 kubelet[2632]: I0213 19:21:10.076413 2632 memory_manager.go:354] "RemoveStaleState removing state" podUID="8264ec80-d112-4d33-adb6-113c4e34b0b7" containerName="cilium-agent" Feb 13 19:21:10.079079 kubelet[2632]: I0213 19:21:10.076426 2632 memory_manager.go:354] "RemoveStaleState removing state" podUID="185e0745-ec44-435f-88b2-242986d1231b" containerName="cilium-operator" Feb 13 19:21:10.083699 systemd[1]: sshd@24-10.128.0.48:22-147.75.109.163:48574.service: Deactivated successfully. Feb 13 19:21:10.083769 systemd-logind[1466]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:21:10.091355 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:21:10.092659 systemd[1]: session-25.scope: Consumed 1.092s CPU time, 23.9M memory peak. Feb 13 19:21:10.105127 systemd-logind[1466]: Removed session 25. 
Feb 13 19:21:10.113368 kubelet[2632]: W0213 19:21:10.111744 2632 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal' and this object Feb 13 19:21:10.113368 kubelet[2632]: E0213 19:21:10.111847 2632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal' and this object" logger="UnhandledError" Feb 13 19:21:10.113368 kubelet[2632]: W0213 19:21:10.111942 2632 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal' and this object Feb 13 19:21:10.113368 kubelet[2632]: E0213 19:21:10.111962 2632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal' and this object" logger="UnhandledError" Feb 13 19:21:10.113893 kubelet[2632]: W0213 19:21:10.112036 2632 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal' and this object Feb 13 19:21:10.113893 kubelet[2632]: E0213 19:21:10.112093 2632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal' and this object" logger="UnhandledError" Feb 13 19:21:10.113893 kubelet[2632]: W0213 19:21:10.112156 2632 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal' and this object Feb 13 19:21:10.113893 
kubelet[2632]: E0213 19:21:10.112175 2632 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal' and this object" logger="UnhandledError" Feb 13 19:21:10.139108 systemd[1]: Created slice kubepods-burstable-pod55a15bdc_d129_48bd_b3c2_4eabe802c83a.slice - libcontainer container kubepods-burstable-pod55a15bdc_d129_48bd_b3c2_4eabe802c83a.slice. Feb 13 19:21:10.146977 systemd[1]: Started sshd@25-10.128.0.48:22-147.75.109.163:40130.service - OpenSSH per-connection server daemon (147.75.109.163:40130). Feb 13 19:21:10.156966 kubelet[2632]: I0213 19:21:10.156296 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-cni-path\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.156966 kubelet[2632]: I0213 19:21:10.156371 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-lib-modules\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.156966 kubelet[2632]: I0213 19:21:10.156403 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/55a15bdc-d129-48bd-b3c2-4eabe802c83a-hubble-tls\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.156966 kubelet[2632]: I0213 19:21:10.156460 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-cilium-run\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.156966 kubelet[2632]: I0213 19:21:10.156490 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-xtables-lock\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.156966 kubelet[2632]: I0213 19:21:10.156550 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/55a15bdc-d129-48bd-b3c2-4eabe802c83a-cilium-config-path\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.157449 kubelet[2632]: I0213 19:21:10.157097 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/55a15bdc-d129-48bd-b3c2-4eabe802c83a-clustermesh-secrets\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.157449 kubelet[2632]: I0213 
19:21:10.157166 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-bpf-maps\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.157449 kubelet[2632]: I0213 19:21:10.157206 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-hostproc\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.157449 kubelet[2632]: I0213 19:21:10.157256 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92c5l\" (UniqueName: \"kubernetes.io/projected/55a15bdc-d129-48bd-b3c2-4eabe802c83a-kube-api-access-92c5l\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.157449 kubelet[2632]: I0213 19:21:10.157288 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-cilium-cgroup\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.157449 kubelet[2632]: I0213 19:21:10.157333 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/55a15bdc-d129-48bd-b3c2-4eabe802c83a-cilium-ipsec-secrets\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.157759 kubelet[2632]: I0213 19:21:10.157360 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-host-proc-sys-net\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.161455 kubelet[2632]: I0213 19:21:10.159099 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-host-proc-sys-kernel\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.161455 kubelet[2632]: I0213 19:21:10.159179 2632 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/55a15bdc-d129-48bd-b3c2-4eabe802c83a-etc-cni-netd\") pod \"cilium-mfl4d\" (UID: \"55a15bdc-d129-48bd-b3c2-4eabe802c83a\") " pod="kube-system/cilium-mfl4d" Feb 13 19:21:10.492384 sshd[4439]: Accepted publickey for core from 147.75.109.163 port 40130 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:21:10.494289 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:10.502175 systemd-logind[1466]: New session 26 of user core. Feb 13 19:21:10.509400 systemd[1]: Started session-26.scope - Session 26 of User core. 
Feb 13 19:21:10.550537 kubelet[2632]: E0213 19:21:10.548534 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mg7xx" podUID="f562332d-f54e-4875-b6dd-a75973f3a384" Feb 13 19:21:10.704085 sshd[4442]: Connection closed by 147.75.109.163 port 40130 Feb 13 19:21:10.705223 sshd-session[4439]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:10.710465 systemd[1]: sshd@25-10.128.0.48:22-147.75.109.163:40130.service: Deactivated successfully. Feb 13 19:21:10.713733 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:21:10.716020 systemd-logind[1466]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:21:10.717814 systemd-logind[1466]: Removed session 26. Feb 13 19:21:10.766904 systemd[1]: Started sshd@26-10.128.0.48:22-147.75.109.163:40134.service - OpenSSH per-connection server daemon (147.75.109.163:40134). Feb 13 19:21:10.962001 kubelet[2632]: I0213 19:21:10.961915 2632 setters.go:600] "Node became not ready" node="ci-4230-0-1-bf2a7060b6c6e46d9656.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:21:10Z","lastTransitionTime":"2025-02-13T19:21:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:21:11.068842 sshd[4449]: Accepted publickey for core from 147.75.109.163 port 40134 ssh2: RSA SHA256:zltUTJDzCGzh+4Ob4FyBZy7Qu5iYYFymmuRw6QDsEkw Feb 13 19:21:11.070960 sshd-session[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:21:11.078362 systemd-logind[1466]: New session 27 of user core. Feb 13 19:21:11.086335 systemd[1]: Started session-27.scope - Session 27 of User core. Feb 13 19:21:11.162887 update_engine[1468]: I20250213 19:21:11.162684 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:21:11.163503 update_engine[1468]: I20250213 19:21:11.163221 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:21:11.163688 update_engine[1468]: I20250213 19:21:11.163628 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:21:11.176126 update_engine[1468]: E20250213 19:21:11.176000 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:21:11.176313 update_engine[1468]: I20250213 19:21:11.176182 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 13 19:21:11.262340 kubelet[2632]: E0213 19:21:11.260992 2632 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 13 19:21:11.262340 kubelet[2632]: E0213 19:21:11.261089 2632 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-mfl4d: failed to sync secret cache: timed out waiting for the condition Feb 13 19:21:11.262340 kubelet[2632]: E0213 19:21:11.261206 2632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/55a15bdc-d129-48bd-b3c2-4eabe802c83a-hubble-tls podName:55a15bdc-d129-48bd-b3c2-4eabe802c83a nodeName:}" failed. No retries permitted until 2025-02-13 19:21:11.761173369 +0000 UTC m=+113.388566812 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/55a15bdc-d129-48bd-b3c2-4eabe802c83a-hubble-tls") pod "cilium-mfl4d" (UID: "55a15bdc-d129-48bd-b3c2-4eabe802c83a") : failed to sync secret cache: timed out waiting for the condition Feb 13 19:21:11.264418 kubelet[2632]: E0213 19:21:11.263739 2632 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 19:21:11.264418 kubelet[2632]: E0213 19:21:11.264356 2632 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55a15bdc-d129-48bd-b3c2-4eabe802c83a-cilium-config-path podName:55a15bdc-d129-48bd-b3c2-4eabe802c83a nodeName:}" failed. No retries permitted until 2025-02-13 19:21:11.763946957 +0000 UTC m=+113.391340399 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/55a15bdc-d129-48bd-b3c2-4eabe802c83a-cilium-config-path") pod "cilium-mfl4d" (UID: "55a15bdc-d129-48bd-b3c2-4eabe802c83a") : failed to sync configmap cache: timed out waiting for the condition Feb 13 19:21:11.954205 containerd[1489]: time="2025-02-13T19:21:11.954126306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfl4d,Uid:55a15bdc-d129-48bd-b3c2-4eabe802c83a,Namespace:kube-system,Attempt:0,}" Feb 13 19:21:11.988326 containerd[1489]: time="2025-02-13T19:21:11.988115818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:21:11.988326 containerd[1489]: time="2025-02-13T19:21:11.988202313Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:21:11.988326 containerd[1489]: time="2025-02-13T19:21:11.988220472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:11.991102 containerd[1489]: time="2025-02-13T19:21:11.988355462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:21:12.040431 systemd[1]: Started cri-containerd-3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1.scope - libcontainer container 3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1. 
Feb 13 19:21:12.084199 containerd[1489]: time="2025-02-13T19:21:12.084102428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfl4d,Uid:55a15bdc-d129-48bd-b3c2-4eabe802c83a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\"" Feb 13 19:21:12.089235 containerd[1489]: time="2025-02-13T19:21:12.089184482Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:21:12.108113 containerd[1489]: time="2025-02-13T19:21:12.107964863Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d97f11ae0bdc8c706347649a463289f4391b1c8c2d2861dcd133c1cb5dd9e9a\"" Feb 13 19:21:12.110073 containerd[1489]: time="2025-02-13T19:21:12.108819975Z" level=info msg="StartContainer for \"3d97f11ae0bdc8c706347649a463289f4391b1c8c2d2861dcd133c1cb5dd9e9a\"" Feb 13 19:21:12.149343 systemd[1]: Started cri-containerd-3d97f11ae0bdc8c706347649a463289f4391b1c8c2d2861dcd133c1cb5dd9e9a.scope - libcontainer container 3d97f11ae0bdc8c706347649a463289f4391b1c8c2d2861dcd133c1cb5dd9e9a. Feb 13 19:21:12.189589 containerd[1489]: time="2025-02-13T19:21:12.189510736Z" level=info msg="StartContainer for \"3d97f11ae0bdc8c706347649a463289f4391b1c8c2d2861dcd133c1cb5dd9e9a\" returns successfully" Feb 13 19:21:12.199702 systemd[1]: cri-containerd-3d97f11ae0bdc8c706347649a463289f4391b1c8c2d2861dcd133c1cb5dd9e9a.scope: Deactivated successfully. Feb 13 19:21:12.242111 containerd[1489]: time="2025-02-13T19:21:12.241959436Z" level=info msg="shim disconnected" id=3d97f11ae0bdc8c706347649a463289f4391b1c8c2d2861dcd133c1cb5dd9e9a namespace=k8s.io Feb 13 19:21:12.242111 containerd[1489]: time="2025-02-13T19:21:12.242037491Z" level=warning msg="cleaning up after shim disconnected" id=3d97f11ae0bdc8c706347649a463289f4391b1c8c2d2861dcd133c1cb5dd9e9a namespace=k8s.io Feb 13 19:21:12.242111 containerd[1489]: time="2025-02-13T19:21:12.242076500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:21:12.549569 kubelet[2632]: E0213 19:21:12.548701 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mg7xx" podUID="f562332d-f54e-4875-b6dd-a75973f3a384" Feb 13 19:21:12.781575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount555152478.mount: Deactivated successfully. 
Feb 13 19:21:13.013204 containerd[1489]: time="2025-02-13T19:21:13.013097232Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:21:13.036691 containerd[1489]: time="2025-02-13T19:21:13.036622921Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962\"" Feb 13 19:21:13.037927 containerd[1489]: time="2025-02-13T19:21:13.037616861Z" level=info msg="StartContainer for \"62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962\"" Feb 13 19:21:13.100434 systemd[1]: Started cri-containerd-62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962.scope - libcontainer container 62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962. Feb 13 19:21:13.141271 containerd[1489]: time="2025-02-13T19:21:13.140703139Z" level=info msg="StartContainer for \"62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962\" returns successfully" Feb 13 19:21:13.145371 systemd[1]: cri-containerd-62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962.scope: Deactivated successfully. Feb 13 19:21:13.181452 containerd[1489]: time="2025-02-13T19:21:13.181371914Z" level=info msg="shim disconnected" id=62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962 namespace=k8s.io Feb 13 19:21:13.181452 containerd[1489]: time="2025-02-13T19:21:13.181446913Z" level=warning msg="cleaning up after shim disconnected" id=62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962 namespace=k8s.io Feb 13 19:21:13.181452 containerd[1489]: time="2025-02-13T19:21:13.181460327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:21:13.704768 kubelet[2632]: E0213 19:21:13.704696 2632 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:21:13.781423 systemd[1]: run-containerd-runc-k8s.io-62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962-runc.huRUw3.mount: Deactivated successfully. Feb 13 19:21:13.781602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62aa446392133faa623922e655067a09dc068aef539d22eca52db17efb7d1962-rootfs.mount: Deactivated successfully. Feb 13 19:21:14.016680 containerd[1489]: time="2025-02-13T19:21:14.016395596Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 19:21:14.043822 containerd[1489]: time="2025-02-13T19:21:14.042990775Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639\"" Feb 13 19:21:14.046285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436155978.mount: Deactivated successfully. 
Feb 13 19:21:14.046604 containerd[1489]: time="2025-02-13T19:21:14.046565213Z" level=info msg="StartContainer for \"e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639\"" Feb 13 19:21:14.108318 systemd[1]: Started cri-containerd-e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639.scope - libcontainer container e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639. Feb 13 19:21:14.154808 containerd[1489]: time="2025-02-13T19:21:14.154741594Z" level=info msg="StartContainer for \"e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639\" returns successfully" Feb 13 19:21:14.155574 systemd[1]: cri-containerd-e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639.scope: Deactivated successfully. Feb 13 19:21:14.192214 containerd[1489]: time="2025-02-13T19:21:14.192131231Z" level=info msg="shim disconnected" id=e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639 namespace=k8s.io Feb 13 19:21:14.192505 containerd[1489]: time="2025-02-13T19:21:14.192261465Z" level=warning msg="cleaning up after shim disconnected" id=e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639 namespace=k8s.io Feb 13 19:21:14.192505 containerd[1489]: time="2025-02-13T19:21:14.192295515Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:21:14.550010 kubelet[2632]: E0213 19:21:14.548550 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mg7xx" podUID="f562332d-f54e-4875-b6dd-a75973f3a384" Feb 13 19:21:14.781355 systemd[1]: run-containerd-runc-k8s.io-e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639-runc.iYxBWZ.mount: Deactivated successfully. Feb 13 19:21:14.781530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e87c5c5f4a83ff29821a1db737d94705e0ac2afbf1e95b298daec5fbbdb85639-rootfs.mount: Deactivated successfully. Feb 13 19:21:15.022680 containerd[1489]: time="2025-02-13T19:21:15.022626329Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 19:21:15.048519 containerd[1489]: time="2025-02-13T19:21:15.048345478Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d\"" Feb 13 19:21:15.051416 containerd[1489]: time="2025-02-13T19:21:15.049378752Z" level=info msg="StartContainer for \"95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d\"" Feb 13 19:21:15.114291 systemd[1]: Started cri-containerd-95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d.scope - libcontainer container 95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d. Feb 13 19:21:15.148237 systemd[1]: cri-containerd-95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d.scope: Deactivated successfully. 
Feb 13 19:21:15.152967 containerd[1489]: time="2025-02-13T19:21:15.152795139Z" level=info msg="StartContainer for \"95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d\" returns successfully" Feb 13 19:21:15.184810 containerd[1489]: time="2025-02-13T19:21:15.184720695Z" level=info msg="shim disconnected" id=95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d namespace=k8s.io Feb 13 19:21:15.184810 containerd[1489]: time="2025-02-13T19:21:15.184797834Z" level=warning msg="cleaning up after shim disconnected" id=95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d namespace=k8s.io Feb 13 19:21:15.184810 containerd[1489]: time="2025-02-13T19:21:15.184815474Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:21:15.781617 systemd[1]: run-containerd-runc-k8s.io-95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d-runc.KW7rqP.mount: Deactivated successfully. Feb 13 19:21:15.781792 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95ae0d22abd7caa7cb0e3609ba2d902e3b26fa1d9fb316f66619f6ef176dc62d-rootfs.mount: Deactivated successfully. Feb 13 19:21:16.028475 containerd[1489]: time="2025-02-13T19:21:16.027581303Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 19:21:16.058566 containerd[1489]: time="2025-02-13T19:21:16.058312180Z" level=info msg="CreateContainer within sandbox \"3198957cf4decd53961d7dd7b858d2cf052d82f3477a3608b51e65effe0af0d1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7983143ec382b38d772de8c2dcdadd32c01aa6fd9c61d314f3e639584a3f70f1\"" Feb 13 19:21:16.063036 containerd[1489]: time="2025-02-13T19:21:16.060801483Z" level=info msg="StartContainer for \"7983143ec382b38d772de8c2dcdadd32c01aa6fd9c61d314f3e639584a3f70f1\"" Feb 13 19:21:16.114310 systemd[1]: Started cri-containerd-7983143ec382b38d772de8c2dcdadd32c01aa6fd9c61d314f3e639584a3f70f1.scope - libcontainer container 7983143ec382b38d772de8c2dcdadd32c01aa6fd9c61d314f3e639584a3f70f1. 
Feb 13 19:21:16.154746 containerd[1489]: time="2025-02-13T19:21:16.154167672Z" level=info msg="StartContainer for \"7983143ec382b38d772de8c2dcdadd32c01aa6fd9c61d314f3e639584a3f70f1\" returns successfully" Feb 13 19:21:16.551216 kubelet[2632]: E0213 19:21:16.549902 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mg7xx" podUID="f562332d-f54e-4875-b6dd-a75973f3a384" Feb 13 19:21:16.627099 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 19:21:17.051325 kubelet[2632]: I0213 19:21:17.050880 2632 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mfl4d" podStartSLOduration=7.050838402 podStartE2EDuration="7.050838402s" podCreationTimestamp="2025-02-13 19:21:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:21:17.04966439 +0000 UTC m=+118.677057835" watchObservedRunningTime="2025-02-13 19:21:17.050838402 +0000 UTC m=+118.678231838" Feb 13 19:21:17.547958 kubelet[2632]: E0213 19:21:17.547600 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-8g24w" podUID="a1b16578-9ee5-4500-b773-9525576c5c5f" Feb 13 19:21:18.553075 kubelet[2632]: E0213 19:21:18.551287 2632 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-mg7xx" podUID="f562332d-f54e-4875-b6dd-a75973f3a384" Feb 13 19:21:18.568853 containerd[1489]: time="2025-02-13T19:21:18.568799253Z" level=info msg="StopPodSandbox for \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\"" Feb 13 19:21:18.569465 containerd[1489]: time="2025-02-13T19:21:18.568937598Z" level=info msg="TearDown network for sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" successfully" Feb 13 19:21:18.569465 containerd[1489]: time="2025-02-13T19:21:18.568970083Z" level=info msg="StopPodSandbox for \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" returns successfully" Feb 13 19:21:18.571089 containerd[1489]: time="2025-02-13T19:21:18.569994354Z" level=info msg="RemovePodSandbox for \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\"" Feb 13 19:21:18.571089 containerd[1489]: time="2025-02-13T19:21:18.570067688Z" level=info msg="Forcibly stopping sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\"" Feb 13 19:21:18.571089 containerd[1489]: time="2025-02-13T19:21:18.570209402Z" level=info msg="TearDown network for sandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" successfully" Feb 13 19:21:18.575847 containerd[1489]: time="2025-02-13T19:21:18.575788457Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:21:18.576088 containerd[1489]: time="2025-02-13T19:21:18.575886950Z" level=info msg="RemovePodSandbox \"dbaec7095c75aba62eac9e3f5572c47a54d9f09503be46b0fc4dd86b48510080\" returns successfully" Feb 13 19:21:18.576867 containerd[1489]: time="2025-02-13T19:21:18.576826538Z" level=info msg="StopPodSandbox for \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\"" Feb 13 19:21:18.576977 containerd[1489]: time="2025-02-13T19:21:18.576957410Z" level=info msg="TearDown network for sandbox \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\" successfully" Feb 13 19:21:18.577035 containerd[1489]: time="2025-02-13T19:21:18.576978307Z" level=info msg="StopPodSandbox for \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\" returns successfully" Feb 13 19:21:18.578293 containerd[1489]: time="2025-02-13T19:21:18.578252820Z" level=info msg="RemovePodSandbox for \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\"" Feb 13 19:21:18.578402 containerd[1489]: time="2025-02-13T19:21:18.578302035Z" level=info msg="Forcibly stopping sandbox \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\"" Feb 13 19:21:18.578459 containerd[1489]: time="2025-02-13T19:21:18.578390178Z" level=info msg="TearDown network for sandbox \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\" successfully" Feb 13 19:21:18.585715 containerd[1489]: time="2025-02-13T19:21:18.585588298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:21:18.585889 containerd[1489]: time="2025-02-13T19:21:18.585758613Z" level=info msg="RemovePodSandbox \"87ac10627766f5732270c06d6c53e5051aef0df014142dff1401b01c5ba62c3e\" returns successfully" Feb 13 19:21:19.701007 systemd[1]: run-containerd-runc-k8s.io-7983143ec382b38d772de8c2dcdadd32c01aa6fd9c61d314f3e639584a3f70f1-runc.eZ8sej.mount: Deactivated successfully. Feb 13 19:21:20.028393 systemd-networkd[1388]: lxc_health: Link UP Feb 13 19:21:20.039562 systemd-networkd[1388]: lxc_health: Gained carrier Feb 13 19:21:21.170091 update_engine[1468]: I20250213 19:21:21.167135 1468 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 13 19:21:21.170091 update_engine[1468]: I20250213 19:21:21.167554 1468 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 13 19:21:21.170091 update_engine[1468]: I20250213 19:21:21.167963 1468 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 13 19:21:21.179786 update_engine[1468]: E20250213 19:21:21.179683 1468 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 13 19:21:21.180000 update_engine[1468]: I20250213 19:21:21.179831 1468 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 13 19:21:21.184371 systemd-networkd[1388]: lxc_health: Gained IPv6LL Feb 13 19:21:23.247117 ntpd[1448]: Listen normally on 14 lxc_health [fe80::800e:c4ff:fe57:fb59%14]:123 Feb 13 19:21:23.247902 ntpd[1448]: 13 Feb 19:21:23 ntpd[1448]: Listen normally on 14 lxc_health [fe80::800e:c4ff:fe57:fb59%14]:123 Feb 13 19:21:26.629981 sshd[4453]: Connection closed by 147.75.109.163 port 40134 Feb 13 19:21:26.631266 sshd-session[4449]: pam_unix(sshd:session): session closed for user core Feb 13 19:21:26.637610 systemd[1]: sshd@26-10.128.0.48:22-147.75.109.163:40134.service: Deactivated successfully. 
Feb 13 19:21:26.640601 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:21:26.642031 systemd-logind[1466]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:21:26.643851 systemd-logind[1466]: Removed session 27.