Dec 13 01:25:33.080804 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:25:33.080848 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:25:33.080866 kernel: BIOS-provided physical RAM map: Dec 13 01:25:33.080880 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 01:25:33.080893 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 01:25:33.080907 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 01:25:33.080923 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 01:25:33.080942 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 01:25:33.080956 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Dec 13 01:25:33.080970 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Dec 13 01:25:33.080984 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Dec 13 01:25:33.080999 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Dec 13 01:25:33.081014 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 01:25:33.081030 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 01:25:33.081052 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 01:25:33.081069 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 01:25:33.081084 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 13 01:25:33.081101 kernel: NX (Execute Disable) protection: active Dec 13 01:25:33.081116 kernel: APIC: Static calls initialized Dec 13 01:25:33.081132 kernel: efi: EFI v2.7 by EDK II Dec 13 01:25:33.081148 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Dec 13 01:25:33.081164 kernel: SMBIOS 2.4 present. 
Dec 13 01:25:33.081180 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 01:25:33.081196 kernel: Hypervisor detected: KVM Dec 13 01:25:33.081216 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:25:33.081253 kernel: kvm-clock: using sched offset of 11865149517 cycles Dec 13 01:25:33.081268 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:25:33.081284 kernel: tsc: Detected 2299.998 MHz processor Dec 13 01:25:33.081300 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:25:33.081317 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:25:33.081340 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 01:25:33.081356 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 13 01:25:33.081373 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:25:33.081394 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 01:25:33.081410 kernel: Using GB pages for direct mapping Dec 13 01:25:33.081426 kernel: Secure boot disabled Dec 13 01:25:33.081442 kernel: ACPI: Early table checksum verification disabled Dec 13 01:25:33.081459 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 01:25:33.081476 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 01:25:33.081494 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 01:25:33.081517 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 01:25:33.081538 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 01:25:33.081555 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 01:25:33.081572 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 01:25:33.081590 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 01:25:33.081607 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 01:25:33.081623 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 01:25:33.081644 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 01:25:33.081662 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 01:25:33.081680 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 01:25:33.081697 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 01:25:33.081715 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 01:25:33.081734 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 01:25:33.081751 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 01:25:33.081769 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 01:25:33.081787 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 01:25:33.081809 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 01:25:33.081827 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:25:33.081843 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:25:33.081860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 01:25:33.081878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Dec 13 01:25:33.081896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 01:25:33.081914 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 01:25:33.081932 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 01:25:33.081950 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 01:25:33.081971 kernel: Zone ranges: Dec 13 01:25:33.081989 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:25:33.082005 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:25:33.082023 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:25:33.082040 kernel: Movable zone start for each node Dec 13 01:25:33.082058 kernel: Early memory node ranges Dec 13 01:25:33.082076 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 01:25:33.082112 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 01:25:33.082129 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Dec 13 01:25:33.082150 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 01:25:33.082168 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:25:33.082186 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 01:25:33.082204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:25:33.082222 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 01:25:33.082266 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 01:25:33.082284 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 01:25:33.082302 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 01:25:33.082320 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:25:33.082346 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:25:33.082368 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:25:33.082387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:25:33.082405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:25:33.082423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:25:33.082442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:25:33.082460 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:25:33.082478 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:25:33.082496 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:25:33.082518 kernel: Booting paravirtualized kernel on KVM Dec 13 01:25:33.082534 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:25:33.082551 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:25:33.082569 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:25:33.082587 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:25:33.082605 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:25:33.082622 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:25:33.082640 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:25:33.082658 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:25:33.082681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:25:33.082698 kernel: random: crng init done Dec 13 01:25:33.082714 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:25:33.082733 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:25:33.082750 kernel: Fallback order for Node 0: 0 Dec 13 01:25:33.082767 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Dec 13 01:25:33.082784 kernel: Policy zone: Normal Dec 13 01:25:33.082801 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:25:33.082819 kernel: software IO TLB: area num 2. Dec 13 01:25:33.082840 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved) Dec 13 01:25:33.082859 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:25:33.082877 kernel: Kernel/User page tables isolation: enabled Dec 13 01:25:33.082895 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:25:33.082912 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:25:33.082929 kernel: Dynamic Preempt: voluntary Dec 13 01:25:33.082947 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:25:33.082965 kernel: rcu: RCU event tracing is enabled. Dec 13 01:25:33.083001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:25:33.083021 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:25:33.083039 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:25:33.083062 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:25:33.083080 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:25:33.083099 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:25:33.083118 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:25:33.083135 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:25:33.083155 kernel: Console: colour dummy device 80x25 Dec 13 01:25:33.083178 kernel: printk: console [ttyS0] enabled Dec 13 01:25:33.083195 kernel: ACPI: Core revision 20230628 Dec 13 01:25:33.083214 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:25:33.083249 kernel: x2apic enabled Dec 13 01:25:33.083267 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:25:33.083285 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 01:25:33.083305 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:25:33.083323 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 01:25:33.083355 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 01:25:33.083375 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 01:25:33.083395 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:25:33.083414 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:25:33.083433 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:25:33.083452 kernel: Spectre V2 : Mitigation: IBRS Dec 13 01:25:33.083471 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:25:33.083491 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:25:33.083510 kernel: RETBleed: Mitigation: IBRS Dec 13 01:25:33.083534 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:25:33.083553 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 13 01:25:33.083573 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:25:33.083592 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 01:25:33.083612 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:25:33.083631 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:25:33.083650 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:25:33.083668 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:25:33.083687 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:25:33.083711 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:25:33.083730 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:25:33.083749 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:25:33.083768 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:25:33.083788 kernel: landlock: Up and running. Dec 13 01:25:33.083808 kernel: SELinux: Initializing. Dec 13 01:25:33.083827 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:25:33.083846 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:25:33.083865 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 01:25:33.083888 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:33.083908 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:33.083928 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:33.083948 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 01:25:33.083967 kernel: signal: max sigframe size: 1776 Dec 13 01:25:33.083987 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:25:33.084007 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:25:33.084027 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:25:33.084046 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:25:33.084069 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:25:33.084088 kernel: .... node #0, CPUs: #1 Dec 13 01:25:33.084107 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:25:33.084127 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:25:33.084146 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:25:33.084166 kernel: smpboot: Max logical packages: 1 Dec 13 01:25:33.084185 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 01:25:33.084204 kernel: devtmpfs: initialized Dec 13 01:25:33.084227 kernel: x86/mm: Memory block size: 128MB Dec 13 01:25:33.084260 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 01:25:33.084280 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:25:33.084300 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:25:33.084319 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:25:33.084346 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:25:33.084365 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:25:33.084385 kernel: audit: type=2000 audit(1734053131.930:1): state=initialized audit_enabled=0 res=1 Dec 13 01:25:33.084402 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:25:33.084426 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:25:33.084445 kernel: cpuidle: using governor menu Dec 13 01:25:33.084464 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:25:33.084484 kernel: dca service started, version 1.12.1 Dec 13 01:25:33.084503 kernel: PCI: Using configuration type 1 for base access Dec 13 01:25:33.084523 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:25:33.084543 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:25:33.084561 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:25:33.084580 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:25:33.084603 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:25:33.084622 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:25:33.084640 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:25:33.084659 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:25:33.084677 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:25:33.084695 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:25:33.084714 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:25:33.084732 kernel: ACPI: Interpreter enabled Dec 13 01:25:33.084750 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:25:33.084771 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:25:33.084790 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:25:33.084809 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 01:25:33.084829 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:25:33.084849 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:25:33.085105 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:25:33.085345 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:25:33.085537 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:25:33.085568 kernel: PCI host bridge to bus 0000:00 Dec 13 01:25:33.085772 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:25:33.085938 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:25:33.086099 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:25:33.086275 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 01:25:33.086441 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:25:33.086638 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:25:33.086831 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 01:25:33.087024 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 01:25:33.087213 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:25:33.087443 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 01:25:33.087635 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 01:25:33.087830 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 01:25:33.088025 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:25:33.088213 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 01:25:33.088446 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 01:25:33.088642 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:25:33.088830 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 01:25:33.089016 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 01:25:33.089062 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:25:33.089083 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:25:33.089103 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:25:33.089123 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:25:33.089144 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:25:33.089163 kernel: iommu: Default domain type: Translated Dec 13 01:25:33.089183 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:25:33.089202 kernel: efivars: Registered efivars operations Dec 13 01:25:33.089222 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:25:33.089270 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:25:33.089287 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 01:25:33.089304 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 01:25:33.089321 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 01:25:33.089345 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 01:25:33.089363 kernel: vgaarb: loaded Dec 13 01:25:33.089382 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:25:33.089398 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:25:33.089417 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:25:33.089443 kernel: pnp: PnP ACPI init Dec 13 01:25:33.089461 kernel: pnp: PnP ACPI: found 7 devices Dec 13 01:25:33.089479 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:25:33.089498 kernel: NET: Registered PF_INET protocol family Dec 13 01:25:33.089518 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:25:33.089539 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:25:33.089558 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:25:33.089579 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:25:33.089598 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 01:25:33.089623 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:25:33.089643 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:25:33.089663 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:25:33.089683 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:25:33.089701 kernel: NET: Registered PF_XDP protocol family Dec 13 01:25:33.089918 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:25:33.090092 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:25:33.090317 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:25:33.090509 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 01:25:33.090701 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:25:33.090728 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:25:33.090749 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:25:33.090768 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 01:25:33.090786 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:25:33.090805 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:25:33.090823 kernel: clocksource: Switched to clocksource tsc Dec 13 01:25:33.090847 kernel: Initialise system trusted keyrings Dec 13 01:25:33.090865 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:25:33.090885 kernel: Key type asymmetric registered Dec 13 01:25:33.090904 kernel: Asymmetric key parser 'x509' registered Dec 13 01:25:33.090922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:25:33.090942 kernel: io scheduler mq-deadline registered Dec 13 01:25:33.090961 kernel: io scheduler kyber registered Dec 13 01:25:33.090979 kernel: io scheduler bfq registered Dec 13 01:25:33.090997 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:25:33.091021 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 01:25:33.091208 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 01:25:33.091248 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 01:25:33.091450 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 01:25:33.091475 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 01:25:33.091662 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 01:25:33.091688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:25:33.091709 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:25:33.091728 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:25:33.091752 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 01:25:33.091771 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 01:25:33.091967 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 01:25:33.091991 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:25:33.092009 kernel: i8042: Warning: Keylock active Dec 13 01:25:33.092028 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:25:33.092043 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:25:33.092313 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:25:33.092506 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 01:25:33.092670 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:25:32 UTC (1734053132) Dec 13 01:25:33.092833 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:25:33.092855 kernel: intel_pstate: CPU model not supported Dec 13 01:25:33.092874 kernel: pstore: Using crash dump compression: deflate Dec 13 01:25:33.092893 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:25:33.092912 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:25:33.092930 kernel: Segment Routing with IPv6 Dec 13 01:25:33.092954 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:25:33.092972 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:25:33.092991 kernel: Key type dns_resolver registered Dec 13 01:25:33.093009 kernel: IPI shorthand broadcast: enabled Dec 13 01:25:33.093028 kernel: sched_clock: Marking stable (819004110, 126547640)->(959290795, -13739045) Dec 13 01:25:33.093046 kernel: registered taskstats version 1 Dec 13 01:25:33.093065 kernel: Loading compiled-in X.509 certificates Dec 13 01:25:33.093083 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:25:33.093102 kernel: Key type .fscrypt registered Dec 13 01:25:33.093124 kernel: Key type fscrypt-provisioning registered Dec 13 01:25:33.093142 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:25:33.093161 kernel: ima: No architecture policies found Dec 13 
01:25:33.093180 kernel: clk: Disabling unused clocks Dec 13 01:25:33.093199 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:25:33.093217 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:25:33.093271 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:25:33.093290 kernel: Run /init as init process Dec 13 01:25:33.093309 kernel: with arguments: Dec 13 01:25:33.093339 kernel: /init Dec 13 01:25:33.093357 kernel: with environment: Dec 13 01:25:33.093375 kernel: HOME=/ Dec 13 01:25:33.093393 kernel: TERM=linux Dec 13 01:25:33.093412 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:25:33.093431 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:25:33.093454 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:33.093480 systemd[1]: Detected virtualization google. Dec 13 01:25:33.093499 systemd[1]: Detected architecture x86-64. Dec 13 01:25:33.093518 systemd[1]: Running in initrd. Dec 13 01:25:33.093537 systemd[1]: No hostname configured, using default hostname. Dec 13 01:25:33.093556 systemd[1]: Hostname set to . Dec 13 01:25:33.093576 systemd[1]: Initializing machine ID from random generator. Dec 13 01:25:33.093595 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:25:33.093615 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:33.093638 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:33.093659 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:25:33.093678 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:33.093698 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:25:33.093717 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:25:33.093739 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:25:33.093759 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:25:33.093783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:33.093803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:33.093842 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:33.093866 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:33.093887 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:33.093906 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:33.093931 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:33.093951 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:33.093972 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:25:33.093992 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:25:33.094013 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:33.094033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:33.094053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:33.094074 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:25:33.094095 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:25:33.094119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:33.094139 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:25:33.094160 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:25:33.094180 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:33.094201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:33.094221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:33.094263 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:33.094316 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 01:25:33.094369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:33.094390 systemd-journald[183]: Journal started Dec 13 01:25:33.094433 systemd-journald[183]: Runtime Journal (/run/log/journal/5953e75bb0d8487b8fb9b21badb1d511) is 8.0M, max 148.7M, 140.7M free. Dec 13 01:25:33.096790 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:25:33.103619 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 01:25:33.104256 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:33.113456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:33.122380 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:33.134495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:33.144637 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:33.150369 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:33.159348 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:25:33.162737 kernel: Bridge firewalling registered Dec 13 01:25:33.161756 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 01:25:33.163609 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:33.170474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:33.182445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:33.196868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:25:33.197407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:33.202449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:25:33.214576 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:33.221494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:25:33.234464 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:25:33.258413 systemd-resolved[209]: Positive Trust Anchors: Dec 13 01:25:33.258851 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:33.258922 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:33.277452 dracut-cmdline[218]: dracut-dracut-053 Dec 13 01:25:33.277452 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:25:33.265521 systemd-resolved[209]: Defaulting to hostname 'linux'. Dec 13 01:25:33.267097 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:33.301285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:33.364276 kernel: SCSI subsystem initialized Dec 13 01:25:33.374266 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:25:33.386284 kernel: iscsi: registered transport (tcp) Dec 13 01:25:33.409283 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:25:33.409345 kernel: QLogic iSCSI HBA Driver Dec 13 01:25:33.459932 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:33.466435 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:25:33.512853 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:25:33.512930 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:25:33.512959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:25:33.558279 kernel: raid6: avx2x4 gen() 18423 MB/s Dec 13 01:25:33.575271 kernel: raid6: avx2x2 gen() 18432 MB/s Dec 13 01:25:33.592637 kernel: raid6: avx2x1 gen() 14381 MB/s Dec 13 01:25:33.592686 kernel: raid6: using algorithm avx2x2 gen() 18432 MB/s Dec 13 01:25:33.610638 kernel: raid6: .... xor() 17975 MB/s, rmw enabled Dec 13 01:25:33.610676 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:25:33.633271 kernel: xor: automatically using best checksumming function avx Dec 13 01:25:33.804275 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:25:33.817602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:33.828459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:33.855740 systemd-udevd[400]: Using default interface naming scheme 'v255'. Dec 13 01:25:33.862512 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 01:25:33.870440 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:25:33.899332 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Dec 13 01:25:33.935902 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:33.941455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:34.029280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:34.042460 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:25:34.086453 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:34.095873 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:34.100396 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:34.104360 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:34.116702 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:25:34.161270 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:25:34.163964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:34.210862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:34.240576 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:25:34.240874 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:25:34.240904 kernel: AES CTR mode by8 optimization enabled Dec 13 01:25:34.240929 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 01:25:34.211069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:34.219624 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:34.223395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:34.223619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:34.234145 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:34.243660 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:34.278147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:34.288493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:34.301614 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 01:25:34.317739 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 01:25:34.317989 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 01:25:34.318227 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 01:25:34.318495 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:25:34.318731 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:25:34.318759 kernel: GPT:17805311 != 25165823 Dec 13 01:25:34.318785 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:25:34.318818 kernel: GPT:17805311 != 25165823 Dec 13 01:25:34.318842 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:25:34.318866 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:34.318893 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 01:25:34.327379 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:25:34.377263 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447) Dec 13 01:25:34.380749 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (455) Dec 13 01:25:34.400856 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 13 01:25:34.408809 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Dec 13 01:25:34.415030 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Dec 13 01:25:34.415148 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 13 01:25:34.428788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:25:34.432432 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:25:34.458651 disk-uuid[550]: Primary Header is updated. Dec 13 01:25:34.458651 disk-uuid[550]: Secondary Entries is updated. Dec 13 01:25:34.458651 disk-uuid[550]: Secondary Header is updated. Dec 13 01:25:34.474253 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:34.500268 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:34.523448 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:35.517267 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:35.517903 disk-uuid[551]: The operation has completed successfully. Dec 13 01:25:35.587156 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:25:35.587318 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:25:35.623438 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:25:35.643637 sh[568]: Success Dec 13 01:25:35.656364 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:25:35.736589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:25:35.743302 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:25:35.770709 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:25:35.812052 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:25:35.812120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:25:35.812158 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:25:35.821479 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:25:35.828306 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:25:35.861264 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:25:35.865351 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:25:35.866280 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:25:35.871436 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:25:35.883412 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Dec 13 01:25:35.942229 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:35.942316 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:25:35.942343 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:35.960423 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:25:35.960497 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:35.974583 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:25:35.990382 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:35.989883 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:25:36.017472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:25:36.101336 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:36.108561 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:36.205985 systemd-networkd[751]: lo: Link UP Dec 13 01:25:36.206002 systemd-networkd[751]: lo: Gained carrier Dec 13 01:25:36.209124 systemd-networkd[751]: Enumeration completed Dec 13 01:25:36.230561 ignition[672]: Ignition 2.19.0 Dec 13 01:25:36.209301 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:36.230570 ignition[672]: Stage: fetch-offline Dec 13 01:25:36.209861 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:36.230609 ignition[672]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:36.209868 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:36.230620 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:36.212075 systemd-networkd[751]: eth0: Link UP Dec 13 01:25:36.230748 ignition[672]: parsed url from cmdline: "" Dec 13 01:25:36.212083 systemd-networkd[751]: eth0: Gained carrier Dec 13 01:25:36.230754 ignition[672]: no config URL provided Dec 13 01:25:36.212097 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:36.230764 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:36.221328 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.80/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:25:36.230776 ignition[672]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:36.224483 systemd[1]: Reached target network.target - Network. Dec 13 01:25:36.230783 ignition[672]: failed to fetch config: resource requires networking Dec 13 01:25:36.233709 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:36.231171 ignition[672]: Ignition finished successfully Dec 13 01:25:36.247523 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Dec 13 01:25:36.281504 ignition[760]: Ignition 2.19.0 Dec 13 01:25:36.292471 unknown[760]: fetched base config from "system" Dec 13 01:25:36.281512 ignition[760]: Stage: fetch Dec 13 01:25:36.292484 unknown[760]: fetched base config from "system" Dec 13 01:25:36.281708 ignition[760]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:36.292494 unknown[760]: fetched user config from "gcp" Dec 13 01:25:36.281720 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:36.294778 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:25:36.281834 ignition[760]: parsed url from cmdline: "" Dec 13 01:25:36.323424 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:25:36.281841 ignition[760]: no config URL provided Dec 13 01:25:36.360672 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:25:36.281848 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:36.384566 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:25:36.281858 ignition[760]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:36.430755 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:25:36.281880 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 01:25:36.438631 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:36.285708 ignition[760]: GET result: OK Dec 13 01:25:36.453510 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:25:36.285810 ignition[760]: parsing config with SHA512: 9417f4000f1743cb12e86ae2d5f8f94d1c1100752137d59fbbaafc59461f44a1aabdd2318e8fd13d15c99552ca3da8eac3f5389f5d5aa086f48d0b8e8d4b994a Dec 13 01:25:36.471495 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:36.293016 ignition[760]: fetch: fetch complete Dec 13 01:25:36.500461 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:25:36.293024 ignition[760]: fetch: fetch passed Dec 13 01:25:36.507490 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:36.293074 ignition[760]: Ignition finished successfully Dec 13 01:25:36.538448 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:25:36.344420 ignition[766]: Ignition 2.19.0 Dec 13 01:25:36.344429 ignition[766]: Stage: kargs Dec 13 01:25:36.344625 ignition[766]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:36.344637 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:36.345797 ignition[766]: kargs: kargs passed Dec 13 01:25:36.345853 ignition[766]: Ignition finished successfully Dec 13 01:25:36.424940 ignition[772]: Ignition 2.19.0 Dec 13 01:25:36.424950 ignition[772]: Stage: disks Dec 13 01:25:36.425130 ignition[772]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:36.425141 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:36.426141 ignition[772]: disks: disks passed Dec 13 01:25:36.426216 ignition[772]: Ignition finished successfully Dec 13 01:25:36.574995 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:25:36.786873 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:25:36.806365 systemd[1]: Mounting sysroot.mount - /sysroot... 
Dec 13 01:25:36.925660 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:25:36.926542 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:25:36.936004 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:25:36.971363 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:36.980737 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:25:37.005919 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:25:37.074392 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Dec 13 01:25:37.074442 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:37.074479 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:25:37.074502 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:37.074517 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:25:37.074532 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:37.006017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:25:37.006057 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:37.062386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:37.083925 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:25:37.116578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:25:37.238095 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:25:37.248382 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:25:37.258493 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:25:37.268402 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:25:37.262430 systemd-networkd[751]: eth0: Gained IPv6LL Dec 13 01:25:37.397754 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:37.405372 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:25:37.437284 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:37.447440 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:25:37.456330 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:25:37.500126 ignition[901]: INFO : Ignition 2.19.0 Dec 13 01:25:37.500126 ignition[901]: INFO : Stage: mount Dec 13 01:25:37.500126 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:37.500126 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:37.500126 ignition[901]: INFO : mount: mount passed Dec 13 01:25:37.500126 ignition[901]: INFO : Ignition finished successfully Dec 13 01:25:37.502283 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:25:37.508828 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:25:37.528370 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:25:37.943514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 01:25:37.992389 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (913) Dec 13 01:25:37.992427 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:37.992452 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:25:37.992477 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:38.005938 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:25:38.006020 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:38.009429 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:38.057050 ignition[930]: INFO : Ignition 2.19.0 Dec 13 01:25:38.057050 ignition[930]: INFO : Stage: files Dec 13 01:25:38.071356 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:38.071356 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:38.071356 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:25:38.071356 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:25:38.071356 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:25:38.071356 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:25:38.071356 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:25:38.071356 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:25:38.069329 unknown[930]: wrote ssh authorized keys file for user: core Dec 13 01:25:38.170330 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:25:38.170330 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:25:38.216954 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:25:38.451202 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:25:38.451202 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:25:38.451202 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:25:38.731677 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:25:38.877808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 01:25:39.124427 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:25:39.413492 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:25:39.413492 ignition[930]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:39.451380 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:39.451380 ignition[930]: INFO : files: files passed Dec 13 01:25:39.451380 ignition[930]: INFO : Ignition finished successfully Dec 13 01:25:39.419451 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:25:39.439517 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Dec 13 01:25:39.473448 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:25:39.503737 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:25:39.682489 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:39.682489 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:39.503903 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:25:39.745366 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:39.524262 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:39.527603 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:25:39.563428 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:25:39.642136 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:25:39.642260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:25:39.661959 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:25:39.682355 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:25:39.699449 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:25:39.705415 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:25:39.785070 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:39.809410 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:25:39.831499 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:39.850539 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:39.872573 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:25:39.893533 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:25:39.893735 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:39.923682 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:25:39.934664 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:25:39.952652 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:25:39.969646 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:39.986644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:40.004657 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:25:40.023643 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:40.040652 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:25:40.058732 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:25:40.076645 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:25:40.094595 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:25:40.094784 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:40.128641 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 01:25:40.139632 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:40.157588 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:25:40.157770 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:40.175601 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:25:40.175785 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:40.226524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:25:40.226749 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:40.320362 ignition[983]: INFO : Ignition 2.19.0 Dec 13 01:25:40.320362 ignition[983]: INFO : Stage: umount Dec 13 01:25:40.320362 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:40.320362 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:40.320362 ignition[983]: INFO : umount: umount passed Dec 13 01:25:40.320362 ignition[983]: INFO : Ignition finished successfully Dec 13 01:25:40.237704 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:25:40.237880 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:25:40.262585 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:25:40.278501 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:25:40.328357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:25:40.328572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:40.339610 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:25:40.339832 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:40.368955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:25:40.369920 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:25:40.370034 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:25:40.385921 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:25:40.386035 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:25:40.412009 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:25:40.412195 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:25:40.420651 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:25:40.420712 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:25:40.447530 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:25:40.447600 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:25:40.465532 systemd[1]: Stopped target network.target - Network. Dec 13 01:25:40.483454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:25:40.483542 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:40.491561 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:25:40.509524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:25:40.513357 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:40.526534 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:25:40.544533 systemd[1]: Stopped target sockets.target - Socket Units. 
Dec 13 01:25:40.561577 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:25:40.561638 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:40.576578 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:25:40.576634 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:40.593561 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:25:40.593629 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:25:40.610582 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:25:40.610646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:40.627568 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:25:40.627632 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:40.654721 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:25:40.659284 systemd-networkd[751]: eth0: DHCPv6 lease lost Dec 13 01:25:40.672545 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:25:40.691973 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:25:40.692097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:25:40.711367 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:25:40.711522 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:25:40.727863 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:25:40.727981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:25:40.739672 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:25:40.739741 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:40.760356 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:25:40.780466 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:25:40.780537 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:40.797589 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:25:40.797648 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:41.275372 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 01:25:40.825533 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:25:40.825599 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:40.845537 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:25:40.845607 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:40.858706 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:40.876953 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:25:40.877112 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:40.900424 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:25:40.900603 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:40.911579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:25:40.911626 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:25:40.936550 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:25:40.936612 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:40.972472 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:25:40.972654 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:40.998539 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:40.998609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:41.049447 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:25:41.061514 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:25:41.061584 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:41.078588 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:25:41.078650 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:41.108541 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:25:41.108623 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:41.127523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:41.127588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:41.138036 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:25:41.138329 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:25:41.155923 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:25:41.156033 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:25:41.174740 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:25:41.198538 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:25:41.232387 systemd[1]: Switching root. 
Dec 13 01:25:41.602321 systemd-journald[183]: Journal stopped Dec 13 01:25:33.080804 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024 Dec 13 01:25:33.080848 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:25:33.080866 kernel: BIOS-provided physical RAM map: Dec 13 01:25:33.080880 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 01:25:33.080893 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 01:25:33.080907 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 01:25:33.080923 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 01:25:33.080942 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 01:25:33.080956 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Dec 13 01:25:33.080970 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Dec 13 01:25:33.080984 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Dec 13 01:25:33.080999 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Dec 13 01:25:33.081014 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 01:25:33.081030 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 01:25:33.081052 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 01:25:33.081069 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 01:25:33.081084 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 13 01:25:33.081101 kernel: NX (Execute Disable) protection: active Dec 13 01:25:33.081116 kernel: APIC: Static calls initialized Dec 13 01:25:33.081132 kernel: efi: EFI v2.7 by EDK II Dec 13 01:25:33.081148 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Dec 13 01:25:33.081164 kernel: SMBIOS 2.4 present. 
Dec 13 01:25:33.081180 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 01:25:33.081196 kernel: Hypervisor detected: KVM Dec 13 01:25:33.081216 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:25:33.081253 kernel: kvm-clock: using sched offset of 11865149517 cycles Dec 13 01:25:33.081268 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:25:33.081284 kernel: tsc: Detected 2299.998 MHz processor Dec 13 01:25:33.081300 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:25:33.081317 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:25:33.081340 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 01:25:33.081356 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 13 01:25:33.081373 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:25:33.081394 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 01:25:33.081410 kernel: Using GB pages for direct mapping Dec 13 01:25:33.081426 kernel: Secure boot disabled Dec 13 01:25:33.081442 kernel: ACPI: Early table checksum verification disabled Dec 13 01:25:33.081459 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 01:25:33.081476 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 01:25:33.081494 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 01:25:33.081517 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 01:25:33.081538 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 01:25:33.081555 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 01:25:33.081572 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 01:25:33.081590 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 01:25:33.081607 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 01:25:33.081623 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 01:25:33.081644 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 01:25:33.081662 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 01:25:33.081680 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 01:25:33.081697 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 01:25:33.081715 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 01:25:33.081734 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 01:25:33.081751 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 01:25:33.081769 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 01:25:33.081787 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 01:25:33.081809 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 01:25:33.081827 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:25:33.081843 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:25:33.081860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 01:25:33.081878 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Dec 13 01:25:33.081896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 01:25:33.081914 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 01:25:33.081932 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 01:25:33.081950 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 01:25:33.081971 kernel: Zone ranges: Dec 13 01:25:33.081989 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:25:33.082005 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:25:33.082023 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:25:33.082040 kernel: Movable zone start for each node Dec 13 01:25:33.082058 kernel: Early memory node ranges Dec 13 01:25:33.082076 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 01:25:33.082112 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 01:25:33.082129 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Dec 13 01:25:33.082150 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 01:25:33.082168 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 01:25:33.082186 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 01:25:33.082204 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:25:33.082222 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 01:25:33.082266 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 01:25:33.082284 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 01:25:33.082302 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 01:25:33.082320 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 01:25:33.082346 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:25:33.082368 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:25:33.082387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:25:33.082405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:25:33.082423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:25:33.082442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:25:33.082460 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:25:33.082478 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:25:33.082496 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 01:25:33.082518 kernel: Booting paravirtualized kernel on KVM Dec 13 01:25:33.082534 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:25:33.082551 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:25:33.082569 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:25:33.082587 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:25:33.082605 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:25:33.082622 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:25:33.082640 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:25:33.082658 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:25:33.082681 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:25:33.082698 kernel: random: crng init done Dec 13 01:25:33.082714 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:25:33.082733 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:25:33.082750 kernel: Fallback order for Node 0: 0 Dec 13 01:25:33.082767 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Dec 13 01:25:33.082784 kernel: Policy zone: Normal Dec 13 01:25:33.082801 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:25:33.082819 kernel: software IO TLB: area num 2. Dec 13 01:25:33.082840 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved) Dec 13 01:25:33.082859 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:25:33.082877 kernel: Kernel/User page tables isolation: enabled Dec 13 01:25:33.082895 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:25:33.082912 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:25:33.082929 kernel: Dynamic Preempt: voluntary Dec 13 01:25:33.082947 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:25:33.082965 kernel: rcu: RCU event tracing is enabled. Dec 13 01:25:33.083001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:25:33.083021 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:25:33.083039 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:25:33.083062 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:25:33.083080 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:25:33.083099 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:25:33.083118 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 01:25:33.083135 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:25:33.083155 kernel: Console: colour dummy device 80x25 Dec 13 01:25:33.083178 kernel: printk: console [ttyS0] enabled Dec 13 01:25:33.083195 kernel: ACPI: Core revision 20230628 Dec 13 01:25:33.083214 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:25:33.083249 kernel: x2apic enabled Dec 13 01:25:33.083267 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 01:25:33.083285 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 01:25:33.083305 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:25:33.083323 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 01:25:33.083355 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 01:25:33.083375 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 01:25:33.083395 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:25:33.083414 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 01:25:33.083433 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 01:25:33.083452 kernel: Spectre V2 : Mitigation: IBRS Dec 13 01:25:33.083471 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:25:33.083491 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:25:33.083510 kernel: RETBleed: Mitigation: IBRS Dec 13 01:25:33.083534 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:25:33.083553 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 13 01:25:33.083573 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 01:25:33.083592 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 01:25:33.083612 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:25:33.083631 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:25:33.083650 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:25:33.083668 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:25:33.083687 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:25:33.083711 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:25:33.083730 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:25:33.083749 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:25:33.083768 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:25:33.083788 kernel: landlock: Up and running. Dec 13 01:25:33.083808 kernel: SELinux: Initializing. Dec 13 01:25:33.083827 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:25:33.083846 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:25:33.083865 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 01:25:33.083888 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:33.083908 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:33.083928 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:25:33.083948 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 01:25:33.083967 kernel: signal: max sigframe size: 1776 Dec 13 01:25:33.083987 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:25:33.084007 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:25:33.084027 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:25:33.084046 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:25:33.084069 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:25:33.084088 kernel: .... node #0, CPUs: #1 Dec 13 01:25:33.084107 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 01:25:33.084127 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 01:25:33.084146 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:25:33.084166 kernel: smpboot: Max logical packages: 1 Dec 13 01:25:33.084185 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 01:25:33.084204 kernel: devtmpfs: initialized Dec 13 01:25:33.084227 kernel: x86/mm: Memory block size: 128MB Dec 13 01:25:33.084260 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 01:25:33.084280 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:25:33.084300 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:25:33.084319 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:25:33.084346 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:25:33.084365 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:25:33.084385 kernel: audit: type=2000 audit(1734053131.930:1): state=initialized audit_enabled=0 res=1 Dec 13 01:25:33.084402 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:25:33.084426 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:25:33.084445 kernel: cpuidle: using governor menu Dec 13 01:25:33.084464 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:25:33.084484 kernel: dca service started, version 1.12.1 Dec 13 01:25:33.084503 kernel: PCI: Using configuration type 1 for base access Dec 13 01:25:33.084523 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
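The mitigation lines above (Spectre V1/V2, RETBleed, MDS, MMIO Stale Data) are also exposed at runtime under /sys/devices/system/cpu/vulnerabilities. A minimal sketch that prints the same status without digging through dmesg, assuming it runs on a Linux host that provides this sysfs directory:

    from pathlib import Path

    # Each file in this directory holds the kernel's current status string for one
    # vulnerability, e.g. "Mitigation: IBRS" or "Vulnerable: Clear CPU buffers ...".
    vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vuln_dir.iterdir()):
        print(f"{entry.name:<28} {entry.read_text().strip()}")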
Dec 13 01:25:33.084543 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:25:33.084561 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:25:33.084580 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:25:33.084603 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:25:33.084622 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:25:33.084640 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:25:33.084659 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:25:33.084677 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:25:33.084695 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 01:25:33.084714 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:25:33.084732 kernel: ACPI: Interpreter enabled Dec 13 01:25:33.084750 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:25:33.084771 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:25:33.084790 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:25:33.084809 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 01:25:33.084829 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 01:25:33.084849 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:25:33.085105 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:25:33.085345 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 01:25:33.085537 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 01:25:33.085568 kernel: PCI host bridge to bus 0000:00 Dec 13 01:25:33.085772 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:25:33.085938 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:25:33.086099 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:25:33.086275 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 01:25:33.086441 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:25:33.086638 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 01:25:33.086831 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 01:25:33.087024 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 01:25:33.087213 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 01:25:33.087443 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 01:25:33.087635 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 01:25:33.087830 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 01:25:33.088025 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:25:33.088213 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 01:25:33.088446 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 01:25:33.088642 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:25:33.088830 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 01:25:33.089016 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 01:25:33.089062 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:25:33.089083 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:25:33.089103 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:25:33.089123 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:25:33.089144 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 01:25:33.089163 kernel: iommu: Default domain type: Translated Dec 13 01:25:33.089183 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:25:33.089202 kernel: efivars: Registered efivars operations Dec 13 01:25:33.089222 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:25:33.089270 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:25:33.089287 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 01:25:33.089304 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 01:25:33.089321 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 01:25:33.089345 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 01:25:33.089363 kernel: vgaarb: loaded Dec 13 01:25:33.089382 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:25:33.089398 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:25:33.089417 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:25:33.089443 kernel: pnp: PnP ACPI init Dec 13 01:25:33.089461 kernel: pnp: PnP ACPI: found 7 devices Dec 13 01:25:33.089479 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:25:33.089498 kernel: NET: Registered PF_INET protocol family Dec 13 01:25:33.089518 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:25:33.089539 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:25:33.089558 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:25:33.089579 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:25:33.089598 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 01:25:33.089623 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:25:33.089643 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:25:33.089663 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:25:33.089683 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:25:33.089701 kernel: NET: Registered PF_XDP protocol family Dec 13 01:25:33.089918 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:25:33.090092 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:25:33.090317 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:25:33.090509 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 01:25:33.090701 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 01:25:33.090728 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:25:33.090749 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:25:33.090768 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 01:25:33.090786 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:25:33.090805 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 01:25:33.090823 kernel: clocksource: Switched to clocksource tsc Dec 13 01:25:33.090847 kernel: Initialise system trusted keyrings Dec 13 01:25:33.090865 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:25:33.090885 kernel: Key type asymmetric registered Dec 13 01:25:33.090904 kernel: Asymmetric key parser 'x509' registered Dec 13 01:25:33.090922 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:25:33.090942 kernel: io scheduler mq-deadline registered Dec 13 01:25:33.090961 kernel: io scheduler kyber registered Dec 13 01:25:33.090979 kernel: io scheduler bfq registered Dec 13 01:25:33.090997 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:25:33.091021 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 01:25:33.091208 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 01:25:33.091248 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 01:25:33.091450 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 01:25:33.091475 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 01:25:33.091662 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 01:25:33.091688 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:25:33.091709 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:25:33.091728 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:25:33.091752 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 01:25:33.091771 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 01:25:33.091967 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 01:25:33.091991 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:25:33.092009 kernel: i8042: Warning: Keylock active Dec 13 01:25:33.092028 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:25:33.092043 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:25:33.092313 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 01:25:33.092506 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 01:25:33.092670 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T01:25:32 UTC (1734053132) Dec 13 01:25:33.092833 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 01:25:33.092855 kernel: intel_pstate: CPU model not supported Dec 13 01:25:33.092874 kernel: pstore: Using crash dump compression: deflate Dec 13 01:25:33.092893 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:25:33.092912 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:25:33.092930 kernel: Segment Routing with IPv6 Dec 13 01:25:33.092954 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:25:33.092972 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:25:33.092991 kernel: Key type dns_resolver registered Dec 13 01:25:33.093009 kernel: IPI shorthand broadcast: enabled Dec 13 01:25:33.093028 kernel: sched_clock: Marking stable (819004110, 126547640)->(959290795, -13739045) Dec 13 01:25:33.093046 kernel: registered taskstats version 1 Dec 13 01:25:33.093065 kernel: Loading compiled-in X.509 certificates Dec 13 01:25:33.093083 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:25:33.093102 kernel: Key type .fscrypt registered Dec 13 01:25:33.093124 kernel: Key type fscrypt-provisioning registered Dec 13 01:25:33.093142 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:25:33.093161 kernel: ima: No architecture policies found Dec 13 
01:25:33.093180 kernel: clk: Disabling unused clocks Dec 13 01:25:33.093199 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:25:33.093217 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:25:33.093271 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:25:33.093290 kernel: Run /init as init process Dec 13 01:25:33.093309 kernel: with arguments: Dec 13 01:25:33.093339 kernel: /init Dec 13 01:25:33.093357 kernel: with environment: Dec 13 01:25:33.093375 kernel: HOME=/ Dec 13 01:25:33.093393 kernel: TERM=linux Dec 13 01:25:33.093412 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:25:33.093431 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:25:33.093454 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:33.093480 systemd[1]: Detected virtualization google. Dec 13 01:25:33.093499 systemd[1]: Detected architecture x86-64. Dec 13 01:25:33.093518 systemd[1]: Running in initrd. Dec 13 01:25:33.093537 systemd[1]: No hostname configured, using default hostname. Dec 13 01:25:33.093556 systemd[1]: Hostname set to . Dec 13 01:25:33.093576 systemd[1]: Initializing machine ID from random generator. Dec 13 01:25:33.093595 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:25:33.093615 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:33.093638 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:33.093659 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:25:33.093678 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:33.093698 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:25:33.093717 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:25:33.093739 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:25:33.093759 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:25:33.093783 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:33.093803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:33.093842 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:33.093866 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:33.093887 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:33.093906 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:33.093931 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:33.093951 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:33.093972 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:25:33.093992 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:25:33.094013 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:33.094033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:33.094053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:33.094074 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:25:33.094095 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:25:33.094119 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:33.094139 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:25:33.094160 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:25:33.094180 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:33.094201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:33.094221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:33.094263 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:33.094316 systemd-journald[183]: Collecting audit messages is disabled. Dec 13 01:25:33.094369 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:33.094390 systemd-journald[183]: Journal started Dec 13 01:25:33.094433 systemd-journald[183]: Runtime Journal (/run/log/journal/5953e75bb0d8487b8fb9b21badb1d511) is 8.0M, max 148.7M, 140.7M free. Dec 13 01:25:33.096790 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:25:33.103619 systemd-modules-load[184]: Inserted module 'overlay' Dec 13 01:25:33.104256 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:33.113456 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:33.122380 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:33.134495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:33.144637 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:33.150369 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:33.159348 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:25:33.162737 kernel: Bridge firewalling registered Dec 13 01:25:33.161756 systemd-modules-load[184]: Inserted module 'br_netfilter' Dec 13 01:25:33.163609 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:33.170474 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:33.182445 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:33.196868 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:25:33.197407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:33.202449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:25:33.214576 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:33.221494 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:25:33.234464 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:25:33.258413 systemd-resolved[209]: Positive Trust Anchors: Dec 13 01:25:33.258851 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:33.258922 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:33.277452 dracut-cmdline[218]: dracut-dracut-053 Dec 13 01:25:33.277452 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:25:33.265521 systemd-resolved[209]: Defaulting to hostname 'linux'. Dec 13 01:25:33.267097 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:33.301285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:33.364276 kernel: SCSI subsystem initialized Dec 13 01:25:33.374266 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:25:33.386284 kernel: iscsi: registered transport (tcp) Dec 13 01:25:33.409283 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:25:33.409345 kernel: QLogic iSCSI HBA Driver Dec 13 01:25:33.459932 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:33.466435 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:25:33.512853 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:25:33.512930 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:25:33.512959 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:25:33.558279 kernel: raid6: avx2x4 gen() 18423 MB/s Dec 13 01:25:33.575271 kernel: raid6: avx2x2 gen() 18432 MB/s Dec 13 01:25:33.592637 kernel: raid6: avx2x1 gen() 14381 MB/s Dec 13 01:25:33.592686 kernel: raid6: using algorithm avx2x2 gen() 18432 MB/s Dec 13 01:25:33.610638 kernel: raid6: .... xor() 17975 MB/s, rmw enabled Dec 13 01:25:33.610676 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:25:33.633271 kernel: xor: automatically using best checksumming function avx Dec 13 01:25:33.804275 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:25:33.817602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:33.828459 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:33.855740 systemd-udevd[400]: Using default interface naming scheme 'v255'. Dec 13 01:25:33.862512 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
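The dracut-cmdline entry above lists the kernel parameters that later drive root, verity, and mount handling. A rough sketch of the key=value splitting involved, reading /proc/cmdline on a live system; it is simplistic (no quoting support, and a repeated key such as rootflags on this command line keeps only its last value):

    from pathlib import Path

    def parse_cmdline(text: str) -> dict:
        """Split a kernel command line into {key: value}; bare flags map to ''."""
        params = {}
        for token in text.split():
            key, _, value = token.partition("=")
            params[key] = value  # later occurrences of a key overwrite earlier ones
        return params

    if __name__ == "__main__":
        params = parse_cmdline(Path("/proc/cmdline").read_text())
        print(params.get("root"), params.get("verity.usrhash"))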
Dec 13 01:25:33.870440 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:25:33.899332 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Dec 13 01:25:33.935902 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:33.941455 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:34.029280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:34.042460 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:25:34.086453 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:34.095873 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:34.100396 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:34.104360 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:34.116702 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:25:34.161270 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:25:34.163964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:34.210862 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:34.240576 kernel: scsi host0: Virtio SCSI HBA Dec 13 01:25:34.240874 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:25:34.240904 kernel: AES CTR mode by8 optimization enabled Dec 13 01:25:34.240929 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 01:25:34.211069 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:34.219624 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:34.223395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:34.223619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:34.234145 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:34.243660 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:34.278147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:34.288493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:25:34.301614 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 01:25:34.317739 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 01:25:34.317989 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 01:25:34.318227 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 01:25:34.318495 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 01:25:34.318731 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:25:34.318759 kernel: GPT:17805311 != 25165823 Dec 13 01:25:34.318785 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:25:34.318818 kernel: GPT:17805311 != 25165823 Dec 13 01:25:34.318842 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:25:34.318866 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:34.318893 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 01:25:34.327379 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 01:25:34.377263 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447) Dec 13 01:25:34.380749 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (455) Dec 13 01:25:34.400856 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 13 01:25:34.408809 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Dec 13 01:25:34.415030 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Dec 13 01:25:34.415148 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 13 01:25:34.428788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:25:34.432432 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:25:34.458651 disk-uuid[550]: Primary Header is updated. Dec 13 01:25:34.458651 disk-uuid[550]: Secondary Entries is updated. Dec 13 01:25:34.458651 disk-uuid[550]: Secondary Header is updated. Dec 13 01:25:34.474253 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:34.500268 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:34.523448 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:35.517267 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:25:35.517903 disk-uuid[551]: The operation has completed successfully. Dec 13 01:25:35.587156 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:25:35.587318 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:25:35.623438 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:25:35.643637 sh[568]: Success Dec 13 01:25:35.656364 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 01:25:35.736589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:25:35.743302 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:25:35.770709 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:25:35.812052 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be Dec 13 01:25:35.812120 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:25:35.812158 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:25:35.821479 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:25:35.828306 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:25:35.861264 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:25:35.865351 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:25:35.866280 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:25:35.871436 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:25:35.883412 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
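verity-setup.service above brings up /dev/mapper/usr with the sha256-avx2 implementation; dm-verity authenticates each 4 KiB data block of /usr against a hash tree whose root must match the verity.usrhash= value on the kernel command line. The sketch below is a deliberately simplified, single-level illustration of that idea only: the real on-disk format adds a superblock, a per-device salt, and a multi-level tree, so it is not byte-compatible with veritysetup.

    import hashlib

    BLOCK = 4096  # dm-verity's default data block size

    def toy_verity_root(data: bytes) -> str:
        """Hash every 4 KiB block, then hash the concatenated block hashes.

        Simplified illustration only: real dm-verity salts each hash,
        builds as many tree levels as needed, and stores a superblock.
        """
        block_hashes = []
        for off in range(0, len(data), BLOCK):
            block = data[off:off + BLOCK].ljust(BLOCK, b"\0")
            block_hashes.append(hashlib.sha256(block).digest())
        return hashlib.sha256(b"".join(block_hashes)).hexdigest()

    if __name__ == "__main__":
        image = b"pretend this is the /usr partition image" * 1000
        print("toy root hash:", toy_verity_root(image))
        # Any single flipped bit in `image` changes the root hash, which
        # is why the kernel can verify /usr blocks lazily as they are read.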
Dec 13 01:25:35.942229 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:35.942316 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:25:35.942343 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:35.960423 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:25:35.960497 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:35.974583 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:25:35.990382 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:35.989883 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:25:36.017472 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:25:36.101336 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:36.108561 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:36.205985 systemd-networkd[751]: lo: Link UP Dec 13 01:25:36.206002 systemd-networkd[751]: lo: Gained carrier Dec 13 01:25:36.209124 systemd-networkd[751]: Enumeration completed Dec 13 01:25:36.230561 ignition[672]: Ignition 2.19.0 Dec 13 01:25:36.209301 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:36.230570 ignition[672]: Stage: fetch-offline Dec 13 01:25:36.209861 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:36.230609 ignition[672]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:36.209868 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:36.230620 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:36.212075 systemd-networkd[751]: eth0: Link UP Dec 13 01:25:36.230748 ignition[672]: parsed url from cmdline: "" Dec 13 01:25:36.212083 systemd-networkd[751]: eth0: Gained carrier Dec 13 01:25:36.230754 ignition[672]: no config URL provided Dec 13 01:25:36.212097 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:36.230764 ignition[672]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:36.221328 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.80/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:25:36.230776 ignition[672]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:36.224483 systemd[1]: Reached target network.target - Network. Dec 13 01:25:36.230783 ignition[672]: failed to fetch config: resource requires networking Dec 13 01:25:36.233709 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:36.231171 ignition[672]: Ignition finished successfully Dec 13 01:25:36.247523 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
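The DHCP lease above is the usual GCE shape: the interface gets 10.128.0.80/32, and the gateway 10.128.0.1 is not inside that prefix, so it can only be reached via an on-link route. A short check with the standard-library ipaddress module, using the addresses exactly as reported in the log:

    import ipaddress

    # Addresses as reported by systemd-networkd above.
    iface = ipaddress.ip_interface("10.128.0.80/32")
    gateway = ipaddress.ip_address("10.128.0.1")

    print("interface network:", iface.network)            # 10.128.0.80/32
    print("gateway inside it:", gateway in iface.network)  # False
    # Because the /32 contains only the host itself, the gateway has to
    # be installed as an on-link route; roughly speaking, that is what
    # systemd-networkd does so the default route still works.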
Dec 13 01:25:36.281504 ignition[760]: Ignition 2.19.0 Dec 13 01:25:36.292471 unknown[760]: fetched base config from "system" Dec 13 01:25:36.281512 ignition[760]: Stage: fetch Dec 13 01:25:36.292484 unknown[760]: fetched base config from "system" Dec 13 01:25:36.281708 ignition[760]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:36.292494 unknown[760]: fetched user config from "gcp" Dec 13 01:25:36.281720 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:36.294778 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:25:36.281834 ignition[760]: parsed url from cmdline: "" Dec 13 01:25:36.323424 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:25:36.281841 ignition[760]: no config URL provided Dec 13 01:25:36.360672 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:25:36.281848 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:25:36.384566 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:25:36.281858 ignition[760]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:25:36.430755 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:25:36.281880 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 01:25:36.438631 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:36.285708 ignition[760]: GET result: OK Dec 13 01:25:36.453510 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:25:36.285810 ignition[760]: parsing config with SHA512: 9417f4000f1743cb12e86ae2d5f8f94d1c1100752137d59fbbaafc59461f44a1aabdd2318e8fd13d15c99552ca3da8eac3f5389f5d5aa086f48d0b8e8d4b994a Dec 13 01:25:36.471495 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:36.293016 ignition[760]: fetch: fetch complete Dec 13 01:25:36.500461 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:25:36.293024 ignition[760]: fetch: fetch passed Dec 13 01:25:36.507490 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:36.293074 ignition[760]: Ignition finished successfully Dec 13 01:25:36.538448 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:25:36.344420 ignition[766]: Ignition 2.19.0 Dec 13 01:25:36.344429 ignition[766]: Stage: kargs Dec 13 01:25:36.344625 ignition[766]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:36.344637 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:36.345797 ignition[766]: kargs: kargs passed Dec 13 01:25:36.345853 ignition[766]: Ignition finished successfully Dec 13 01:25:36.424940 ignition[772]: Ignition 2.19.0 Dec 13 01:25:36.424950 ignition[772]: Stage: disks Dec 13 01:25:36.425130 ignition[772]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:36.425141 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:36.426141 ignition[772]: disks: disks passed Dec 13 01:25:36.426216 ignition[772]: Ignition finished successfully Dec 13 01:25:36.574995 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 01:25:36.786873 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:25:36.806365 systemd[1]: Mounting sysroot.mount - /sysroot... 
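The fetch stage above pulls the instance's user-data from the GCE metadata server and logs the SHA512 of what it parsed. Below is a minimal sketch of the same request using only the standard library; the URL and the required Metadata-Flavor header are as shown in the log, but the snippet is an illustration rather than Ignition's implementation, only works from inside a GCE instance, and raises a 404 error if no user-data attribute is set.

    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")

    def fetch_user_data() -> bytes:
        # The metadata server rejects requests without this header.
        req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read()

    if __name__ == "__main__":
        body = fetch_user_data()  # HTTPError 404 if no user-data is set
        print("fetched", len(body), "bytes")
        print("sha512:", hashlib.sha512(body).hexdigest())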
Dec 13 01:25:36.925660 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none. Dec 13 01:25:36.926542 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:25:36.936004 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:25:36.971363 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:25:36.980737 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:25:37.005919 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:25:37.074392 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Dec 13 01:25:37.074442 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:37.074479 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:25:37.074502 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:37.074517 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:25:37.074532 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:37.006017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:25:37.006057 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:37.062386 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:37.083925 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:25:37.116578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:25:37.238095 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:25:37.248382 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:25:37.258493 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:25:37.268402 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:25:37.262430 systemd-networkd[751]: eth0: Gained IPv6LL Dec 13 01:25:37.397754 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:37.405372 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:25:37.437284 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:37.447440 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:25:37.456330 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:25:37.500126 ignition[901]: INFO : Ignition 2.19.0 Dec 13 01:25:37.500126 ignition[901]: INFO : Stage: mount Dec 13 01:25:37.500126 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:37.500126 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:37.500126 ignition[901]: INFO : mount: mount passed Dec 13 01:25:37.500126 ignition[901]: INFO : Ignition finished successfully Dec 13 01:25:37.502283 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:25:37.508828 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:25:37.528370 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:25:37.943514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 01:25:37.992389 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (913) Dec 13 01:25:37.992427 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb Dec 13 01:25:37.992452 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:25:37.992477 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:25:38.005938 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 01:25:38.006020 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:25:38.009429 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 01:25:38.057050 ignition[930]: INFO : Ignition 2.19.0 Dec 13 01:25:38.057050 ignition[930]: INFO : Stage: files Dec 13 01:25:38.071356 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:38.071356 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:38.071356 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:25:38.071356 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:25:38.071356 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:25:38.071356 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:25:38.071356 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:25:38.071356 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:25:38.069329 unknown[930]: wrote ssh authorized keys file for user: core Dec 13 01:25:38.170330 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:25:38.170330 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:25:38.216954 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:25:38.451202 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:25:38.451202 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:25:38.451202 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 01:25:38.731677 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:25:38.877808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:25:38.893379 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 01:25:39.124427 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:25:39.413492 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 01:25:39.413492 ignition[930]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:25:39.451380 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:39.451380 ignition[930]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:25:39.451380 ignition[930]: INFO : files: files passed Dec 13 01:25:39.451380 ignition[930]: INFO : Ignition finished successfully Dec 13 01:25:39.419451 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:25:39.439517 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Dec 13 01:25:39.473448 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:25:39.503737 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:25:39.682489 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:39.682489 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:39.503903 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:25:39.745366 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:25:39.524262 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:39.527603 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:25:39.563428 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:25:39.642136 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:25:39.642260 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:25:39.661959 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:25:39.682355 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:25:39.699449 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:25:39.705415 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:25:39.785070 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:39.809410 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:25:39.831499 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:39.850539 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:39.872573 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:25:39.893533 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:25:39.893735 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:25:39.923682 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:25:39.934664 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:25:39.952652 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:25:39.969646 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:25:39.986644 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:25:40.004657 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:25:40.023643 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:25:40.040652 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:25:40.058732 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:25:40.076645 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:25:40.094595 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:25:40.094784 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:25:40.128641 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Dec 13 01:25:40.139632 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:40.157588 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:25:40.157770 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:40.175601 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:25:40.175785 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:25:40.226524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:25:40.226749 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:25:40.320362 ignition[983]: INFO : Ignition 2.19.0 Dec 13 01:25:40.320362 ignition[983]: INFO : Stage: umount Dec 13 01:25:40.320362 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:25:40.320362 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 01:25:40.320362 ignition[983]: INFO : umount: umount passed Dec 13 01:25:40.320362 ignition[983]: INFO : Ignition finished successfully Dec 13 01:25:40.237704 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:25:40.237880 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:25:40.262585 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:25:40.278501 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:25:40.328357 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:25:40.328572 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:40.339610 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:25:40.339832 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:25:40.368955 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:25:40.369920 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:25:40.370034 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:25:40.385921 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:25:40.386035 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:25:40.412009 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:25:40.412195 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:25:40.420651 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:25:40.420712 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:25:40.447530 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:25:40.447600 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:25:40.465532 systemd[1]: Stopped target network.target - Network. Dec 13 01:25:40.483454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:25:40.483542 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:25:40.491561 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:25:40.509524 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:25:40.513357 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:40.526534 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:25:40.544533 systemd[1]: Stopped target sockets.target - Socket Units. 
Dec 13 01:25:40.561577 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:25:40.561638 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:25:40.576578 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:25:40.576634 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:25:40.593561 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:25:40.593629 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:25:40.610582 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:25:40.610646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:25:40.627568 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:25:40.627632 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:25:40.654721 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:25:40.659284 systemd-networkd[751]: eth0: DHCPv6 lease lost Dec 13 01:25:40.672545 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:25:40.691973 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:25:40.692097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:25:40.711367 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:25:40.711522 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:25:40.727863 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:25:40.727981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:25:40.739672 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:25:40.739741 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:40.760356 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:25:40.780466 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:25:40.780537 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:25:40.797589 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:25:40.797648 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:41.275372 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Dec 13 01:25:40.825533 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:25:40.825599 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:40.845537 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:25:40.845607 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:40.858706 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:40.876953 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:25:40.877112 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:25:40.900424 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:25:40.900603 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:25:40.911579 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:25:40.911626 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Dec 13 01:25:40.936550 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:25:40.936612 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:25:40.972472 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:25:40.972654 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:25:40.998539 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:25:40.998609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:25:41.049447 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:25:41.061514 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:25:41.061584 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:41.078588 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:25:41.078650 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:41.108541 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:25:41.108623 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:41.127523 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:25:41.127588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:41.138036 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:25:41.138329 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:25:41.155923 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:25:41.156033 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:25:41.174740 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:25:41.198538 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:25:41.232387 systemd[1]: Switching root. Dec 13 01:25:41.602321 systemd-journald[183]: Journal stopped Dec 13 01:25:44.033738 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:25:44.033793 kernel: SELinux: policy capability open_perms=1 Dec 13 01:25:44.033815 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:25:44.033833 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:25:44.033849 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:25:44.033867 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:25:44.033888 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:25:44.033910 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:25:44.033929 kernel: audit: type=1403 audit(1734053141.952:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:25:44.033951 systemd[1]: Successfully loaded SELinux policy in 88.938ms. Dec 13 01:25:44.033974 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.626ms. Dec 13 01:25:44.033997 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:25:44.034017 systemd[1]: Detected virtualization google. 
Dec 13 01:25:44.034038 systemd[1]: Detected architecture x86-64. Dec 13 01:25:44.034064 systemd[1]: Detected first boot. Dec 13 01:25:44.034087 systemd[1]: Initializing machine ID from random generator. Dec 13 01:25:44.034109 zram_generator::config[1025]: No configuration found. Dec 13 01:25:44.034132 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:25:44.034153 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:25:44.034180 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:25:44.034202 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:25:44.034225 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:25:44.034263 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:25:44.034294 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:25:44.034317 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:25:44.034338 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:25:44.034364 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:25:44.034384 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:25:44.034403 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:25:44.034423 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:25:44.034447 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:25:44.034467 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:25:44.034497 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:25:44.034517 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:25:44.034544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:25:44.034566 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:25:44.034589 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:25:44.034611 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:25:44.034633 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:25:44.034655 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:25:44.034685 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:25:44.034707 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:25:44.034729 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:25:44.034756 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:25:44.034778 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:25:44.034800 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:25:44.034822 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:25:44.034844 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:25:44.034866 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
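"Initializing machine ID from random generator" above is the first-boot path: /etc/machine-id is empty, so systemd generates a random 128-bit ID and writes it as 32 lowercase hex characters. A rough sketch of the shape of that value follows; systemd's real generator can also take the ID from the hypervisor or container manager and applies details from machine-id(5) that this sketch skips.

    import uuid

    # Rough sketch of what "Initializing machine ID from random generator"
    # produces: a random 128-bit value rendered as 32 lowercase hex
    # characters (see machine-id(5)).
    machine_id = uuid.uuid4().hex
    assert len(machine_id) == 32
    print(machine_id)
    # Later in this log the runtime journal lives under
    # /run/log/journal/61d0a2a86ea349ae8a5b3f6472f02ef5 -- that directory
    # name is exactly the machine ID generated at this step.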
Dec 13 01:25:44.034888 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:25:44.034915 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:25:44.034937 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:25:44.034960 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:25:44.034983 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:25:44.035005 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:25:44.035031 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:25:44.035054 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:25:44.035077 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:25:44.035100 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:25:44.035122 systemd[1]: Reached target machines.target - Containers. Dec 13 01:25:44.035145 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:25:44.035169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:44.035192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:25:44.035218 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:25:44.035270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:44.035293 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:25:44.035316 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:44.035339 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:25:44.035360 kernel: fuse: init (API version 7.39) Dec 13 01:25:44.035382 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:44.035405 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:25:44.035432 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:25:44.035454 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:25:44.035477 kernel: ACPI: bus type drm_connector registered Dec 13 01:25:44.035504 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:25:44.035527 kernel: loop: module loaded Dec 13 01:25:44.035547 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:25:44.035570 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:25:44.035593 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:25:44.035615 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:25:44.035674 systemd-journald[1112]: Collecting audit messages is disabled. Dec 13 01:25:44.035720 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Dec 13 01:25:44.035744 systemd-journald[1112]: Journal started Dec 13 01:25:44.035792 systemd-journald[1112]: Runtime Journal (/run/log/journal/61d0a2a86ea349ae8a5b3f6472f02ef5) is 8.0M, max 148.7M, 140.7M free. Dec 13 01:25:42.809321 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:25:42.828885 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:25:42.829466 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:25:44.077402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:25:44.077488 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:25:44.077519 systemd[1]: Stopped verity-setup.service. Dec 13 01:25:44.114426 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:25:44.124277 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:25:44.134708 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:25:44.145619 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:25:44.155630 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:25:44.166622 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:25:44.176604 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:25:44.186562 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:25:44.197867 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:25:44.210844 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:25:44.222772 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:25:44.223007 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:25:44.234669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:44.234912 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:44.246675 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:25:44.246897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:25:44.256631 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:44.256857 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:44.268637 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:25:44.268860 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:25:44.278614 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:44.278825 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:44.288697 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:25:44.298614 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:25:44.309667 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:25:44.320644 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:25:44.344942 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:25:44.371389 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Dec 13 01:25:44.383622 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:25:44.393360 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:25:44.393426 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:25:44.404612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:25:44.422449 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:25:44.439447 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:25:44.449512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:44.457465 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:25:44.480321 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:25:44.491387 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:25:44.504389 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:25:44.506757 systemd-journald[1112]: Time spent on flushing to /var/log/journal/61d0a2a86ea349ae8a5b3f6472f02ef5 is 91.974ms for 932 entries. Dec 13 01:25:44.506757 systemd-journald[1112]: System Journal (/var/log/journal/61d0a2a86ea349ae8a5b3f6472f02ef5) is 8.0M, max 584.8M, 576.8M free. Dec 13 01:25:44.626564 systemd-journald[1112]: Received client request to flush runtime journal. Dec 13 01:25:44.626628 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:25:44.522447 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:25:44.528686 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:25:44.548526 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:25:44.568217 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:25:44.587092 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:25:44.603704 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:25:44.620711 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:25:44.632928 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:25:44.644756 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:25:44.656765 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:25:44.668795 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:25:44.688862 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:25:44.692976 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 01:25:44.694377 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Dec 13 01:25:44.723065 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Dec 13 01:25:44.723327 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:25:44.734870 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:25:44.751115 kernel: loop1: detected capacity change from 0 to 205544 Dec 13 01:25:44.760314 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:25:44.771003 udevadm[1145]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:25:44.783937 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:25:44.784908 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:25:44.847091 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:25:44.880458 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:25:44.900293 kernel: loop2: detected capacity change from 0 to 54824 Dec 13 01:25:44.935900 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Dec 13 01:25:44.936299 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Dec 13 01:25:44.943888 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:25:44.989691 kernel: loop3: detected capacity change from 0 to 140768 Dec 13 01:25:45.075295 kernel: loop4: detected capacity change from 0 to 142488 Dec 13 01:25:45.128300 kernel: loop5: detected capacity change from 0 to 205544 Dec 13 01:25:45.174297 kernel: loop6: detected capacity change from 0 to 54824 Dec 13 01:25:45.208276 kernel: loop7: detected capacity change from 0 to 140768 Dec 13 01:25:45.262991 (sd-merge)[1170]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Dec 13 01:25:45.265822 (sd-merge)[1170]: Merged extensions into '/usr'. Dec 13 01:25:45.286146 systemd[1]: Reloading requested from client PID 1143 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:25:45.286172 systemd[1]: Reloading... Dec 13 01:25:45.439899 zram_generator::config[1199]: No configuration found. Dec 13 01:25:45.642267 ldconfig[1138]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:25:45.693189 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:25:45.797710 systemd[1]: Reloading finished in 510 ms. Dec 13 01:25:45.827409 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:25:45.837879 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:25:45.861560 systemd[1]: Starting ensure-sysext.service... Dec 13 01:25:45.877508 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:25:45.900400 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:25:45.900421 systemd[1]: Reloading... Dec 13 01:25:45.924940 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:25:45.925664 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
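The (sd-merge) lines above show systemd-sysext attaching the containerd-flatcar, docker-flatcar, kubernetes and oem-gce extension images as loop devices and merging them over /usr. The sketch below covers only the lookup half of that process, listing candidate images in the directories systemd-sysext is documented to search (the list is an assumption from systemd-sysext(8)); the merge itself, an overlayfs mount onto /usr and /opt, is not reproduced here.

    from pathlib import Path

    # Directories systemd-sysext searches for extension images, per
    # systemd-sysext(8) (assumed here); the actual merge overlays their
    # contents onto /usr and /opt via overlayfs, which this sketch skips.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions",
                   "/var/lib/extensions", "/usr/lib/extensions"]

    def list_sysexts():
        found = []
        for d in SEARCH_DIRS:
            p = Path(d)
            if not p.is_dir():
                continue
            for entry in sorted(p.iterdir()):
                # Both plain directories and *.raw disk images are accepted.
                if entry.is_dir() or entry.suffix == ".raw":
                    found.append(entry)
        return found

    if __name__ == "__main__":
        for ext in list_sysexts():
            print(ext)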
Dec 13 01:25:45.927989 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:25:45.928623 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Dec 13 01:25:45.928762 systemd-tmpfiles[1237]: ACLs are not supported, ignoring. Dec 13 01:25:45.934684 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:25:45.934703 systemd-tmpfiles[1237]: Skipping /boot Dec 13 01:25:45.950982 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:25:45.951010 systemd-tmpfiles[1237]: Skipping /boot Dec 13 01:25:46.030322 zram_generator::config[1264]: No configuration found. Dec 13 01:25:46.157584 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:25:46.222745 systemd[1]: Reloading finished in 321 ms. Dec 13 01:25:46.241726 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:25:46.257828 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:25:46.283503 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:25:46.307472 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:25:46.328628 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:25:46.348643 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:25:46.366500 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:25:46.382523 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:25:46.391876 augenrules[1327]: No rules Dec 13 01:25:46.394478 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:25:46.406000 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:25:46.436787 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Dec 13 01:25:46.437750 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:25:46.438289 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:46.447631 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:46.463765 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:46.482683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:46.492502 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:46.503676 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:25:46.522174 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:25:46.532364 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:25:46.536040 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 01:25:46.555789 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:25:46.569208 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:46.569677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:25:46.582318 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:46.583563 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:46.596995 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:46.598353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:46.610384 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:25:46.633301 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:25:46.670421 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1338) Dec 13 01:25:46.682518 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1338) Dec 13 01:25:46.684137 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:25:46.699570 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 01:25:46.706925 systemd[1]: Finished ensure-sysext.service. Dec 13 01:25:46.719499 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:25:46.719816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:25:46.725633 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:25:46.749437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:25:46.771508 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:25:46.793539 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:25:46.800262 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:25:46.821505 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:25:46.821415 systemd[1]: Starting setup-oem.service - Setup OEM... Dec 13 01:25:46.829509 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:25:46.841885 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:25:46.853274 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 01:25:46.859421 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:25:46.866267 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 01:25:46.874412 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:25:46.874459 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:25:46.875522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:25:46.876056 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 01:25:46.892852 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 01:25:46.897149 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:25:46.899315 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:25:46.909844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:25:46.910082 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:25:46.922897 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 01:25:46.930132 systemd-resolved[1320]: Positive Trust Anchors: Dec 13 01:25:46.930815 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:25:46.931031 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:25:46.931726 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:25:46.931875 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:25:46.942029 systemd-resolved[1320]: Defaulting to hostname 'linux'. Dec 13 01:25:46.949261 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:25:46.952002 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:25:46.982314 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:25:46.994447 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:25:46.994565 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:25:46.995323 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:25:47.007411 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1356) Dec 13 01:25:47.026491 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Dec 13 01:25:47.064026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:25:47.129987 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:25:47.129862 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 01:25:47.154212 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:25:47.170142 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:25:47.171138 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Dec 13 01:25:47.182529 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Dec 13 01:25:47.190890 systemd-networkd[1383]: lo: Link UP Dec 13 01:25:47.190902 systemd-networkd[1383]: lo: Gained carrier Dec 13 01:25:47.198308 systemd-networkd[1383]: Enumeration completed Dec 13 01:25:47.198944 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:47.198951 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:25:47.199629 systemd-networkd[1383]: eth0: Link UP Dec 13 01:25:47.199636 systemd-networkd[1383]: eth0: Gained carrier Dec 13 01:25:47.199661 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:25:47.200355 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:25:47.200596 systemd[1]: Reached target network.target - Network. Dec 13 01:25:47.209444 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:25:47.216674 lvm[1413]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:25:47.215319 systemd-networkd[1383]: eth0: DHCPv4 address 10.128.0.80/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 01:25:47.219796 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:25:47.265872 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:25:47.267072 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:25:47.274654 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:25:47.284752 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:25:47.297373 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:25:47.309612 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:25:47.320494 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:25:47.331438 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:25:47.342565 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:25:47.352496 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:25:47.363365 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:25:47.374341 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:25:47.374397 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:25:47.382347 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:25:47.392614 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:25:47.403962 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:25:47.423094 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:25:47.433270 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:25:47.444597 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:25:47.455150 systemd[1]: Reached target sockets.target - Socket Units. 
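systemd-networkd matched eth0 against the catch-all /usr/lib/systemd/network/zz-default.network and obtained 10.128.0.80/32 over DHCP. As an illustration of the kind of match the log refers to, an equivalent interface-specific unit would look like this (the file name is assumed; the shipped zz-default.network already provides this behaviour):

    # Illustrative .network unit only; not required on this node.
    sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=ipv4
    EOF
    sudo systemctl restart systemd-networkd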
Dec 13 01:25:47.465360 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:25:47.473411 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:25:47.473459 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:25:47.485369 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:25:47.500466 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:25:47.517400 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:25:47.532111 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:25:47.559486 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:25:47.567638 jq[1428]: false Dec 13 01:25:47.569400 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:25:47.577467 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:25:47.594475 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:25:47.612452 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:25:47.634513 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:25:47.636614 coreos-metadata[1426]: Dec 13 01:25:47.636 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Dec 13 01:25:47.642132 coreos-metadata[1426]: Dec 13 01:25:47.641 INFO Fetch successful Dec 13 01:25:47.642132 coreos-metadata[1426]: Dec 13 01:25:47.641 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Dec 13 01:25:47.642132 coreos-metadata[1426]: Dec 13 01:25:47.641 INFO Fetch successful Dec 13 01:25:47.642132 coreos-metadata[1426]: Dec 13 01:25:47.642 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Dec 13 01:25:47.643072 coreos-metadata[1426]: Dec 13 01:25:47.642 INFO Fetch successful Dec 13 01:25:47.643072 coreos-metadata[1426]: Dec 13 01:25:47.642 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Dec 13 01:25:47.644705 coreos-metadata[1426]: Dec 13 01:25:47.643 INFO Fetch successful Dec 13 01:25:47.655618 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
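The coreos-metadata fetches above go to the GCE metadata server at 169.254.169.254, which only answers requests that carry the Metadata-Flavor header. The same endpoints can be queried by hand:

    # Manual equivalent of the hostname and IP fetches logged above.
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/hostname
    curl -s -H 'Metadata-Flavor: Google' \
      http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip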
Dec 13 01:25:47.656819 extend-filesystems[1429]: Found loop4 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found loop5 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found loop6 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found loop7 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found sda Dec 13 01:25:47.673413 extend-filesystems[1429]: Found sda1 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found sda2 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found sda3 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found usr Dec 13 01:25:47.673413 extend-filesystems[1429]: Found sda4 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found sda6 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found sda7 Dec 13 01:25:47.673413 extend-filesystems[1429]: Found sda9 Dec 13 01:25:47.673413 extend-filesystems[1429]: Checking size of /dev/sda9 Dec 13 01:25:47.867618 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 01:25:47.867676 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 01:25:47.867713 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1348) Dec 13 01:25:47.672759 dbus-daemon[1427]: [system] SELinux support is enabled Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: ---------------------------------------------------- Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: corporation. 
Support and training for ntp-4 are Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: available at https://www.nwtime.org/support Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: ---------------------------------------------------- Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: proto: precision = 0.114 usec (-23) Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: basedate set to 2024-11-30 Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: gps base set to 2024-12-01 (week 2343) Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: Listen normally on 3 eth0 10.128.0.80:123 Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: Listen normally on 4 lo [::1]:123 Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:50%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:50%2#123 Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:50%2 Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: Listening on routing socket on fd #21 for interface updates Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:25:47.868213 ntpd[1433]: 13 Dec 01:25:47 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:25:47.869524 extend-filesystems[1429]: Resized partition /dev/sda9 Dec 13 01:25:47.675389 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:25:47.675003 dbus-daemon[1427]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1383 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:25:47.889762 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:25:47.889762 extend-filesystems[1451]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 01:25:47.889762 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 01:25:47.889762 extend-filesystems[1451]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 01:25:47.695095 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 01:25:47.679603 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:36:14 UTC 2024 (1): Starting Dec 13 01:25:47.935067 extend-filesystems[1429]: Resized filesystem in /dev/sda9 Dec 13 01:25:47.695989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
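extend-filesystems grew the mounted root filesystem on /dev/sda9 online from 1617920 to 2538491 4k blocks, as shown in the resize2fs output above. Assuming the partition itself has already been enlarged, the manual equivalent is a plain resize2fs run; ext4 supports growing while mounted:

    # With no explicit size, resize2fs grows the filesystem to fill the partition.
    sudo resize2fs /dev/sda9
    df -h /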
Dec 13 01:25:47.944528 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:25:47.944716 update_engine[1453]: I20241213 01:25:47.817225 1453 main.cc:92] Flatcar Update Engine starting Dec 13 01:25:47.944716 update_engine[1453]: I20241213 01:25:47.822993 1453 update_check_scheduler.cc:74] Next update check in 3m25s Dec 13 01:25:47.679637 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:25:47.704393 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:25:47.679654 ntpd[1433]: ---------------------------------------------------- Dec 13 01:25:47.734125 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:25:47.679668 ntpd[1433]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:25:47.749579 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:25:47.679681 ntpd[1433]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:25:47.784102 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:25:47.679695 ntpd[1433]: corporation. Support and training for ntp-4 are Dec 13 01:25:47.785323 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:25:47.949528 jq[1458]: true Dec 13 01:25:47.679710 ntpd[1433]: available at https://www.nwtime.org/support Dec 13 01:25:47.785778 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:25:47.679724 ntpd[1433]: ---------------------------------------------------- Dec 13 01:25:47.786010 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:25:47.681436 ntpd[1433]: proto: precision = 0.114 usec (-23) Dec 13 01:25:47.821486 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:25:47.682535 ntpd[1433]: basedate set to 2024-11-30 Dec 13 01:25:47.822311 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:25:47.682558 ntpd[1433]: gps base set to 2024-12-01 (week 2343) Dec 13 01:25:47.840840 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:25:47.685127 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:25:47.841329 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:25:47.685172 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:25:47.945960 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Dec 13 01:25:47.685393 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:25:47.685433 ntpd[1433]: Listen normally on 3 eth0 10.128.0.80:123 Dec 13 01:25:47.685472 ntpd[1433]: Listen normally on 4 lo [::1]:123 Dec 13 01:25:47.685517 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:50%2#123 flags 0x11 failed: Cannot assign requested address Dec 13 01:25:47.685540 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:50%2#123 Dec 13 01:25:47.685555 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:50%2 Dec 13 01:25:47.685585 ntpd[1433]: Listening on routing socket on fd #21 for interface updates Dec 13 01:25:47.687013 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:25:47.687047 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:25:47.954088 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:25:47.954526 systemd-logind[1446]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 01:25:47.954564 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:25:47.954866 systemd-logind[1446]: New seat seat0. Dec 13 01:25:47.959570 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:25:47.964887 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:25:47.967074 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:25:48.003816 jq[1475]: true Dec 13 01:25:48.008602 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:25:48.024612 tar[1462]: linux-amd64/helm Dec 13 01:25:48.045056 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:25:48.059075 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:25:48.059721 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:25:48.059989 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:25:48.083352 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:25:48.094423 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:25:48.094707 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:25:48.118794 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:25:48.139762 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:25:48.158287 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:25:48.161334 bash[1504]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:25:48.174636 systemd[1]: Started sshd@0-10.128.0.80:22-147.75.109.163:57210.service - OpenSSH per-connection server daemon (147.75.109.163:57210). Dec 13 01:25:48.187547 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:25:48.214693 systemd[1]: Starting sshkeys.service... Dec 13 01:25:48.222028 systemd[1]: issuegen.service: Deactivated successfully. 
Dec 13 01:25:48.226113 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:25:48.247670 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:25:48.304550 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:25:48.328789 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 01:25:48.353095 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:25:48.353327 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:25:48.355130 dbus-daemon[1427]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1499 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:25:48.364366 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:25:48.407997 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:25:48.432771 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 01:25:48.438031 coreos-metadata[1515]: Dec 13 01:25:48.437 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 01:25:48.439416 coreos-metadata[1515]: Dec 13 01:25:48.439 INFO Fetch failed with 404: resource not found Dec 13 01:25:48.439416 coreos-metadata[1515]: Dec 13 01:25:48.439 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 01:25:48.440004 coreos-metadata[1515]: Dec 13 01:25:48.439 INFO Fetch successful Dec 13 01:25:48.440004 coreos-metadata[1515]: Dec 13 01:25:48.439 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 01:25:48.446389 coreos-metadata[1515]: Dec 13 01:25:48.443 INFO Fetch failed with 404: resource not found Dec 13 01:25:48.446389 coreos-metadata[1515]: Dec 13 01:25:48.443 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 01:25:48.446389 coreos-metadata[1515]: Dec 13 01:25:48.445 INFO Fetch failed with 404: resource not found Dec 13 01:25:48.446389 coreos-metadata[1515]: Dec 13 01:25:48.446 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 01:25:48.449918 coreos-metadata[1515]: Dec 13 01:25:48.449 INFO Fetch successful Dec 13 01:25:48.451031 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:25:48.461999 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:25:48.462815 systemd-networkd[1383]: eth0: Gained IPv6LL Dec 13 01:25:48.466013 unknown[1515]: wrote ssh authorized keys file for user: core Dec 13 01:25:48.474849 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:25:48.489371 locksmithd[1503]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:25:48.513267 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:25:48.534488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:25:48.549308 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:25:48.551144 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:25:48.569372 systemd[1]: Starting oem-gce.service - GCE Linux Agent... 
Dec 13 01:25:48.580621 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:25:48.593124 polkitd[1524]: Started polkitd version 121 Dec 13 01:25:48.612346 systemd[1]: Finished sshkeys.service. Dec 13 01:25:48.633145 init.sh[1537]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 01:25:48.633145 init.sh[1537]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 01:25:48.633145 init.sh[1537]: + /usr/bin/google_instance_setup Dec 13 01:25:48.650930 polkitd[1524]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:25:48.651018 polkitd[1524]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:25:48.656131 polkitd[1524]: Finished loading, compiling and executing 2 rules Dec 13 01:25:48.661625 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:25:48.661867 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:25:48.662799 polkitd[1524]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:25:48.720393 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:25:48.722729 sshd[1507]: Accepted publickey for core from 147.75.109.163 port 57210 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:25:48.726949 sshd[1507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:48.742218 systemd-hostnamed[1499]: Hostname set to (transient) Dec 13 01:25:48.747422 systemd-resolved[1320]: System hostname changed to 'ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal'. Dec 13 01:25:48.761611 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:25:48.780666 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:25:48.801710 systemd-logind[1446]: New session 1 of user core. Dec 13 01:25:48.831106 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:25:48.839266 containerd[1469]: time="2024-12-13T01:25:48.839119343Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:25:48.856672 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:25:48.894919 (systemd)[1559]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:25:48.965807 containerd[1469]: time="2024-12-13T01:25:48.965456155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:48.973017 containerd[1469]: time="2024-12-13T01:25:48.972409267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:48.973017 containerd[1469]: time="2024-12-13T01:25:48.972484311Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:25:48.973017 containerd[1469]: time="2024-12-13T01:25:48.972534855Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:25:48.973017 containerd[1469]: time="2024-12-13T01:25:48.972814641Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Dec 13 01:25:48.973017 containerd[1469]: time="2024-12-13T01:25:48.972872641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:48.973892 containerd[1469]: time="2024-12-13T01:25:48.972981102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:48.973892 containerd[1469]: time="2024-12-13T01:25:48.973327114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:48.975432 containerd[1469]: time="2024-12-13T01:25:48.975107331Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:48.975432 containerd[1469]: time="2024-12-13T01:25:48.975145833Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:48.975432 containerd[1469]: time="2024-12-13T01:25:48.975191948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:48.975432 containerd[1469]: time="2024-12-13T01:25:48.975211876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:48.975908 containerd[1469]: time="2024-12-13T01:25:48.975686346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:48.977026 containerd[1469]: time="2024-12-13T01:25:48.976676923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:25:48.977490 containerd[1469]: time="2024-12-13T01:25:48.977424935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:25:48.977490 containerd[1469]: time="2024-12-13T01:25:48.977460275Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:25:48.979511 containerd[1469]: time="2024-12-13T01:25:48.977833744Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:25:48.979511 containerd[1469]: time="2024-12-13T01:25:48.977927599Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:25:48.994482 containerd[1469]: time="2024-12-13T01:25:48.994435893Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:25:48.996483 containerd[1469]: time="2024-12-13T01:25:48.996350027Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:25:48.998693 containerd[1469]: time="2024-12-13T01:25:48.998312347Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:25:48.998693 containerd[1469]: time="2024-12-13T01:25:48.998383945Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Dec 13 01:25:48.998693 containerd[1469]: time="2024-12-13T01:25:48.998422828Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:25:48.998693 containerd[1469]: time="2024-12-13T01:25:48.998620999Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999629753Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999817726Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999845836Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999867856Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999892753Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999915883Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999937440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999960804Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:48.999984652Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:49.000006225Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:49.000028706Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:49.000048697Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:49.000080162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.002304 containerd[1469]: time="2024-12-13T01:25:49.000103201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000123883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000147090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000178536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000205003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000225593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000270614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000294254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000319913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000339873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000370036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000393156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000418983Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000451588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000471809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.004663 containerd[1469]: time="2024-12-13T01:25:49.000491029Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:25:49.014891 containerd[1469]: time="2024-12-13T01:25:49.000556586Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:25:49.014891 containerd[1469]: time="2024-12-13T01:25:49.000585747Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:25:49.014891 containerd[1469]: time="2024-12-13T01:25:49.000606528Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:25:49.014891 containerd[1469]: time="2024-12-13T01:25:49.000628787Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:25:49.014891 containerd[1469]: time="2024-12-13T01:25:49.000647465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.014891 containerd[1469]: time="2024-12-13T01:25:49.000668216Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:25:49.014891 containerd[1469]: time="2024-12-13T01:25:49.000691740Z" level=info msg="NRI interface is disabled by configuration." 
Dec 13 01:25:49.014891 containerd[1469]: time="2024-12-13T01:25:49.000714321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:25:49.007661 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.001201272Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.001454108Z" level=info msg="Connect containerd service" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.001534482Z" level=info msg="using legacy CRI server" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.001549315Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.001712874Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.002562657Z" level=error msg="failed to 
load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.004674450Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.004764380Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.005412312Z" level=info msg="Start subscribing containerd event" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.005497413Z" level=info msg="Start recovering state" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.005601854Z" level=info msg="Start event monitor" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.005620884Z" level=info msg="Start snapshots syncer" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.005638122Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.005658887Z" level=info msg="Start streaming server" Dec 13 01:25:49.021166 containerd[1469]: time="2024-12-13T01:25:49.007548360Z" level=info msg="containerd successfully booted in 0.169796s" Dec 13 01:25:49.172401 systemd[1559]: Queued start job for default target default.target. Dec 13 01:25:49.181190 systemd[1559]: Created slice app.slice - User Application Slice. Dec 13 01:25:49.181260 systemd[1559]: Reached target paths.target - Paths. Dec 13 01:25:49.181289 systemd[1559]: Reached target timers.target - Timers. Dec 13 01:25:49.185758 systemd[1559]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:25:49.231428 systemd[1559]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:25:49.231610 systemd[1559]: Reached target sockets.target - Sockets. Dec 13 01:25:49.231634 systemd[1559]: Reached target basic.target - Basic System. Dec 13 01:25:49.231699 systemd[1559]: Reached target default.target - Main User Target. Dec 13 01:25:49.231755 systemd[1559]: Startup finished in 319ms. Dec 13 01:25:49.232023 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:25:49.250557 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:25:49.485322 tar[1462]: linux-amd64/LICENSE Dec 13 01:25:49.485322 tar[1462]: linux-amd64/README.md Dec 13 01:25:49.523227 systemd[1]: Started sshd@1-10.128.0.80:22-147.75.109.163:57216.service - OpenSSH per-connection server daemon (147.75.109.163:57216). Dec 13 01:25:49.536453 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:25:49.713049 instance-setup[1544]: INFO Running google_set_multiqueue. Dec 13 01:25:49.731383 instance-setup[1544]: INFO Set channels for eth0 to 2. Dec 13 01:25:49.735137 instance-setup[1544]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 01:25:49.736815 instance-setup[1544]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 01:25:49.737576 instance-setup[1544]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 01:25:49.739449 instance-setup[1544]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 01:25:49.739828 instance-setup[1544]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
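The containerd error about /etc/cni/net.d above is expected at this point in boot: no CNI configuration has been installed yet, so the CRI plugin defers pod networking. Purely as an illustration of the file format the loader is looking for (values are assumptions, not what this node will end up using), a minimal bridge conflist looks like this:

    # Hypothetical CNI config; a real cluster normally receives this from its network add-on.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/10-bridge.conflist <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.85.0.0/16" }]] }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF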
Dec 13 01:25:49.742171 instance-setup[1544]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 01:25:49.742258 instance-setup[1544]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 13 01:25:49.743749 instance-setup[1544]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 01:25:49.751870 instance-setup[1544]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:25:49.756165 instance-setup[1544]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 01:25:49.758148 instance-setup[1544]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 01:25:49.758204 instance-setup[1544]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 01:25:49.781532 init.sh[1537]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 01:25:49.874868 sshd[1577]: Accepted publickey for core from 147.75.109.163 port 57216 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:25:49.877766 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:49.886411 systemd-logind[1446]: New session 2 of user core. Dec 13 01:25:49.892499 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:25:49.951929 startup-script[1608]: INFO Starting startup scripts. Dec 13 01:25:49.958341 startup-script[1608]: INFO No startup scripts found in metadata. Dec 13 01:25:49.958415 startup-script[1608]: INFO Finished running startup scripts. Dec 13 01:25:49.977851 init.sh[1537]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 01:25:49.977851 init.sh[1537]: + daemon_pids=() Dec 13 01:25:49.977851 init.sh[1537]: + for d in accounts clock_skew network Dec 13 01:25:49.978081 init.sh[1537]: + daemon_pids+=($!) Dec 13 01:25:49.978081 init.sh[1537]: + for d in accounts clock_skew network Dec 13 01:25:49.978687 init.sh[1537]: + daemon_pids+=($!) Dec 13 01:25:49.978687 init.sh[1537]: + for d in accounts clock_skew network Dec 13 01:25:49.978687 init.sh[1537]: + daemon_pids+=($!) Dec 13 01:25:49.978687 init.sh[1537]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 01:25:49.978687 init.sh[1537]: + /usr/bin/systemd-notify --ready Dec 13 01:25:49.979224 init.sh[1613]: + /usr/bin/google_clock_skew_daemon Dec 13 01:25:49.979608 init.sh[1612]: + /usr/bin/google_accounts_daemon Dec 13 01:25:49.979892 init.sh[1614]: + /usr/bin/google_network_daemon Dec 13 01:25:49.999872 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 13 01:25:50.014796 init.sh[1537]: + wait -n 1612 1613 1614 Dec 13 01:25:50.105525 sshd[1577]: pam_unix(sshd:session): session closed for user core Dec 13 01:25:50.112638 systemd[1]: sshd@1-10.128.0.80:22-147.75.109.163:57216.service: Deactivated successfully. Dec 13 01:25:50.117032 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:25:50.123690 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:25:50.126416 systemd-logind[1446]: Removed session 2. Dec 13 01:25:50.168597 systemd[1]: Started sshd@2-10.128.0.80:22-147.75.109.163:57228.service - OpenSSH per-connection server daemon (147.75.109.163:57228). Dec 13 01:25:50.340181 google-clock-skew[1613]: INFO Starting Google Clock Skew daemon. Dec 13 01:25:50.357826 google-clock-skew[1613]: INFO Clock drift token has changed: 0. Dec 13 01:25:50.377582 google-networking[1614]: INFO Starting Google Networking daemon. 
Dec 13 01:25:50.458213 groupadd[1630]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 01:25:50.462993 groupadd[1630]: group added to /etc/gshadow: name=google-sudoers Dec 13 01:25:50.499169 sshd[1620]: Accepted publickey for core from 147.75.109.163 port 57228 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:25:50.501448 sshd[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:25:50.509779 systemd-logind[1446]: New session 3 of user core. Dec 13 01:25:50.511489 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:25:50.526392 groupadd[1630]: new group: name=google-sudoers, GID=1000 Dec 13 01:25:50.555374 google-accounts[1612]: INFO Starting Google Accounts daemon. Dec 13 01:25:50.566846 google-accounts[1612]: WARNING OS Login not installed. Dec 13 01:25:50.568785 google-accounts[1612]: INFO Creating a new user account for 0. Dec 13 01:25:50.573407 init.sh[1639]: useradd: invalid user name '0': use --badname to ignore Dec 13 01:25:50.573704 google-accounts[1612]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 01:25:50.658871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:25:50.671485 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:25:50.680104 ntpd[1433]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:50%2]:123 Dec 13 01:25:50.680490 ntpd[1433]: 13 Dec 01:25:50 ntpd[1433]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:50%2]:123 Dec 13 01:25:50.681958 systemd[1]: Startup finished in 999ms (kernel) + 9.172s (initrd) + 8.806s (userspace) = 18.978s. Dec 13 01:25:50.691189 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:25:50.725177 sshd[1620]: pam_unix(sshd:session): session closed for user core Dec 13 01:25:50.731086 systemd[1]: sshd@2-10.128.0.80:22-147.75.109.163:57228.service: Deactivated successfully. Dec 13 01:25:50.733892 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:25:50.736814 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:25:50.738443 systemd-logind[1446]: Removed session 3. Dec 13 01:25:51.000217 systemd-resolved[1320]: Clock change detected. Flushing caches. Dec 13 01:25:51.000531 google-clock-skew[1613]: INFO Synced system time with hardware clock. Dec 13 01:25:51.627607 kubelet[1646]: E1213 01:25:51.627543 1646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:25:51.630442 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:25:51.630694 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:25:51.631117 systemd[1]: kubelet.service: Consumed 1.164s CPU time. Dec 13 01:26:00.875002 systemd[1]: Started sshd@3-10.128.0.80:22-147.75.109.163:60138.service - OpenSSH per-connection server daemon (147.75.109.163:60138). 
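The kubelet failure above is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, after which the scheduled restarts succeed on their own. For reference only, a skeletal KubeletConfiguration (placeholder values, not this node's eventual config) has this shape:

    # Shown only to illustrate the file kubelet expects; on a kubeadm node it is generated, not hand-written.
    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF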
Dec 13 01:26:01.156826 sshd[1661]: Accepted publickey for core from 147.75.109.163 port 60138 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:01.158747 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:01.163884 systemd-logind[1446]: New session 4 of user core. Dec 13 01:26:01.174401 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:26:01.372590 sshd[1661]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:01.377937 systemd[1]: sshd@3-10.128.0.80:22-147.75.109.163:60138.service: Deactivated successfully. Dec 13 01:26:01.380487 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:26:01.381488 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:26:01.382922 systemd-logind[1446]: Removed session 4. Dec 13 01:26:01.427990 systemd[1]: Started sshd@4-10.128.0.80:22-147.75.109.163:60152.service - OpenSSH per-connection server daemon (147.75.109.163:60152). Dec 13 01:26:01.668901 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:26:01.678633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:01.726997 sshd[1668]: Accepted publickey for core from 147.75.109.163 port 60152 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:01.728841 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:01.734580 systemd-logind[1446]: New session 5 of user core. Dec 13 01:26:01.742403 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:26:01.936493 sshd[1668]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:01.942415 systemd[1]: sshd@4-10.128.0.80:22-147.75.109.163:60152.service: Deactivated successfully. Dec 13 01:26:01.948377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:01.949458 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:26:01.951278 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:26:01.955696 (kubelet)[1681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:01.956433 systemd-logind[1446]: Removed session 5. Dec 13 01:26:02.000630 systemd[1]: Started sshd@5-10.128.0.80:22-147.75.109.163:60162.service - OpenSSH per-connection server daemon (147.75.109.163:60162). Dec 13 01:26:02.022593 kubelet[1681]: E1213 01:26:02.022548 1681 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:02.027025 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:02.027287 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:02.290553 sshd[1689]: Accepted publickey for core from 147.75.109.163 port 60162 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:02.292642 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:02.299140 systemd-logind[1446]: New session 6 of user core. Dec 13 01:26:02.306389 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 13 01:26:02.504253 sshd[1689]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:02.509765 systemd[1]: sshd@5-10.128.0.80:22-147.75.109.163:60162.service: Deactivated successfully. Dec 13 01:26:02.512000 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:26:02.513008 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:26:02.514591 systemd-logind[1446]: Removed session 6. Dec 13 01:26:02.560565 systemd[1]: Started sshd@6-10.128.0.80:22-147.75.109.163:60176.service - OpenSSH per-connection server daemon (147.75.109.163:60176). Dec 13 01:26:02.839539 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 60176 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:02.841366 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:02.847757 systemd-logind[1446]: New session 7 of user core. Dec 13 01:26:02.857386 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:26:03.031253 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:26:03.031751 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:03.047875 sudo[1701]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:03.090720 sshd[1698]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:03.097048 systemd[1]: sshd@6-10.128.0.80:22-147.75.109.163:60176.service: Deactivated successfully. Dec 13 01:26:03.099442 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:26:03.100390 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:26:03.101845 systemd-logind[1446]: Removed session 7. Dec 13 01:26:03.147550 systemd[1]: Started sshd@7-10.128.0.80:22-147.75.109.163:60178.service - OpenSSH per-connection server daemon (147.75.109.163:60178). Dec 13 01:26:03.443280 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 60178 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:03.445367 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:03.451596 systemd-logind[1446]: New session 8 of user core. Dec 13 01:26:03.461433 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:26:03.624257 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:26:03.624754 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:03.629744 sudo[1710]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:03.642756 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:26:03.643239 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:03.665587 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:03.667719 auditctl[1713]: No rules Dec 13 01:26:03.668208 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:26:03.668462 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:26:03.671603 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:26:03.711291 augenrules[1731]: No rules Dec 13 01:26:03.712481 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
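audit-rules finishes with "No rules" above because the two default rule files were removed and nothing replaced them. augenrules assembles every *.rules file under /etc/audit/rules.d into the live ruleset; a sketch of adding a single watch back (the rule and file name are illustrative):

    # Any .rules file in /etc/audit/rules.d is picked up by augenrules --load.
    echo '-w /etc/ssh/sshd_config -p wa -k sshd_config' | sudo tee /etc/audit/rules.d/90-sshd.rules
    sudo augenrules --load
    sudo auditctl -l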
Dec 13 01:26:03.714001 sudo[1709]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:03.758159 sshd[1706]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:03.763617 systemd[1]: sshd@7-10.128.0.80:22-147.75.109.163:60178.service: Deactivated successfully. Dec 13 01:26:03.765860 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:26:03.766851 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:26:03.768657 systemd-logind[1446]: Removed session 8. Dec 13 01:26:03.809298 systemd[1]: Started sshd@8-10.128.0.80:22-147.75.109.163:60194.service - OpenSSH per-connection server daemon (147.75.109.163:60194). Dec 13 01:26:04.092951 sshd[1739]: Accepted publickey for core from 147.75.109.163 port 60194 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:26:04.094755 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:26:04.100127 systemd-logind[1446]: New session 9 of user core. Dec 13 01:26:04.107400 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:26:04.270982 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:26:04.271486 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:26:04.701571 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:26:04.713816 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:26:05.150613 dockerd[1758]: time="2024-12-13T01:26:05.150530573Z" level=info msg="Starting up" Dec 13 01:26:05.300986 dockerd[1758]: time="2024-12-13T01:26:05.300905298Z" level=info msg="Loading containers: start." Dec 13 01:26:05.441220 kernel: Initializing XFRM netlink socket Dec 13 01:26:05.546832 systemd-networkd[1383]: docker0: Link UP Dec 13 01:26:05.566873 dockerd[1758]: time="2024-12-13T01:26:05.566813984Z" level=info msg="Loading containers: done." Dec 13 01:26:05.587664 dockerd[1758]: time="2024-12-13T01:26:05.587541657Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:26:05.587884 dockerd[1758]: time="2024-12-13T01:26:05.587764515Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:26:05.587955 dockerd[1758]: time="2024-12-13T01:26:05.587926168Z" level=info msg="Daemon has completed initialization" Dec 13 01:26:05.627702 dockerd[1758]: time="2024-12-13T01:26:05.627272228Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:26:05.627565 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:26:06.611725 containerd[1469]: time="2024-12-13T01:26:06.611666857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\"" Dec 13 01:26:07.122112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount985297495.mount: Deactivated successfully. 
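Once dockerd reports "API listen on /run/docker.sock", the Engine API is reachable over that unix socket. A self-contained Go check against the standard /_ping endpoint, using only the standard library rather than the Docker SDK:

    package main

    import (
    	"context"
    	"fmt"
    	"io"
    	"net"
    	"net/http"
    	"os"
    )

    func main() {
    	// Pin every request to the unix socket the daemon logged above.
    	client := &http.Client{
    		Transport: &http.Transport{
    			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
    				var d net.Dialer
    				return d.DialContext(ctx, "unix", "/run/docker.sock")
    			},
    		},
    	}
    	// The host part of the URL is ignored once the dialer is fixed to the socket.
    	resp, err := client.Get("http://docker/_ping")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "docker ping failed:", err)
    		os.Exit(1)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("status=%s body=%q\n", resp.Status, body)
    }
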
Dec 13 01:26:08.774372 containerd[1469]: time="2024-12-13T01:26:08.774303573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:08.775955 containerd[1469]: time="2024-12-13T01:26:08.775891915Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27982111" Dec 13 01:26:08.777238 containerd[1469]: time="2024-12-13T01:26:08.777188229Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:08.781356 containerd[1469]: time="2024-12-13T01:26:08.781285055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:08.783327 containerd[1469]: time="2024-12-13T01:26:08.782793652Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.171078327s" Dec 13 01:26:08.783327 containerd[1469]: time="2024-12-13T01:26:08.782846670Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\"" Dec 13 01:26:08.785694 containerd[1469]: time="2024-12-13T01:26:08.785666351Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\"" Dec 13 01:26:10.182239 containerd[1469]: time="2024-12-13T01:26:10.182157164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:10.183757 containerd[1469]: time="2024-12-13T01:26:10.183677149Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24704091" Dec 13 01:26:10.185267 containerd[1469]: time="2024-12-13T01:26:10.185197869Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:10.188832 containerd[1469]: time="2024-12-13T01:26:10.188765954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:10.190369 containerd[1469]: time="2024-12-13T01:26:10.190195821Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.404466076s" Dec 13 01:26:10.190369 containerd[1469]: time="2024-12-13T01:26:10.190251587Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\"" Dec 13 01:26:10.191212 
containerd[1469]: time="2024-12-13T01:26:10.191158300Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\"" Dec 13 01:26:11.360513 containerd[1469]: time="2024-12-13T01:26:11.360449738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:11.362077 containerd[1469]: time="2024-12-13T01:26:11.362010132Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18653983" Dec 13 01:26:11.363197 containerd[1469]: time="2024-12-13T01:26:11.363136489Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:11.368218 containerd[1469]: time="2024-12-13T01:26:11.367627264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:11.370115 containerd[1469]: time="2024-12-13T01:26:11.370072182Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.178728193s" Dec 13 01:26:11.370297 containerd[1469]: time="2024-12-13T01:26:11.370119506Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\"" Dec 13 01:26:11.371504 containerd[1469]: time="2024-12-13T01:26:11.371430424Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:26:12.061577 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:26:12.072907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:12.334395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:12.348058 (kubelet)[1970]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:26:12.436594 kubelet[1970]: E1213 01:26:12.436308 1970 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:26:12.441295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:26:12.441736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:26:12.546629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1854831305.mount: Deactivated successfully. 
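The pull messages above pair a byte count with a wall-clock duration, e.g. 18653983 bytes for kube-scheduler in 1.178728193s. Back-of-envelope throughput from those two logged figures:

    package main

    import (
    	"fmt"
    	"time"
    )

    // Figures copied from the containerd log lines above.
    func main() {
    	const bytesRead = 18653983
    	elapsed, err := time.ParseDuration("1.178728193s")
    	if err != nil {
    		panic(err)
    	}
    	mib := float64(bytesRead) / (1024 * 1024)
    	fmt.Printf("%.1f MiB in %s ≈ %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
    }
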
Dec 13 01:26:13.162902 containerd[1469]: time="2024-12-13T01:26:13.162772134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:13.164368 containerd[1469]: time="2024-12-13T01:26:13.164285844Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30232138" Dec 13 01:26:13.165900 containerd[1469]: time="2024-12-13T01:26:13.165823905Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:13.169125 containerd[1469]: time="2024-12-13T01:26:13.169048033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:13.170346 containerd[1469]: time="2024-12-13T01:26:13.170152668Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 1.798680758s" Dec 13 01:26:13.170346 containerd[1469]: time="2024-12-13T01:26:13.170221304Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\"" Dec 13 01:26:13.171125 containerd[1469]: time="2024-12-13T01:26:13.171073014Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:26:13.618757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4293198302.mount: Deactivated successfully. 
Dec 13 01:26:14.654937 containerd[1469]: time="2024-12-13T01:26:14.654871001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:14.658816 containerd[1469]: time="2024-12-13T01:26:14.658535496Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Dec 13 01:26:14.662129 containerd[1469]: time="2024-12-13T01:26:14.662078073Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:14.666216 containerd[1469]: time="2024-12-13T01:26:14.666129484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:14.668155 containerd[1469]: time="2024-12-13T01:26:14.667643677Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.496527166s" Dec 13 01:26:14.668155 containerd[1469]: time="2024-12-13T01:26:14.667695055Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:26:14.668555 containerd[1469]: time="2024-12-13T01:26:14.668526832Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 13 01:26:15.077494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2029255514.mount: Deactivated successfully. 
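Each "Pulled image" line above reports the same image three ways: a local image id, a repo tag, and a repo digest. A deliberately small splitter for the latter two forms, fed with the coredns references from the log; it ignores registry hosts with ports and the other edge cases a real reference parser handles:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // split separates "repo:tag" and "repo@sha256:..." style references.
    func split(ref string) (repo, tag, digest string) {
    	if i := strings.Index(ref, "@"); i >= 0 {
    		return ref[:i], "", ref[i+1:]
    	}
    	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
    		return ref[:i], ref[i+1:], ""
    	}
    	return ref, "", ""
    }

    func main() {
    	for _, ref := range []string{
    		"registry.k8s.io/coredns/coredns:v1.11.1",
    		"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1",
    	} {
    		repo, tag, digest := split(ref)
    		fmt.Printf("repo=%s tag=%q digest=%q\n", repo, tag, digest)
    	}
    }
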
Dec 13 01:26:15.086608 containerd[1469]: time="2024-12-13T01:26:15.086545764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:15.087872 containerd[1469]: time="2024-12-13T01:26:15.087815836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072" Dec 13 01:26:15.088972 containerd[1469]: time="2024-12-13T01:26:15.088894576Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:15.092788 containerd[1469]: time="2024-12-13T01:26:15.092719141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:15.094545 containerd[1469]: time="2024-12-13T01:26:15.093982808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 425.321026ms" Dec 13 01:26:15.094545 containerd[1469]: time="2024-12-13T01:26:15.094027099Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 13 01:26:15.095017 containerd[1469]: time="2024-12-13T01:26:15.094985096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Dec 13 01:26:15.535131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1752188973.mount: Deactivated successfully. Dec 13 01:26:17.652119 containerd[1469]: time="2024-12-13T01:26:17.652045579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:17.653870 containerd[1469]: time="2024-12-13T01:26:17.653803138Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556" Dec 13 01:26:17.655267 containerd[1469]: time="2024-12-13T01:26:17.655193689Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:17.659243 containerd[1469]: time="2024-12-13T01:26:17.659161539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:17.661003 containerd[1469]: time="2024-12-13T01:26:17.660791340Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.565758149s" Dec 13 01:26:17.661003 containerd[1469]: time="2024-12-13T01:26:17.660839888Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 01:26:18.869527 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
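The var-lib-containerd-tmpmounts-containerd\x2dmountNNN.mount units that keep appearing are systemd's encoding of a mount path: "-" stands for "/" and literal characters are escaped as \xNN. A small decoder for that scheme, sufficient for the unit names in this log though not a full replacement for systemd-escape:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // unescapeUnit turns a mount unit name back into the path it represents.
    func unescapeUnit(name string) (string, error) {
    	name = strings.TrimSuffix(name, ".mount")
    	var b strings.Builder
    	b.WriteByte('/')
    	for i := 0; i < len(name); i++ {
    		switch {
    		case name[i] == '-':
    			b.WriteByte('/')
    		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
    			v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
    			if err != nil {
    				return "", err
    			}
    			b.WriteByte(byte(v))
    			i += 3
    		default:
    			b.WriteByte(name[i])
    		}
    	}
    	return b.String(), nil
    }

    func main() {
    	p, err := unescapeUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount1752188973.mount`)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(p) // /var/lib/containerd/tmpmounts/containerd-mount1752188973
    }
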
Dec 13 01:26:21.690115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:21.697553 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:21.738146 systemd[1]: Reloading requested from client PID 2112 ('systemctl') (unit session-9.scope)... Dec 13 01:26:21.738188 systemd[1]: Reloading... Dec 13 01:26:21.885232 zram_generator::config[2148]: No configuration found. Dec 13 01:26:22.055549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:22.161475 systemd[1]: Reloading finished in 422 ms. Dec 13 01:26:22.227407 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:26:22.227549 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:26:22.227891 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:22.237239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:22.856940 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:22.871774 (kubelet)[2201]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:26:22.924040 kubelet[2201]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:26:22.924040 kubelet[2201]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:26:22.924040 kubelet[2201]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
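The deprecation warnings above say --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config, i.e. the /var/lib/kubelet/config.yaml that was missing earlier (--pod-infra-container-image has no file equivalent and is simply going away). A sketch of what such a KubeletConfiguration stanza carries, emitted as JSON purely for illustration; the field names are assumptions based on kubelet.config.k8s.io/v1beta1, the socket path is the conventional containerd default, and the file on a real node is usually YAML with a much wider schema:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // Minimal subset of a KubeletConfiguration, covering the two deprecated
    // flags from the log that have config-file equivalents. Field names are
    // assumptions; check the kubelet version in use before relying on them.
    type kubeletConfig struct {
    	APIVersion               string `json:"apiVersion"`
    	Kind                     string `json:"kind"`
    	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint,omitempty"`
    	VolumePluginDir          string `json:"volumePluginDir,omitempty"`
    }

    func main() {
    	cfg := kubeletConfig{
    		APIVersion:               "kubelet.config.k8s.io/v1beta1",
    		Kind:                     "KubeletConfiguration",
    		// Assumed default containerd socket; not stated explicitly in this log.
    		ContainerRuntimeEndpoint: "unix:///run/containerd/containerd.sock",
    		// Flexvolume directory that kubelet recreates later in this log.
    		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    	}
    	out, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(out))
    }
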
Dec 13 01:26:22.926774 kubelet[2201]: I1213 01:26:22.926710 2201 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:26:23.487369 kubelet[2201]: I1213 01:26:23.487314 2201 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:26:23.487369 kubelet[2201]: I1213 01:26:23.487348 2201 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:26:23.487739 kubelet[2201]: I1213 01:26:23.487703 2201 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:26:23.531311 kubelet[2201]: I1213 01:26:23.530994 2201 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:26:23.532131 kubelet[2201]: E1213 01:26:23.532070 2201 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.80:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:23.546405 kubelet[2201]: E1213 01:26:23.546359 2201 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:26:23.546405 kubelet[2201]: I1213 01:26:23.546399 2201 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:26:23.551754 kubelet[2201]: I1213 01:26:23.551728 2201 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:26:23.555716 kubelet[2201]: I1213 01:26:23.555671 2201 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:26:23.555981 kubelet[2201]: I1213 01:26:23.555926 2201 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:26:23.556256 kubelet[2201]: I1213 01:26:23.555967 2201 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:26:23.556459 kubelet[2201]: I1213 01:26:23.556263 2201 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:26:23.556459 kubelet[2201]: I1213 01:26:23.556282 2201 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:26:23.556459 kubelet[2201]: I1213 01:26:23.556427 2201 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:26:23.560115 kubelet[2201]: I1213 01:26:23.559818 2201 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:26:23.560115 kubelet[2201]: I1213 01:26:23.559854 2201 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:26:23.560115 kubelet[2201]: I1213 01:26:23.559901 2201 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:26:23.560115 kubelet[2201]: I1213 01:26:23.559929 2201 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:26:23.568730 kubelet[2201]: W1213 01:26:23.568561 2201 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.80:6443: connect: connection refused Dec 13 01:26:23.568730 kubelet[2201]: E1213 01:26:23.568648 2201 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.128.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:23.569145 kubelet[2201]: I1213 01:26:23.569011 2201 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:26:23.571485 kubelet[2201]: I1213 01:26:23.571462 2201 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:26:23.572950 kubelet[2201]: W1213 01:26:23.572755 2201 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:26:23.574405 kubelet[2201]: I1213 01:26:23.574163 2201 server.go:1269] "Started kubelet" Dec 13 01:26:23.576707 kubelet[2201]: W1213 01:26:23.576652 2201 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.80:6443: connect: connection refused Dec 13 01:26:23.576808 kubelet[2201]: E1213 01:26:23.576724 2201 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:23.576808 kubelet[2201]: I1213 01:26:23.576772 2201 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:26:23.579620 kubelet[2201]: I1213 01:26:23.579578 2201 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:26:23.581471 kubelet[2201]: I1213 01:26:23.581391 2201 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:26:23.582237 kubelet[2201]: I1213 01:26:23.581822 2201 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:26:23.586946 kubelet[2201]: E1213 01:26:23.582067 2201 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal.1810982edc9204d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,UID:ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 01:26:23.574115541 +0000 UTC m=+0.697444646,LastTimestamp:2024-12-13 01:26:23.574115541 +0000 UTC m=+0.697444646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,}" Dec 13 01:26:23.587957 kubelet[2201]: I1213 01:26:23.587934 2201 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:26:23.589367 kubelet[2201]: I1213 01:26:23.589347 2201 volume_manager.go:289] "Starting 
Kubelet Volume Manager" Dec 13 01:26:23.590537 kubelet[2201]: E1213 01:26:23.590509 2201 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" not found" Dec 13 01:26:23.593415 kubelet[2201]: E1213 01:26:23.593349 2201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.80:6443: connect: connection refused" interval="200ms" Dec 13 01:26:23.594223 kubelet[2201]: I1213 01:26:23.593846 2201 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:26:23.594901 kubelet[2201]: I1213 01:26:23.594879 2201 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:26:23.595025 kubelet[2201]: I1213 01:26:23.589422 2201 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:26:23.595302 kubelet[2201]: I1213 01:26:23.595286 2201 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:26:23.597130 kubelet[2201]: W1213 01:26:23.596608 2201 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.80:6443: connect: connection refused Dec 13 01:26:23.597130 kubelet[2201]: E1213 01:26:23.596682 2201 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:23.597717 kubelet[2201]: I1213 01:26:23.597689 2201 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:26:23.597717 kubelet[2201]: I1213 01:26:23.597717 2201 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:26:23.611965 kubelet[2201]: E1213 01:26:23.611927 2201 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:26:23.619279 kubelet[2201]: I1213 01:26:23.619212 2201 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:26:23.627270 kubelet[2201]: I1213 01:26:23.627242 2201 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:26:23.627742 kubelet[2201]: I1213 01:26:23.627409 2201 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:26:23.627742 kubelet[2201]: I1213 01:26:23.627447 2201 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:26:23.627742 kubelet[2201]: E1213 01:26:23.627503 2201 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:26:23.631334 kubelet[2201]: W1213 01:26:23.631267 2201 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.80:6443: connect: connection refused Dec 13 01:26:23.631449 kubelet[2201]: E1213 01:26:23.631350 2201 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:23.631449 kubelet[2201]: I1213 01:26:23.631433 2201 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:26:23.631449 kubelet[2201]: I1213 01:26:23.631446 2201 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:26:23.631594 kubelet[2201]: I1213 01:26:23.631467 2201 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:26:23.635368 kubelet[2201]: I1213 01:26:23.635336 2201 policy_none.go:49] "None policy: Start" Dec 13 01:26:23.636275 kubelet[2201]: I1213 01:26:23.636146 2201 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:26:23.636275 kubelet[2201]: I1213 01:26:23.636197 2201 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:26:23.643696 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:26:23.655348 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:26:23.659661 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:26:23.669245 kubelet[2201]: I1213 01:26:23.669214 2201 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:26:23.669504 kubelet[2201]: I1213 01:26:23.669483 2201 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:26:23.669588 kubelet[2201]: I1213 01:26:23.669508 2201 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:26:23.669935 kubelet[2201]: I1213 01:26:23.669912 2201 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:26:23.672300 kubelet[2201]: E1213 01:26:23.672275 2201 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" not found" Dec 13 01:26:23.749981 systemd[1]: Created slice kubepods-burstable-podd1b7837b6eede8570d02181fcb3753fa.slice - libcontainer container kubepods-burstable-podd1b7837b6eede8570d02181fcb3753fa.slice. Dec 13 01:26:23.763113 systemd[1]: Created slice kubepods-burstable-podc799f5bcf918915daf193cd97689e507.slice - libcontainer container kubepods-burstable-podc799f5bcf918915daf193cd97689e507.slice. 
Dec 13 01:26:23.775961 kubelet[2201]: I1213 01:26:23.775928 2201 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.778241 kubelet[2201]: E1213 01:26:23.776635 2201 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.80:6443/api/v1/nodes\": dial tcp 10.128.0.80:6443: connect: connection refused" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.777944 systemd[1]: Created slice kubepods-burstable-pod0a69c2e0a8725d1138d625c384ca72b9.slice - libcontainer container kubepods-burstable-pod0a69c2e0a8725d1138d625c384ca72b9.slice. Dec 13 01:26:23.794216 kubelet[2201]: E1213 01:26:23.794149 2201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.80:6443: connect: connection refused" interval="400ms" Dec 13 01:26:23.797396 kubelet[2201]: I1213 01:26:23.797363 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.797672 kubelet[2201]: I1213 01:26:23.797419 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a69c2e0a8725d1138d625c384ca72b9-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"0a69c2e0a8725d1138d625c384ca72b9\") " pod="kube-system/kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.797672 kubelet[2201]: I1213 01:26:23.797483 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.797672 kubelet[2201]: I1213 01:26:23.797513 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.797672 kubelet[2201]: I1213 01:26:23.797542 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " 
pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.797842 kubelet[2201]: I1213 01:26:23.797583 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.797954 kubelet[2201]: I1213 01:26:23.797926 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1b7837b6eede8570d02181fcb3753fa-ca-certs\") pod \"kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"d1b7837b6eede8570d02181fcb3753fa\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.798042 kubelet[2201]: I1213 01:26:23.797989 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1b7837b6eede8570d02181fcb3753fa-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"d1b7837b6eede8570d02181fcb3753fa\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.798042 kubelet[2201]: I1213 01:26:23.798021 2201 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1b7837b6eede8570d02181fcb3753fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"d1b7837b6eede8570d02181fcb3753fa\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.988325 kubelet[2201]: I1213 01:26:23.988271 2201 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:23.988871 kubelet[2201]: E1213 01:26:23.988710 2201 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.80:6443/api/v1/nodes\": dial tcp 10.128.0.80:6443: connect: connection refused" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:24.058773 containerd[1469]: time="2024-12-13T01:26:24.058608485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,Uid:d1b7837b6eede8570d02181fcb3753fa,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:24.074522 containerd[1469]: time="2024-12-13T01:26:24.074453463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,Uid:c799f5bcf918915daf193cd97689e507,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:24.082479 containerd[1469]: time="2024-12-13T01:26:24.082411597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,Uid:0a69c2e0a8725d1138d625c384ca72b9,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:24.195572 kubelet[2201]: E1213 01:26:24.195511 2201 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://10.128.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.80:6443: connect: connection refused" interval="800ms" Dec 13 01:26:24.394485 kubelet[2201]: I1213 01:26:24.394435 2201 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:24.394861 kubelet[2201]: E1213 01:26:24.394810 2201 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.80:6443/api/v1/nodes\": dial tcp 10.128.0.80:6443: connect: connection refused" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:24.437354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203444745.mount: Deactivated successfully. Dec 13 01:26:24.445027 containerd[1469]: time="2024-12-13T01:26:24.444966606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:24.446272 containerd[1469]: time="2024-12-13T01:26:24.446219104Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:24.447467 containerd[1469]: time="2024-12-13T01:26:24.447381382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:26:24.448329 containerd[1469]: time="2024-12-13T01:26:24.448274400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Dec 13 01:26:24.449719 containerd[1469]: time="2024-12-13T01:26:24.449665874Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:24.451365 containerd[1469]: time="2024-12-13T01:26:24.451223693Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:24.451548 containerd[1469]: time="2024-12-13T01:26:24.451500163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:26:24.454425 containerd[1469]: time="2024-12-13T01:26:24.454355699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:26:24.457514 containerd[1469]: time="2024-12-13T01:26:24.456857092Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 374.349484ms" Dec 13 01:26:24.460447 containerd[1469]: time="2024-12-13T01:26:24.460118773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 385.583407ms" Dec 13 01:26:24.464319 containerd[1469]: time="2024-12-13T01:26:24.463569258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 404.862455ms" Dec 13 01:26:24.496348 kubelet[2201]: W1213 01:26:24.496255 2201 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.80:6443: connect: connection refused Dec 13 01:26:24.496586 kubelet[2201]: E1213 01:26:24.496555 2201 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.80:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:24.636031 containerd[1469]: time="2024-12-13T01:26:24.635570306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:24.636031 containerd[1469]: time="2024-12-13T01:26:24.635685874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:24.636031 containerd[1469]: time="2024-12-13T01:26:24.635714077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:24.636031 containerd[1469]: time="2024-12-13T01:26:24.635841947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:24.638239 containerd[1469]: time="2024-12-13T01:26:24.637955095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:24.638239 containerd[1469]: time="2024-12-13T01:26:24.638020651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:24.638239 containerd[1469]: time="2024-12-13T01:26:24.638048776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:24.638708 containerd[1469]: time="2024-12-13T01:26:24.638602698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:24.647666 containerd[1469]: time="2024-12-13T01:26:24.646933431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:24.647666 containerd[1469]: time="2024-12-13T01:26:24.647041424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:24.647666 containerd[1469]: time="2024-12-13T01:26:24.647072882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:24.647666 containerd[1469]: time="2024-12-13T01:26:24.647232651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:24.683401 systemd[1]: Started cri-containerd-2e261a215156a753b8427cb8366de3dfbe692a349451b704f9ff01344ed0b116.scope - libcontainer container 2e261a215156a753b8427cb8366de3dfbe692a349451b704f9ff01344ed0b116. Dec 13 01:26:24.685299 systemd[1]: Started cri-containerd-50bf952440c7073b46e716af79c8e4e6a4d60ad75ef1ff18c84cbf4031aacfaf.scope - libcontainer container 50bf952440c7073b46e716af79c8e4e6a4d60ad75ef1ff18c84cbf4031aacfaf. Dec 13 01:26:24.706290 systemd[1]: Started cri-containerd-f7c19a93b6ce1be8c5740a1e040322edfe74aa45c7d64493aa5e929b94f033e5.scope - libcontainer container f7c19a93b6ce1be8c5740a1e040322edfe74aa45c7d64493aa5e929b94f033e5. Dec 13 01:26:24.728293 kubelet[2201]: W1213 01:26:24.728221 2201 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.80:6443: connect: connection refused Dec 13 01:26:24.728811 kubelet[2201]: E1213 01:26:24.728761 2201 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.80:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:24.747767 kubelet[2201]: W1213 01:26:24.747640 2201 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.80:6443: connect: connection refused Dec 13 01:26:24.747767 kubelet[2201]: E1213 01:26:24.747732 2201 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.80:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:24.780941 containerd[1469]: time="2024-12-13T01:26:24.780708741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,Uid:c799f5bcf918915daf193cd97689e507,Namespace:kube-system,Attempt:0,} returns sandbox id \"50bf952440c7073b46e716af79c8e4e6a4d60ad75ef1ff18c84cbf4031aacfaf\"" Dec 13 01:26:24.790262 kubelet[2201]: E1213 01:26:24.789596 2201 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flat" Dec 13 01:26:24.791921 kubelet[2201]: E1213 01:26:24.791612 2201 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.80:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.80:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal.1810982edc9204d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,UID:ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 01:26:23.574115541 +0000 UTC m=+0.697444646,LastTimestamp:2024-12-13 01:26:23.574115541 +0000 UTC m=+0.697444646,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,}" Dec 13 01:26:24.793970 containerd[1469]: time="2024-12-13T01:26:24.793579406Z" level=info msg="CreateContainer within sandbox \"50bf952440c7073b46e716af79c8e4e6a4d60ad75ef1ff18c84cbf4031aacfaf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:26:24.803321 containerd[1469]: time="2024-12-13T01:26:24.801651187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,Uid:d1b7837b6eede8570d02181fcb3753fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e261a215156a753b8427cb8366de3dfbe692a349451b704f9ff01344ed0b116\"" Dec 13 01:26:24.805583 kubelet[2201]: E1213 01:26:24.805144 2201 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-21291" Dec 13 01:26:24.809567 containerd[1469]: time="2024-12-13T01:26:24.809484107Z" level=info msg="CreateContainer within sandbox \"2e261a215156a753b8427cb8366de3dfbe692a349451b704f9ff01344ed0b116\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:26:24.825032 containerd[1469]: time="2024-12-13T01:26:24.824989032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal,Uid:0a69c2e0a8725d1138d625c384ca72b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7c19a93b6ce1be8c5740a1e040322edfe74aa45c7d64493aa5e929b94f033e5\"" Dec 13 01:26:24.826945 containerd[1469]: time="2024-12-13T01:26:24.826880203Z" level=info msg="CreateContainer within sandbox \"50bf952440c7073b46e716af79c8e4e6a4d60ad75ef1ff18c84cbf4031aacfaf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eca8cb7f4b862b5f3913b959f4d7d161dc38bfb21712dcfd13594f46d2f0d719\"" Dec 13 01:26:24.827736 kubelet[2201]: E1213 01:26:24.827692 2201 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-21291" Dec 13 01:26:24.829195 containerd[1469]: time="2024-12-13T01:26:24.828485377Z" level=info msg="StartContainer for \"eca8cb7f4b862b5f3913b959f4d7d161dc38bfb21712dcfd13594f46d2f0d719\"" Dec 13 01:26:24.830550 containerd[1469]: time="2024-12-13T01:26:24.830513120Z" level=info msg="CreateContainer within sandbox \"f7c19a93b6ce1be8c5740a1e040322edfe74aa45c7d64493aa5e929b94f033e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:26:24.837307 containerd[1469]: time="2024-12-13T01:26:24.837256947Z" level=info msg="CreateContainer within sandbox 
\"2e261a215156a753b8427cb8366de3dfbe692a349451b704f9ff01344ed0b116\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2d81d3f0ea1e5c56e1661618c19df63e7c13f0052d578303bbfcccba5c3d60bd\"" Dec 13 01:26:24.838024 containerd[1469]: time="2024-12-13T01:26:24.837991822Z" level=info msg="StartContainer for \"2d81d3f0ea1e5c56e1661618c19df63e7c13f0052d578303bbfcccba5c3d60bd\"" Dec 13 01:26:24.859637 containerd[1469]: time="2024-12-13T01:26:24.859526533Z" level=info msg="CreateContainer within sandbox \"f7c19a93b6ce1be8c5740a1e040322edfe74aa45c7d64493aa5e929b94f033e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"49910f7bfcf7bef39dd88133b44db28bd72634a12b0550ec1a54f20f91df7e5b\"" Dec 13 01:26:24.860677 containerd[1469]: time="2024-12-13T01:26:24.860581757Z" level=info msg="StartContainer for \"49910f7bfcf7bef39dd88133b44db28bd72634a12b0550ec1a54f20f91df7e5b\"" Dec 13 01:26:24.887661 systemd[1]: Started cri-containerd-2d81d3f0ea1e5c56e1661618c19df63e7c13f0052d578303bbfcccba5c3d60bd.scope - libcontainer container 2d81d3f0ea1e5c56e1661618c19df63e7c13f0052d578303bbfcccba5c3d60bd. Dec 13 01:26:24.901956 systemd[1]: Started cri-containerd-eca8cb7f4b862b5f3913b959f4d7d161dc38bfb21712dcfd13594f46d2f0d719.scope - libcontainer container eca8cb7f4b862b5f3913b959f4d7d161dc38bfb21712dcfd13594f46d2f0d719. Dec 13 01:26:24.926406 systemd[1]: Started cri-containerd-49910f7bfcf7bef39dd88133b44db28bd72634a12b0550ec1a54f20f91df7e5b.scope - libcontainer container 49910f7bfcf7bef39dd88133b44db28bd72634a12b0550ec1a54f20f91df7e5b. Dec 13 01:26:24.996760 kubelet[2201]: E1213 01:26:24.996676 2201 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.80:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.80:6443: connect: connection refused" interval="1.6s" Dec 13 01:26:25.017070 containerd[1469]: time="2024-12-13T01:26:25.016831944Z" level=info msg="StartContainer for \"eca8cb7f4b862b5f3913b959f4d7d161dc38bfb21712dcfd13594f46d2f0d719\" returns successfully" Dec 13 01:26:25.017070 containerd[1469]: time="2024-12-13T01:26:25.016965349Z" level=info msg="StartContainer for \"2d81d3f0ea1e5c56e1661618c19df63e7c13f0052d578303bbfcccba5c3d60bd\" returns successfully" Dec 13 01:26:25.050917 kubelet[2201]: W1213 01:26:25.050104 2201 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.80:6443: connect: connection refused Dec 13 01:26:25.050917 kubelet[2201]: E1213 01:26:25.050243 2201 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.80:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.80:6443: connect: connection refused" logger="UnhandledError" Dec 13 01:26:25.061299 containerd[1469]: time="2024-12-13T01:26:25.061225822Z" level=info msg="StartContainer for \"49910f7bfcf7bef39dd88133b44db28bd72634a12b0550ec1a54f20f91df7e5b\" returns successfully" Dec 13 01:26:25.209051 kubelet[2201]: I1213 01:26:25.208903 2201 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:27.578136 kubelet[2201]: I1213 01:26:27.577913 2201 apiserver.go:52] "Watching apiserver" Dec 13 01:26:27.678953 kubelet[2201]: E1213 01:26:27.678824 2201 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" not found" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:27.696200 kubelet[2201]: I1213 01:26:27.695504 2201 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:26:27.758553 kubelet[2201]: I1213 01:26:27.758305 2201 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:27.758553 kubelet[2201]: E1213 01:26:27.758355 2201 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\": node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" not found" Dec 13 01:26:28.299323 kubelet[2201]: E1213 01:26:28.299276 2201 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:29.832157 systemd[1]: Reloading requested from client PID 2478 ('systemctl') (unit session-9.scope)... Dec 13 01:26:29.832203 systemd[1]: Reloading... Dec 13 01:26:29.965203 zram_generator::config[2521]: No configuration found. Dec 13 01:26:30.112873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:26:30.235014 systemd[1]: Reloading finished in 402 ms. Dec 13 01:26:30.287683 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:30.294954 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:26:30.295281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:30.295351 systemd[1]: kubelet.service: Consumed 1.134s CPU time, 118.3M memory peak, 0B memory swap peak. Dec 13 01:26:30.308576 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:26:30.528350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:26:30.543689 (kubelet)[2566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:26:30.617234 kubelet[2566]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:26:30.617234 kubelet[2566]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:26:30.617234 kubelet[2566]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:26:30.617234 kubelet[2566]: I1213 01:26:30.616899 2566 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:26:30.625389 kubelet[2566]: I1213 01:26:30.625337 2566 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:26:30.625389 kubelet[2566]: I1213 01:26:30.625376 2566 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:26:30.626448 kubelet[2566]: I1213 01:26:30.625987 2566 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:26:30.628474 kubelet[2566]: I1213 01:26:30.628447 2566 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:26:30.631513 kubelet[2566]: I1213 01:26:30.631119 2566 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:26:30.636145 kubelet[2566]: E1213 01:26:30.636109 2566 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:26:30.636145 kubelet[2566]: I1213 01:26:30.636145 2566 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:26:30.641199 kubelet[2566]: I1213 01:26:30.639766 2566 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:26:30.641199 kubelet[2566]: I1213 01:26:30.639928 2566 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:26:30.641199 kubelet[2566]: I1213 01:26:30.640114 2566 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:26:30.641416 kubelet[2566]: I1213 01:26:30.640142 2566 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:26:30.641416 kubelet[2566]: I1213 01:26:30.640562 2566 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:26:30.641416 kubelet[2566]: I1213 01:26:30.640582 2566 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:26:30.641416 kubelet[2566]: I1213 01:26:30.640632 2566 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:26:30.641416 kubelet[2566]: I1213 01:26:30.640782 2566 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:26:30.641416 kubelet[2566]: I1213 01:26:30.640806 2566 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:26:30.641416 kubelet[2566]: I1213 01:26:30.640848 2566 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:26:30.641416 kubelet[2566]: I1213 01:26:30.640870 2566 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:26:30.643889 kubelet[2566]: I1213 01:26:30.643869 2566 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:26:30.645052 kubelet[2566]: I1213 01:26:30.645034 2566 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:26:30.646935 kubelet[2566]: I1213 01:26:30.646897 2566 server.go:1269] "Started kubelet" Dec 13 01:26:30.654678 kubelet[2566]: I1213 01:26:30.654626 2566 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:26:30.656619 kubelet[2566]: I1213 01:26:30.656598 2566 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:26:30.658155 kubelet[2566]: I1213 01:26:30.658097 2566 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:26:30.658520 kubelet[2566]: I1213 01:26:30.658495 2566 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:26:30.665563 kubelet[2566]: 
E1213 01:26:30.665536 2566 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:26:30.666572 kubelet[2566]: I1213 01:26:30.666527 2566 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:26:30.666753 kubelet[2566]: I1213 01:26:30.666716 2566 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:26:30.679021 kubelet[2566]: I1213 01:26:30.678997 2566 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:26:30.682019 kubelet[2566]: E1213 01:26:30.679454 2566 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" not found" Dec 13 01:26:30.682019 kubelet[2566]: I1213 01:26:30.679752 2566 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:26:30.682019 kubelet[2566]: I1213 01:26:30.679954 2566 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:26:30.683842 kubelet[2566]: I1213 01:26:30.683747 2566 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:26:30.700633 kubelet[2566]: I1213 01:26:30.699723 2566 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:26:30.700633 kubelet[2566]: I1213 01:26:30.699758 2566 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:26:30.712452 kubelet[2566]: I1213 01:26:30.712408 2566 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:26:30.714331 kubelet[2566]: I1213 01:26:30.714289 2566 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:26:30.714331 kubelet[2566]: I1213 01:26:30.714319 2566 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:26:30.714520 kubelet[2566]: I1213 01:26:30.714355 2566 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:26:30.714520 kubelet[2566]: E1213 01:26:30.714418 2566 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:26:30.774897 kubelet[2566]: I1213 01:26:30.774853 2566 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:26:30.775431 kubelet[2566]: I1213 01:26:30.775007 2566 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:26:30.775431 kubelet[2566]: I1213 01:26:30.775127 2566 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:26:30.775431 kubelet[2566]: I1213 01:26:30.775383 2566 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:26:30.775431 kubelet[2566]: I1213 01:26:30.775401 2566 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:26:30.775431 kubelet[2566]: I1213 01:26:30.775427 2566 policy_none.go:49] "None policy: Start" Dec 13 01:26:30.777767 kubelet[2566]: I1213 01:26:30.776557 2566 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:26:30.777767 kubelet[2566]: I1213 01:26:30.776591 2566 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:26:30.777767 kubelet[2566]: I1213 01:26:30.776877 2566 state_mem.go:75] "Updated machine memory state" Dec 13 01:26:30.785317 kubelet[2566]: I1213 01:26:30.784399 2566 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:26:30.785317 kubelet[2566]: I1213 01:26:30.784613 2566 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:26:30.785317 kubelet[2566]: I1213 01:26:30.784629 2566 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:26:30.785317 kubelet[2566]: I1213 01:26:30.785205 2566 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:26:30.828875 kubelet[2566]: W1213 01:26:30.828833 2566 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:26:30.831383 kubelet[2566]: W1213 01:26:30.831347 2566 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:26:30.834344 kubelet[2566]: W1213 01:26:30.834317 2566 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:26:30.847084 sudo[2599]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 01:26:30.847693 sudo[2599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 01:26:30.901821 kubelet[2566]: I1213 01:26:30.901727 2566 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.912411 kubelet[2566]: I1213 01:26:30.912373 2566 kubelet_node_status.go:111] "Node was previously registered" 
node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.912572 kubelet[2566]: I1213 01:26:30.912475 2566 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.980846 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.981156 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a69c2e0a8725d1138d625c384ca72b9-kubeconfig\") pod \"kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"0a69c2e0a8725d1138d625c384ca72b9\") " pod="kube-system/kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.981244 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-ca-certs\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.981326 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-k8s-certs\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.981393 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-kubeconfig\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.981463 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c799f5bcf918915daf193cd97689e507-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"c799f5bcf918915daf193cd97689e507\") " pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.981498 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1b7837b6eede8570d02181fcb3753fa-ca-certs\") pod 
\"kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"d1b7837b6eede8570d02181fcb3753fa\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.981569 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1b7837b6eede8570d02181fcb3753fa-k8s-certs\") pod \"kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"d1b7837b6eede8570d02181fcb3753fa\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:30.981910 kubelet[2566]: I1213 01:26:30.981630 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1b7837b6eede8570d02181fcb3753fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" (UID: \"d1b7837b6eede8570d02181fcb3753fa\") " pod="kube-system/kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:31.584120 sudo[2599]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:31.658940 kubelet[2566]: I1213 01:26:31.658550 2566 apiserver.go:52] "Watching apiserver" Dec 13 01:26:31.680455 kubelet[2566]: I1213 01:26:31.680356 2566 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:26:31.759396 kubelet[2566]: W1213 01:26:31.759355 2566 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 01:26:31.759593 kubelet[2566]: E1213 01:26:31.759439 2566 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" Dec 13 01:26:31.795202 kubelet[2566]: I1213 01:26:31.794240 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" podStartSLOduration=1.794217831 podStartE2EDuration="1.794217831s" podCreationTimestamp="2024-12-13 01:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:31.784885953 +0000 UTC m=+1.235518701" watchObservedRunningTime="2024-12-13 01:26:31.794217831 +0000 UTC m=+1.244850581" Dec 13 01:26:31.795202 kubelet[2566]: I1213 01:26:31.794386 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" podStartSLOduration=1.7943796619999999 podStartE2EDuration="1.794379662s" podCreationTimestamp="2024-12-13 01:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:31.793612531 +0000 UTC m=+1.244245281" watchObservedRunningTime="2024-12-13 01:26:31.794379662 +0000 UTC m=+1.245012414" Dec 13 01:26:31.814719 kubelet[2566]: I1213 01:26:31.814551 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal" podStartSLOduration=1.814528423 podStartE2EDuration="1.814528423s" podCreationTimestamp="2024-12-13 01:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:31.808751238 +0000 UTC m=+1.259383991" watchObservedRunningTime="2024-12-13 01:26:31.814528423 +0000 UTC m=+1.265161175" Dec 13 01:26:32.796319 update_engine[1453]: I20241213 01:26:32.796233 1453 update_attempter.cc:509] Updating boot flags... Dec 13 01:26:32.901317 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2629) Dec 13 01:26:33.081406 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2630) Dec 13 01:26:33.190219 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2630) Dec 13 01:26:33.458911 sudo[1742]: pam_unix(sudo:session): session closed for user root Dec 13 01:26:33.502531 sshd[1739]: pam_unix(sshd:session): session closed for user core Dec 13 01:26:33.508454 systemd[1]: sshd@8-10.128.0.80:22-147.75.109.163:60194.service: Deactivated successfully. Dec 13 01:26:33.511213 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:26:33.511610 systemd[1]: session-9.scope: Consumed 6.623s CPU time, 156.3M memory peak, 0B memory swap peak. Dec 13 01:26:33.512667 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:26:33.514128 systemd-logind[1446]: Removed session 9. Dec 13 01:26:35.058968 kubelet[2566]: I1213 01:26:35.058910 2566 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:26:35.059627 containerd[1469]: time="2024-12-13T01:26:35.059394655Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:26:35.060057 kubelet[2566]: I1213 01:26:35.059681 2566 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:26:35.865302 systemd[1]: Created slice kubepods-besteffort-podf5991462_91ef_46f9_a08e_8035f388a1e6.slice - libcontainer container kubepods-besteffort-podf5991462_91ef_46f9_a08e_8035f388a1e6.slice. Dec 13 01:26:35.909691 systemd[1]: Created slice kubepods-burstable-pod2de7733b_8502_4431_b3f9_45c7f0b51cc6.slice - libcontainer container kubepods-burstable-pod2de7733b_8502_4431_b3f9_45c7f0b51cc6.slice. 
Dec 13 01:26:35.913725 kubelet[2566]: I1213 01:26:35.913687 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5991462-91ef-46f9-a08e-8035f388a1e6-lib-modules\") pod \"kube-proxy-86fkr\" (UID: \"f5991462-91ef-46f9-a08e-8035f388a1e6\") " pod="kube-system/kube-proxy-86fkr" Dec 13 01:26:35.913972 kubelet[2566]: I1213 01:26:35.913945 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-host-proc-sys-net\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.914160 kubelet[2566]: I1213 01:26:35.914137 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-etc-cni-netd\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.914491 kubelet[2566]: I1213 01:26:35.914455 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-hostproc\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915255 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-config-path\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915294 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc2qk\" (UniqueName: \"kubernetes.io/projected/2de7733b-8502-4431-b3f9-45c7f0b51cc6-kube-api-access-dc2qk\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915323 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5991462-91ef-46f9-a08e-8035f388a1e6-kube-proxy\") pod \"kube-proxy-86fkr\" (UID: \"f5991462-91ef-46f9-a08e-8035f388a1e6\") " pod="kube-system/kube-proxy-86fkr" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915349 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-cgroup\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915374 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cni-path\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915399 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-xtables-lock\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915427 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5991462-91ef-46f9-a08e-8035f388a1e6-xtables-lock\") pod \"kube-proxy-86fkr\" (UID: \"f5991462-91ef-46f9-a08e-8035f388a1e6\") " pod="kube-system/kube-proxy-86fkr" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915453 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ftz\" (UniqueName: \"kubernetes.io/projected/f5991462-91ef-46f9-a08e-8035f388a1e6-kube-api-access-m8ftz\") pod \"kube-proxy-86fkr\" (UID: \"f5991462-91ef-46f9-a08e-8035f388a1e6\") " pod="kube-system/kube-proxy-86fkr" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915488 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-run\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915513 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-host-proc-sys-kernel\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915539 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2de7733b-8502-4431-b3f9-45c7f0b51cc6-hubble-tls\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915574 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-bpf-maps\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915599 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-lib-modules\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:35.915821 kubelet[2566]: I1213 01:26:35.915625 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2de7733b-8502-4431-b3f9-45c7f0b51cc6-clustermesh-secrets\") pod \"cilium-m9wgs\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " pod="kube-system/cilium-m9wgs" Dec 13 01:26:36.134462 systemd[1]: Created slice kubepods-besteffort-podc1168053_9e94_487d_b091_8c92aa694e49.slice - libcontainer container kubepods-besteffort-podc1168053_9e94_487d_b091_8c92aa694e49.slice. 
Dec 13 01:26:36.179068 containerd[1469]: time="2024-12-13T01:26:36.179012174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86fkr,Uid:f5991462-91ef-46f9-a08e-8035f388a1e6,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:36.216017 containerd[1469]: time="2024-12-13T01:26:36.215915081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:36.216808 containerd[1469]: time="2024-12-13T01:26:36.216725907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:36.216917 containerd[1469]: time="2024-12-13T01:26:36.216843266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:36.217091 containerd[1469]: time="2024-12-13T01:26:36.217023261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:36.219725 kubelet[2566]: I1213 01:26:36.218818 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9c4d\" (UniqueName: \"kubernetes.io/projected/c1168053-9e94-487d-b091-8c92aa694e49-kube-api-access-l9c4d\") pod \"cilium-operator-5d85765b45-pqdpq\" (UID: \"c1168053-9e94-487d-b091-8c92aa694e49\") " pod="kube-system/cilium-operator-5d85765b45-pqdpq" Dec 13 01:26:36.219725 kubelet[2566]: I1213 01:26:36.219657 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1168053-9e94-487d-b091-8c92aa694e49-cilium-config-path\") pod \"cilium-operator-5d85765b45-pqdpq\" (UID: \"c1168053-9e94-487d-b091-8c92aa694e49\") " pod="kube-system/cilium-operator-5d85765b45-pqdpq" Dec 13 01:26:36.220681 containerd[1469]: time="2024-12-13T01:26:36.220157538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m9wgs,Uid:2de7733b-8502-4431-b3f9-45c7f0b51cc6,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:36.248405 systemd[1]: Started cri-containerd-f99a7add68826cfbbf83fe9aa3e0817cb0bee84c86e855718a163d550ea0d4cc.scope - libcontainer container f99a7add68826cfbbf83fe9aa3e0817cb0bee84c86e855718a163d550ea0d4cc. Dec 13 01:26:36.262927 containerd[1469]: time="2024-12-13T01:26:36.262475076Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:36.262927 containerd[1469]: time="2024-12-13T01:26:36.262551692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:36.262927 containerd[1469]: time="2024-12-13T01:26:36.262588184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:36.262927 containerd[1469]: time="2024-12-13T01:26:36.262739061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:36.299401 systemd[1]: Started cri-containerd-316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73.scope - libcontainer container 316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73. 
Dec 13 01:26:36.301269 containerd[1469]: time="2024-12-13T01:26:36.300619914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-86fkr,Uid:f5991462-91ef-46f9-a08e-8035f388a1e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f99a7add68826cfbbf83fe9aa3e0817cb0bee84c86e855718a163d550ea0d4cc\"" Dec 13 01:26:36.307586 containerd[1469]: time="2024-12-13T01:26:36.307385549Z" level=info msg="CreateContainer within sandbox \"f99a7add68826cfbbf83fe9aa3e0817cb0bee84c86e855718a163d550ea0d4cc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:26:36.351807 containerd[1469]: time="2024-12-13T01:26:36.351737137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m9wgs,Uid:2de7733b-8502-4431-b3f9-45c7f0b51cc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\"" Dec 13 01:26:36.352595 containerd[1469]: time="2024-12-13T01:26:36.352503324Z" level=info msg="CreateContainer within sandbox \"f99a7add68826cfbbf83fe9aa3e0817cb0bee84c86e855718a163d550ea0d4cc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8018190ee99362a06ae7121f9a1a83f92114cfee08b06511fca549962d632018\"" Dec 13 01:26:36.355731 containerd[1469]: time="2024-12-13T01:26:36.355688734Z" level=info msg="StartContainer for \"8018190ee99362a06ae7121f9a1a83f92114cfee08b06511fca549962d632018\"" Dec 13 01:26:36.360627 containerd[1469]: time="2024-12-13T01:26:36.360412166Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:26:36.396393 systemd[1]: Started cri-containerd-8018190ee99362a06ae7121f9a1a83f92114cfee08b06511fca549962d632018.scope - libcontainer container 8018190ee99362a06ae7121f9a1a83f92114cfee08b06511fca549962d632018. Dec 13 01:26:36.440047 containerd[1469]: time="2024-12-13T01:26:36.440001553Z" level=info msg="StartContainer for \"8018190ee99362a06ae7121f9a1a83f92114cfee08b06511fca549962d632018\" returns successfully" Dec 13 01:26:36.445006 containerd[1469]: time="2024-12-13T01:26:36.444127713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pqdpq,Uid:c1168053-9e94-487d-b091-8c92aa694e49,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:36.484910 containerd[1469]: time="2024-12-13T01:26:36.484081570Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:36.484910 containerd[1469]: time="2024-12-13T01:26:36.484272084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:36.484910 containerd[1469]: time="2024-12-13T01:26:36.484304315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:36.484910 containerd[1469]: time="2024-12-13T01:26:36.484417949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:36.523799 systemd[1]: Started cri-containerd-13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40.scope - libcontainer container 13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40. 
Dec 13 01:26:36.601929 containerd[1469]: time="2024-12-13T01:26:36.601732071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-pqdpq,Uid:c1168053-9e94-487d-b091-8c92aa694e49,Namespace:kube-system,Attempt:0,} returns sandbox id \"13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40\"" Dec 13 01:26:36.794900 kubelet[2566]: I1213 01:26:36.794163 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-86fkr" podStartSLOduration=1.794138173 podStartE2EDuration="1.794138173s" podCreationTimestamp="2024-12-13 01:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:36.791485885 +0000 UTC m=+6.242118639" watchObservedRunningTime="2024-12-13 01:26:36.794138173 +0000 UTC m=+6.244770925" Dec 13 01:26:41.262964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222587533.mount: Deactivated successfully. Dec 13 01:26:43.981024 containerd[1469]: time="2024-12-13T01:26:43.980940941Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:43.982444 containerd[1469]: time="2024-12-13T01:26:43.982370838Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166733543" Dec 13 01:26:43.983964 containerd[1469]: time="2024-12-13T01:26:43.983896258Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:43.986125 containerd[1469]: time="2024-12-13T01:26:43.986082137Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 7.625620947s" Dec 13 01:26:43.986440 containerd[1469]: time="2024-12-13T01:26:43.986299742Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 01:26:43.988943 containerd[1469]: time="2024-12-13T01:26:43.988675805Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:26:43.991152 containerd[1469]: time="2024-12-13T01:26:43.991112626Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:26:44.017322 containerd[1469]: time="2024-12-13T01:26:44.016469101Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\"" Dec 13 01:26:44.017878 containerd[1469]: time="2024-12-13T01:26:44.017842936Z" level=info msg="StartContainer for 
\"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\"" Dec 13 01:26:44.066388 systemd[1]: Started cri-containerd-f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1.scope - libcontainer container f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1. Dec 13 01:26:44.105024 containerd[1469]: time="2024-12-13T01:26:44.104941598Z" level=info msg="StartContainer for \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\" returns successfully" Dec 13 01:26:44.119452 systemd[1]: cri-containerd-f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1.scope: Deactivated successfully. Dec 13 01:26:45.004531 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1-rootfs.mount: Deactivated successfully. Dec 13 01:26:45.955975 containerd[1469]: time="2024-12-13T01:26:45.955852616Z" level=info msg="shim disconnected" id=f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1 namespace=k8s.io Dec 13 01:26:45.955975 containerd[1469]: time="2024-12-13T01:26:45.955968937Z" level=warning msg="cleaning up after shim disconnected" id=f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1 namespace=k8s.io Dec 13 01:26:45.955975 containerd[1469]: time="2024-12-13T01:26:45.955985533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:26:46.474229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47960112.mount: Deactivated successfully. Dec 13 01:26:46.800294 containerd[1469]: time="2024-12-13T01:26:46.798702922Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:26:46.830808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1072350627.mount: Deactivated successfully. Dec 13 01:26:46.841134 containerd[1469]: time="2024-12-13T01:26:46.840994144Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\"" Dec 13 01:26:46.842881 containerd[1469]: time="2024-12-13T01:26:46.842137029Z" level=info msg="StartContainer for \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\"" Dec 13 01:26:46.893406 systemd[1]: Started cri-containerd-baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636.scope - libcontainer container baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636. Dec 13 01:26:46.943166 containerd[1469]: time="2024-12-13T01:26:46.941542491Z" level=info msg="StartContainer for \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\" returns successfully" Dec 13 01:26:46.973339 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:26:46.973597 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:26:46.973701 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:46.982537 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:26:46.982837 systemd[1]: cri-containerd-baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636.scope: Deactivated successfully. Dec 13 01:26:47.023888 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Dec 13 01:26:47.073980 containerd[1469]: time="2024-12-13T01:26:47.073780170Z" level=info msg="shim disconnected" id=baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636 namespace=k8s.io Dec 13 01:26:47.075192 containerd[1469]: time="2024-12-13T01:26:47.074692994Z" level=warning msg="cleaning up after shim disconnected" id=baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636 namespace=k8s.io Dec 13 01:26:47.075192 containerd[1469]: time="2024-12-13T01:26:47.074743199Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:26:47.459052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636-rootfs.mount: Deactivated successfully. Dec 13 01:26:47.526570 containerd[1469]: time="2024-12-13T01:26:47.526506056Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:47.527838 containerd[1469]: time="2024-12-13T01:26:47.527755019Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906629" Dec 13 01:26:47.529701 containerd[1469]: time="2024-12-13T01:26:47.529624244Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:26:47.531925 containerd[1469]: time="2024-12-13T01:26:47.531430702Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.542708906s" Dec 13 01:26:47.531925 containerd[1469]: time="2024-12-13T01:26:47.531482125Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 01:26:47.534447 containerd[1469]: time="2024-12-13T01:26:47.534271536Z" level=info msg="CreateContainer within sandbox \"13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:26:47.552019 containerd[1469]: time="2024-12-13T01:26:47.551970719Z" level=info msg="CreateContainer within sandbox \"13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\"" Dec 13 01:26:47.553237 containerd[1469]: time="2024-12-13T01:26:47.552523024Z" level=info msg="StartContainer for \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\"" Dec 13 01:26:47.603380 systemd[1]: Started cri-containerd-a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d.scope - libcontainer container a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d. 
Dec 13 01:26:47.637262 containerd[1469]: time="2024-12-13T01:26:47.637145615Z" level=info msg="StartContainer for \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\" returns successfully" Dec 13 01:26:47.810899 containerd[1469]: time="2024-12-13T01:26:47.809907018Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:26:47.834456 containerd[1469]: time="2024-12-13T01:26:47.834294514Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\"" Dec 13 01:26:47.835300 containerd[1469]: time="2024-12-13T01:26:47.835260650Z" level=info msg="StartContainer for \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\"" Dec 13 01:26:47.869491 kubelet[2566]: I1213 01:26:47.869251 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-pqdpq" podStartSLOduration=0.941121022 podStartE2EDuration="11.869221065s" podCreationTimestamp="2024-12-13 01:26:36 +0000 UTC" firstStartedPulling="2024-12-13 01:26:36.604463138 +0000 UTC m=+6.055095879" lastFinishedPulling="2024-12-13 01:26:47.532563181 +0000 UTC m=+16.983195922" observedRunningTime="2024-12-13 01:26:47.856448172 +0000 UTC m=+17.307080928" watchObservedRunningTime="2024-12-13 01:26:47.869221065 +0000 UTC m=+17.319853823" Dec 13 01:26:47.911418 systemd[1]: Started cri-containerd-b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2.scope - libcontainer container b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2. Dec 13 01:26:48.042506 containerd[1469]: time="2024-12-13T01:26:48.042444615Z" level=info msg="StartContainer for \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\" returns successfully" Dec 13 01:26:48.060429 systemd[1]: cri-containerd-b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2.scope: Deactivated successfully. 
Dec 13 01:26:48.208219 containerd[1469]: time="2024-12-13T01:26:48.207123664Z" level=info msg="shim disconnected" id=b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2 namespace=k8s.io Dec 13 01:26:48.208219 containerd[1469]: time="2024-12-13T01:26:48.207231950Z" level=warning msg="cleaning up after shim disconnected" id=b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2 namespace=k8s.io Dec 13 01:26:48.208219 containerd[1469]: time="2024-12-13T01:26:48.207249182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:26:48.813428 containerd[1469]: time="2024-12-13T01:26:48.813286201Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:26:48.835203 containerd[1469]: time="2024-12-13T01:26:48.832827424Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\"" Dec 13 01:26:48.838358 containerd[1469]: time="2024-12-13T01:26:48.836113040Z" level=info msg="StartContainer for \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\"" Dec 13 01:26:48.881376 systemd[1]: Started cri-containerd-64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e.scope - libcontainer container 64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e. Dec 13 01:26:48.917067 systemd[1]: cri-containerd-64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e.scope: Deactivated successfully. Dec 13 01:26:48.922217 containerd[1469]: time="2024-12-13T01:26:48.922103738Z" level=info msg="StartContainer for \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\" returns successfully" Dec 13 01:26:48.950595 containerd[1469]: time="2024-12-13T01:26:48.950515360Z" level=info msg="shim disconnected" id=64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e namespace=k8s.io Dec 13 01:26:48.950595 containerd[1469]: time="2024-12-13T01:26:48.950575211Z" level=warning msg="cleaning up after shim disconnected" id=64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e namespace=k8s.io Dec 13 01:26:48.950595 containerd[1469]: time="2024-12-13T01:26:48.950591016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:26:49.458685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e-rootfs.mount: Deactivated successfully. 
Dec 13 01:26:49.819098 containerd[1469]: time="2024-12-13T01:26:49.819054083Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:26:49.857425 containerd[1469]: time="2024-12-13T01:26:49.856939472Z" level=info msg="CreateContainer within sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\"" Dec 13 01:26:49.859073 containerd[1469]: time="2024-12-13T01:26:49.859034509Z" level=info msg="StartContainer for \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\"" Dec 13 01:26:49.903935 systemd[1]: Started cri-containerd-10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323.scope - libcontainer container 10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323. Dec 13 01:26:49.947876 containerd[1469]: time="2024-12-13T01:26:49.947727412Z" level=info msg="StartContainer for \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\" returns successfully" Dec 13 01:26:50.099302 kubelet[2566]: I1213 01:26:50.098739 2566 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:26:50.159209 systemd[1]: Created slice kubepods-burstable-pod9debee37_0389_4ef8_93d0_bb6338332c76.slice - libcontainer container kubepods-burstable-pod9debee37_0389_4ef8_93d0_bb6338332c76.slice. Dec 13 01:26:50.168054 systemd[1]: Created slice kubepods-burstable-podfd1dc9b7_47b9_4b91_ac95_d0697fe9d17c.slice - libcontainer container kubepods-burstable-podfd1dc9b7_47b9_4b91_ac95_d0697fe9d17c.slice. Dec 13 01:26:50.219487 kubelet[2566]: I1213 01:26:50.219435 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9debee37-0389-4ef8-93d0-bb6338332c76-config-volume\") pod \"coredns-6f6b679f8f-54b75\" (UID: \"9debee37-0389-4ef8-93d0-bb6338332c76\") " pod="kube-system/coredns-6f6b679f8f-54b75" Dec 13 01:26:50.219487 kubelet[2566]: I1213 01:26:50.219495 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fd1dc9b7-47b9-4b91-ac95-d0697fe9d17c-config-volume\") pod \"coredns-6f6b679f8f-2hrh7\" (UID: \"fd1dc9b7-47b9-4b91-ac95-d0697fe9d17c\") " pod="kube-system/coredns-6f6b679f8f-2hrh7" Dec 13 01:26:50.219761 kubelet[2566]: I1213 01:26:50.219525 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlst6\" (UniqueName: \"kubernetes.io/projected/fd1dc9b7-47b9-4b91-ac95-d0697fe9d17c-kube-api-access-xlst6\") pod \"coredns-6f6b679f8f-2hrh7\" (UID: \"fd1dc9b7-47b9-4b91-ac95-d0697fe9d17c\") " pod="kube-system/coredns-6f6b679f8f-2hrh7" Dec 13 01:26:50.219761 kubelet[2566]: I1213 01:26:50.219554 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdggw\" (UniqueName: \"kubernetes.io/projected/9debee37-0389-4ef8-93d0-bb6338332c76-kube-api-access-xdggw\") pod \"coredns-6f6b679f8f-54b75\" (UID: \"9debee37-0389-4ef8-93d0-bb6338332c76\") " pod="kube-system/coredns-6f6b679f8f-54b75" Dec 13 01:26:50.466814 containerd[1469]: time="2024-12-13T01:26:50.466238292Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-54b75,Uid:9debee37-0389-4ef8-93d0-bb6338332c76,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:50.477148 containerd[1469]: time="2024-12-13T01:26:50.477089498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2hrh7,Uid:fd1dc9b7-47b9-4b91-ac95-d0697fe9d17c,Namespace:kube-system,Attempt:0,}" Dec 13 01:26:50.842038 kubelet[2566]: I1213 01:26:50.841942 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-m9wgs" podStartSLOduration=8.212219863 podStartE2EDuration="15.84189626s" podCreationTimestamp="2024-12-13 01:26:35 +0000 UTC" firstStartedPulling="2024-12-13 01:26:36.357925718 +0000 UTC m=+5.808558459" lastFinishedPulling="2024-12-13 01:26:43.987602107 +0000 UTC m=+13.438234856" observedRunningTime="2024-12-13 01:26:50.839918578 +0000 UTC m=+20.290551330" watchObservedRunningTime="2024-12-13 01:26:50.84189626 +0000 UTC m=+20.292529011" Dec 13 01:26:52.268889 systemd-networkd[1383]: cilium_host: Link UP Dec 13 01:26:52.271296 systemd-networkd[1383]: cilium_net: Link UP Dec 13 01:26:52.272456 systemd-networkd[1383]: cilium_net: Gained carrier Dec 13 01:26:52.274241 systemd-networkd[1383]: cilium_host: Gained carrier Dec 13 01:26:52.416090 systemd-networkd[1383]: cilium_vxlan: Link UP Dec 13 01:26:52.417160 systemd-networkd[1383]: cilium_vxlan: Gained carrier Dec 13 01:26:52.513373 systemd-networkd[1383]: cilium_net: Gained IPv6LL Dec 13 01:26:52.691367 kernel: NET: Registered PF_ALG protocol family Dec 13 01:26:53.194792 systemd-networkd[1383]: cilium_host: Gained IPv6LL Dec 13 01:26:53.531298 systemd-networkd[1383]: lxc_health: Link UP Dec 13 01:26:53.535275 systemd-networkd[1383]: lxc_health: Gained carrier Dec 13 01:26:53.642027 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Dec 13 01:26:54.055428 systemd-networkd[1383]: lxca0f3c7c684c6: Link UP Dec 13 01:26:54.064257 kernel: eth0: renamed from tmpb7af8 Dec 13 01:26:54.074233 systemd-networkd[1383]: lxca0f3c7c684c6: Gained carrier Dec 13 01:26:54.112152 systemd-networkd[1383]: lxcc40c8c4c81f9: Link UP Dec 13 01:26:54.123314 kernel: eth0: renamed from tmp74122 Dec 13 01:26:54.135304 systemd-networkd[1383]: lxcc40c8c4c81f9: Gained carrier Dec 13 01:26:54.793374 systemd-networkd[1383]: lxc_health: Gained IPv6LL Dec 13 01:26:55.497844 systemd-networkd[1383]: lxca0f3c7c684c6: Gained IPv6LL Dec 13 01:26:56.011907 systemd-networkd[1383]: lxcc40c8c4c81f9: Gained IPv6LL Dec 13 01:26:58.771340 ntpd[1433]: Listen normally on 7 cilium_host 192.168.0.104:123 Dec 13 01:26:58.774709 ntpd[1433]: 13 Dec 01:26:58 ntpd[1433]: Listen normally on 7 cilium_host 192.168.0.104:123 Dec 13 01:26:58.774709 ntpd[1433]: 13 Dec 01:26:58 ntpd[1433]: Listen normally on 8 cilium_net [fe80::8485:2aff:fe59:bd65%4]:123 Dec 13 01:26:58.774709 ntpd[1433]: 13 Dec 01:26:58 ntpd[1433]: Listen normally on 9 cilium_host [fe80::f04f:36ff:fe68:7a39%5]:123 Dec 13 01:26:58.774709 ntpd[1433]: 13 Dec 01:26:58 ntpd[1433]: Listen normally on 10 cilium_vxlan [fe80::4c9d:b5ff:fe57:8091%6]:123 Dec 13 01:26:58.774709 ntpd[1433]: 13 Dec 01:26:58 ntpd[1433]: Listen normally on 11 lxc_health [fe80::6872:46ff:fe85:2551%8]:123 Dec 13 01:26:58.774709 ntpd[1433]: 13 Dec 01:26:58 ntpd[1433]: Listen normally on 12 lxca0f3c7c684c6 [fe80::e4da:2eff:feed:79ac%10]:123 Dec 13 01:26:58.774709 ntpd[1433]: 13 Dec 01:26:58 ntpd[1433]: Listen normally on 13 lxcc40c8c4c81f9 [fe80::f8e5:27ff:fedb:7e87%12]:123 Dec 13 01:26:58.771472 ntpd[1433]: Listen normally on 8 cilium_net 
[fe80::8485:2aff:fe59:bd65%4]:123 Dec 13 01:26:58.771554 ntpd[1433]: Listen normally on 9 cilium_host [fe80::f04f:36ff:fe68:7a39%5]:123 Dec 13 01:26:58.771613 ntpd[1433]: Listen normally on 10 cilium_vxlan [fe80::4c9d:b5ff:fe57:8091%6]:123 Dec 13 01:26:58.771670 ntpd[1433]: Listen normally on 11 lxc_health [fe80::6872:46ff:fe85:2551%8]:123 Dec 13 01:26:58.771726 ntpd[1433]: Listen normally on 12 lxca0f3c7c684c6 [fe80::e4da:2eff:feed:79ac%10]:123 Dec 13 01:26:58.771784 ntpd[1433]: Listen normally on 13 lxcc40c8c4c81f9 [fe80::f8e5:27ff:fedb:7e87%12]:123 Dec 13 01:26:58.908659 containerd[1469]: time="2024-12-13T01:26:58.908528579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:58.908659 containerd[1469]: time="2024-12-13T01:26:58.908605026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:58.909819 containerd[1469]: time="2024-12-13T01:26:58.909236332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:58.909819 containerd[1469]: time="2024-12-13T01:26:58.909520060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:58.966267 containerd[1469]: time="2024-12-13T01:26:58.963376483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:26:58.966267 containerd[1469]: time="2024-12-13T01:26:58.963489940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:26:58.966267 containerd[1469]: time="2024-12-13T01:26:58.963510368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:58.966267 containerd[1469]: time="2024-12-13T01:26:58.963638373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:26:58.967271 systemd[1]: Started cri-containerd-b7af804030bef8d0e042b87c48ae71329a8dd610f5edaac61ed7d87c2e03c853.scope - libcontainer container b7af804030bef8d0e042b87c48ae71329a8dd610f5edaac61ed7d87c2e03c853. Dec 13 01:26:59.020253 systemd[1]: run-containerd-runc-k8s.io-741222a6765d6782621b71170261fd8230983c2a76bc4dbaa00a6b854ce8f39a-runc.jvS0zP.mount: Deactivated successfully. Dec 13 01:26:59.032422 systemd[1]: Started cri-containerd-741222a6765d6782621b71170261fd8230983c2a76bc4dbaa00a6b854ce8f39a.scope - libcontainer container 741222a6765d6782621b71170261fd8230983c2a76bc4dbaa00a6b854ce8f39a. 
Dec 13 01:26:59.106659 containerd[1469]: time="2024-12-13T01:26:59.106605557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-54b75,Uid:9debee37-0389-4ef8-93d0-bb6338332c76,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7af804030bef8d0e042b87c48ae71329a8dd610f5edaac61ed7d87c2e03c853\"" Dec 13 01:26:59.116667 containerd[1469]: time="2024-12-13T01:26:59.116614952Z" level=info msg="CreateContainer within sandbox \"b7af804030bef8d0e042b87c48ae71329a8dd610f5edaac61ed7d87c2e03c853\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:26:59.148601 containerd[1469]: time="2024-12-13T01:26:59.148550732Z" level=info msg="CreateContainer within sandbox \"b7af804030bef8d0e042b87c48ae71329a8dd610f5edaac61ed7d87c2e03c853\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae4b202774e4de2e8203f80a8763e99d5ff67a8600294a16a542e0fe921db08d\"" Dec 13 01:26:59.150775 containerd[1469]: time="2024-12-13T01:26:59.149445741Z" level=info msg="StartContainer for \"ae4b202774e4de2e8203f80a8763e99d5ff67a8600294a16a542e0fe921db08d\"" Dec 13 01:26:59.179605 containerd[1469]: time="2024-12-13T01:26:59.179539631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2hrh7,Uid:fd1dc9b7-47b9-4b91-ac95-d0697fe9d17c,Namespace:kube-system,Attempt:0,} returns sandbox id \"741222a6765d6782621b71170261fd8230983c2a76bc4dbaa00a6b854ce8f39a\"" Dec 13 01:26:59.190211 containerd[1469]: time="2024-12-13T01:26:59.188856277Z" level=info msg="CreateContainer within sandbox \"741222a6765d6782621b71170261fd8230983c2a76bc4dbaa00a6b854ce8f39a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:26:59.209773 containerd[1469]: time="2024-12-13T01:26:59.209722361Z" level=info msg="CreateContainer within sandbox \"741222a6765d6782621b71170261fd8230983c2a76bc4dbaa00a6b854ce8f39a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d0b4281f476d69ba87970b7842451f81a7997fa9d21aba192e03688f669904aa\"" Dec 13 01:26:59.211217 containerd[1469]: time="2024-12-13T01:26:59.210671109Z" level=info msg="StartContainer for \"d0b4281f476d69ba87970b7842451f81a7997fa9d21aba192e03688f669904aa\"" Dec 13 01:26:59.212414 systemd[1]: Started cri-containerd-ae4b202774e4de2e8203f80a8763e99d5ff67a8600294a16a542e0fe921db08d.scope - libcontainer container ae4b202774e4de2e8203f80a8763e99d5ff67a8600294a16a542e0fe921db08d. Dec 13 01:26:59.264424 systemd[1]: Started cri-containerd-d0b4281f476d69ba87970b7842451f81a7997fa9d21aba192e03688f669904aa.scope - libcontainer container d0b4281f476d69ba87970b7842451f81a7997fa9d21aba192e03688f669904aa. 
Dec 13 01:26:59.269501 containerd[1469]: time="2024-12-13T01:26:59.269449196Z" level=info msg="StartContainer for \"ae4b202774e4de2e8203f80a8763e99d5ff67a8600294a16a542e0fe921db08d\" returns successfully" Dec 13 01:26:59.320436 containerd[1469]: time="2024-12-13T01:26:59.320066905Z" level=info msg="StartContainer for \"d0b4281f476d69ba87970b7842451f81a7997fa9d21aba192e03688f669904aa\" returns successfully" Dec 13 01:26:59.883116 kubelet[2566]: I1213 01:26:59.882989 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-54b75" podStartSLOduration=23.882963657 podStartE2EDuration="23.882963657s" podCreationTimestamp="2024-12-13 01:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:59.882536386 +0000 UTC m=+29.333169137" watchObservedRunningTime="2024-12-13 01:26:59.882963657 +0000 UTC m=+29.333596409" Dec 13 01:26:59.884742 kubelet[2566]: I1213 01:26:59.883897 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2hrh7" podStartSLOduration=23.883879172 podStartE2EDuration="23.883879172s" podCreationTimestamp="2024-12-13 01:26:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:26:59.864463518 +0000 UTC m=+29.315096273" watchObservedRunningTime="2024-12-13 01:26:59.883879172 +0000 UTC m=+29.334511925" Dec 13 01:27:05.504982 kubelet[2566]: I1213 01:27:05.504918 2566 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:27:26.788592 systemd[1]: Started sshd@9-10.128.0.80:22-147.75.109.163:39880.service - OpenSSH per-connection server daemon (147.75.109.163:39880). Dec 13 01:27:27.083422 sshd[3954]: Accepted publickey for core from 147.75.109.163 port 39880 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:27.085421 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:27.091253 systemd-logind[1446]: New session 10 of user core. Dec 13 01:27:27.097400 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:27:27.403018 sshd[3954]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:27.408015 systemd[1]: sshd@9-10.128.0.80:22-147.75.109.163:39880.service: Deactivated successfully. Dec 13 01:27:27.410640 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:27:27.412854 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:27:27.414619 systemd-logind[1446]: Removed session 10. Dec 13 01:27:32.459842 systemd[1]: Started sshd@10-10.128.0.80:22-147.75.109.163:39884.service - OpenSSH per-connection server daemon (147.75.109.163:39884). Dec 13 01:27:32.754846 sshd[3976]: Accepted publickey for core from 147.75.109.163 port 39884 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:32.756759 sshd[3976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:32.763087 systemd-logind[1446]: New session 11 of user core. Dec 13 01:27:32.770411 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:27:33.041365 sshd[3976]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:33.047952 systemd[1]: sshd@10-10.128.0.80:22-147.75.109.163:39884.service: Deactivated successfully. 
Dec 13 01:27:33.050922 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:27:33.052158 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:27:33.053751 systemd-logind[1446]: Removed session 11. Dec 13 01:27:38.105563 systemd[1]: Started sshd@11-10.128.0.80:22-147.75.109.163:59840.service - OpenSSH per-connection server daemon (147.75.109.163:59840). Dec 13 01:27:38.386012 sshd[3992]: Accepted publickey for core from 147.75.109.163 port 59840 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:38.387631 sshd[3992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:38.393923 systemd-logind[1446]: New session 12 of user core. Dec 13 01:27:38.399381 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:27:38.673224 sshd[3992]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:38.679954 systemd[1]: sshd@11-10.128.0.80:22-147.75.109.163:59840.service: Deactivated successfully. Dec 13 01:27:38.682676 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:27:38.683962 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:27:38.685462 systemd-logind[1446]: Removed session 12. Dec 13 01:27:43.731576 systemd[1]: Started sshd@12-10.128.0.80:22-147.75.109.163:59844.service - OpenSSH per-connection server daemon (147.75.109.163:59844). Dec 13 01:27:44.025699 sshd[4006]: Accepted publickey for core from 147.75.109.163 port 59844 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:44.027757 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:44.034338 systemd-logind[1446]: New session 13 of user core. Dec 13 01:27:44.041436 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:27:44.308880 sshd[4006]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:44.314664 systemd[1]: sshd@12-10.128.0.80:22-147.75.109.163:59844.service: Deactivated successfully. Dec 13 01:27:44.317253 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:27:44.318404 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:27:44.319894 systemd-logind[1446]: Removed session 13. Dec 13 01:27:49.371607 systemd[1]: Started sshd@13-10.128.0.80:22-147.75.109.163:53182.service - OpenSSH per-connection server daemon (147.75.109.163:53182). Dec 13 01:27:49.670015 sshd[4020]: Accepted publickey for core from 147.75.109.163 port 53182 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:49.671877 sshd[4020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:49.678520 systemd-logind[1446]: New session 14 of user core. Dec 13 01:27:49.690397 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:27:49.970435 sshd[4020]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:49.976228 systemd[1]: sshd@13-10.128.0.80:22-147.75.109.163:53182.service: Deactivated successfully. Dec 13 01:27:49.978999 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:27:49.980272 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:27:49.981849 systemd-logind[1446]: Removed session 14. Dec 13 01:27:50.028596 systemd[1]: Started sshd@14-10.128.0.80:22-147.75.109.163:53198.service - OpenSSH per-connection server daemon (147.75.109.163:53198). 
Dec 13 01:27:50.316444 sshd[4034]: Accepted publickey for core from 147.75.109.163 port 53198 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:50.318326 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:50.324877 systemd-logind[1446]: New session 15 of user core. Dec 13 01:27:50.329400 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:27:50.650855 sshd[4034]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:50.656700 systemd[1]: sshd@14-10.128.0.80:22-147.75.109.163:53198.service: Deactivated successfully. Dec 13 01:27:50.659149 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:27:50.660435 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:27:50.661802 systemd-logind[1446]: Removed session 15. Dec 13 01:27:50.705579 systemd[1]: Started sshd@15-10.128.0.80:22-147.75.109.163:53202.service - OpenSSH per-connection server daemon (147.75.109.163:53202). Dec 13 01:27:51.000658 sshd[4045]: Accepted publickey for core from 147.75.109.163 port 53202 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:51.005815 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:51.021383 systemd-logind[1446]: New session 16 of user core. Dec 13 01:27:51.029537 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:27:51.287583 sshd[4045]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:51.292158 systemd[1]: sshd@15-10.128.0.80:22-147.75.109.163:53202.service: Deactivated successfully. Dec 13 01:27:51.295153 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:27:51.297562 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:27:51.299713 systemd-logind[1446]: Removed session 16. Dec 13 01:27:56.345582 systemd[1]: Started sshd@16-10.128.0.80:22-147.75.109.163:46892.service - OpenSSH per-connection server daemon (147.75.109.163:46892). Dec 13 01:27:56.638062 sshd[4057]: Accepted publickey for core from 147.75.109.163 port 46892 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:27:56.639960 sshd[4057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:56.646571 systemd-logind[1446]: New session 17 of user core. Dec 13 01:27:56.651382 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:27:56.925083 sshd[4057]: pam_unix(sshd:session): session closed for user core Dec 13 01:27:56.931041 systemd[1]: sshd@16-10.128.0.80:22-147.75.109.163:46892.service: Deactivated successfully. Dec 13 01:27:56.934150 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:27:56.935259 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:27:56.936864 systemd-logind[1446]: Removed session 17. Dec 13 01:28:01.983620 systemd[1]: Started sshd@17-10.128.0.80:22-147.75.109.163:46904.service - OpenSSH per-connection server daemon (147.75.109.163:46904). Dec 13 01:28:02.271427 sshd[4069]: Accepted publickey for core from 147.75.109.163 port 46904 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:02.273341 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:02.279114 systemd-logind[1446]: New session 18 of user core. Dec 13 01:28:02.284419 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 13 01:28:02.559786 sshd[4069]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:02.565659 systemd[1]: sshd@17-10.128.0.80:22-147.75.109.163:46904.service: Deactivated successfully. Dec 13 01:28:02.568724 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:28:02.569872 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:28:02.571447 systemd-logind[1446]: Removed session 18. Dec 13 01:28:02.624053 systemd[1]: Started sshd@18-10.128.0.80:22-147.75.109.163:46916.service - OpenSSH per-connection server daemon (147.75.109.163:46916). Dec 13 01:28:02.903133 sshd[4082]: Accepted publickey for core from 147.75.109.163 port 46916 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:02.905035 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:02.911525 systemd-logind[1446]: New session 19 of user core. Dec 13 01:28:02.916416 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:28:03.246873 sshd[4082]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:03.252678 systemd[1]: sshd@18-10.128.0.80:22-147.75.109.163:46916.service: Deactivated successfully. Dec 13 01:28:03.255563 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:28:03.256727 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:28:03.258307 systemd-logind[1446]: Removed session 19. Dec 13 01:28:03.302726 systemd[1]: Started sshd@19-10.128.0.80:22-147.75.109.163:46932.service - OpenSSH per-connection server daemon (147.75.109.163:46932). Dec 13 01:28:03.587021 sshd[4092]: Accepted publickey for core from 147.75.109.163 port 46932 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:03.589107 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:03.594890 systemd-logind[1446]: New session 20 of user core. Dec 13 01:28:03.604432 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:28:05.374214 sshd[4092]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:05.383456 systemd[1]: sshd@19-10.128.0.80:22-147.75.109.163:46932.service: Deactivated successfully. Dec 13 01:28:05.383674 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:28:05.389128 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:28:05.392939 systemd-logind[1446]: Removed session 20. Dec 13 01:28:05.430873 systemd[1]: Started sshd@20-10.128.0.80:22-147.75.109.163:46940.service - OpenSSH per-connection server daemon (147.75.109.163:46940). Dec 13 01:28:05.711328 sshd[4110]: Accepted publickey for core from 147.75.109.163 port 46940 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:05.713295 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:05.719324 systemd-logind[1446]: New session 21 of user core. Dec 13 01:28:05.727390 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:28:06.130852 sshd[4110]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:06.136545 systemd[1]: sshd@20-10.128.0.80:22-147.75.109.163:46940.service: Deactivated successfully. Dec 13 01:28:06.139058 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:28:06.140151 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:28:06.141872 systemd-logind[1446]: Removed session 21. 
Dec 13 01:28:06.185985 systemd[1]: Started sshd@21-10.128.0.80:22-147.75.109.163:52452.service - OpenSSH per-connection server daemon (147.75.109.163:52452). Dec 13 01:28:06.465048 sshd[4121]: Accepted publickey for core from 147.75.109.163 port 52452 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:06.467210 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:06.473248 systemd-logind[1446]: New session 22 of user core. Dec 13 01:28:06.479390 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:28:06.748409 sshd[4121]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:06.753030 systemd[1]: sshd@21-10.128.0.80:22-147.75.109.163:52452.service: Deactivated successfully. Dec 13 01:28:06.755579 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:28:06.757572 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:28:06.759130 systemd-logind[1446]: Removed session 22. Dec 13 01:28:11.808844 systemd[1]: Started sshd@22-10.128.0.80:22-147.75.109.163:52468.service - OpenSSH per-connection server daemon (147.75.109.163:52468). Dec 13 01:28:12.102673 sshd[4139]: Accepted publickey for core from 147.75.109.163 port 52468 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:12.104552 sshd[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:12.110778 systemd-logind[1446]: New session 23 of user core. Dec 13 01:28:12.124449 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:28:12.384220 sshd[4139]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:12.390029 systemd[1]: sshd@22-10.128.0.80:22-147.75.109.163:52468.service: Deactivated successfully. Dec 13 01:28:12.392540 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:28:12.393594 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:28:12.395232 systemd-logind[1446]: Removed session 23. Dec 13 01:28:17.442590 systemd[1]: Started sshd@23-10.128.0.80:22-147.75.109.163:46106.service - OpenSSH per-connection server daemon (147.75.109.163:46106). Dec 13 01:28:17.729355 sshd[4155]: Accepted publickey for core from 147.75.109.163 port 46106 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:17.731149 sshd[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:17.736677 systemd-logind[1446]: New session 24 of user core. Dec 13 01:28:17.743384 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:28:18.014044 sshd[4155]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:18.018656 systemd[1]: sshd@23-10.128.0.80:22-147.75.109.163:46106.service: Deactivated successfully. Dec 13 01:28:18.022076 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:28:18.024134 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:28:18.025766 systemd-logind[1446]: Removed session 24. Dec 13 01:28:23.067594 systemd[1]: Started sshd@24-10.128.0.80:22-147.75.109.163:46108.service - OpenSSH per-connection server daemon (147.75.109.163:46108). 
Dec 13 01:28:23.360995 sshd[4168]: Accepted publickey for core from 147.75.109.163 port 46108 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:23.362960 sshd[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:23.371243 systemd-logind[1446]: New session 25 of user core. Dec 13 01:28:23.376432 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:28:23.646930 sshd[4168]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:23.651832 systemd[1]: sshd@24-10.128.0.80:22-147.75.109.163:46108.service: Deactivated successfully. Dec 13 01:28:23.654373 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:28:23.656369 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:28:23.658018 systemd-logind[1446]: Removed session 25. Dec 13 01:28:28.702959 systemd[1]: Started sshd@25-10.128.0.80:22-147.75.109.163:38492.service - OpenSSH per-connection server daemon (147.75.109.163:38492). Dec 13 01:28:28.991750 sshd[4182]: Accepted publickey for core from 147.75.109.163 port 38492 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:28.993744 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:28.999319 systemd-logind[1446]: New session 26 of user core. Dec 13 01:28:29.010385 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:28:29.274717 sshd[4182]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:29.280004 systemd[1]: sshd@25-10.128.0.80:22-147.75.109.163:38492.service: Deactivated successfully. Dec 13 01:28:29.282642 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:28:29.284964 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:28:29.286564 systemd-logind[1446]: Removed session 26. Dec 13 01:28:29.335536 systemd[1]: Started sshd@26-10.128.0.80:22-147.75.109.163:38496.service - OpenSSH per-connection server daemon (147.75.109.163:38496). Dec 13 01:28:29.627286 sshd[4195]: Accepted publickey for core from 147.75.109.163 port 38496 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU Dec 13 01:28:29.629118 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:29.634881 systemd-logind[1446]: New session 27 of user core. Dec 13 01:28:29.640406 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:28:31.185742 containerd[1469]: time="2024-12-13T01:28:31.185661587Z" level=info msg="StopContainer for \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\" with timeout 30 (s)" Dec 13 01:28:31.187201 containerd[1469]: time="2024-12-13T01:28:31.186660199Z" level=info msg="Stop container \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\" with signal terminated" Dec 13 01:28:31.209823 systemd[1]: cri-containerd-a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d.scope: Deactivated successfully. 
Dec 13 01:28:31.231854 containerd[1469]: time="2024-12-13T01:28:31.231516848Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:28:31.242516 containerd[1469]: time="2024-12-13T01:28:31.242356134Z" level=info msg="StopContainer for \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\" with timeout 2 (s)" Dec 13 01:28:31.243114 containerd[1469]: time="2024-12-13T01:28:31.243039675Z" level=info msg="Stop container \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\" with signal terminated" Dec 13 01:28:31.253549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d-rootfs.mount: Deactivated successfully. Dec 13 01:28:31.260195 systemd-networkd[1383]: lxc_health: Link DOWN Dec 13 01:28:31.260212 systemd-networkd[1383]: lxc_health: Lost carrier Dec 13 01:28:31.280864 systemd[1]: cri-containerd-10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323.scope: Deactivated successfully. Dec 13 01:28:31.281603 systemd[1]: cri-containerd-10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323.scope: Consumed 9.027s CPU time. Dec 13 01:28:31.293471 containerd[1469]: time="2024-12-13T01:28:31.293358489Z" level=info msg="shim disconnected" id=a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d namespace=k8s.io Dec 13 01:28:31.293471 containerd[1469]: time="2024-12-13T01:28:31.293464505Z" level=warning msg="cleaning up after shim disconnected" id=a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d namespace=k8s.io Dec 13 01:28:31.293471 containerd[1469]: time="2024-12-13T01:28:31.293479902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:31.332747 containerd[1469]: time="2024-12-13T01:28:31.332214322Z" level=info msg="StopContainer for \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\" returns successfully" Dec 13 01:28:31.334127 containerd[1469]: time="2024-12-13T01:28:31.333460608Z" level=info msg="StopPodSandbox for \"13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40\"" Dec 13 01:28:31.334127 containerd[1469]: time="2024-12-13T01:28:31.333511276Z" level=info msg="Container to stop \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:28:31.334414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323-rootfs.mount: Deactivated successfully. Dec 13 01:28:31.340981 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40-shm.mount: Deactivated successfully. 
Dec 13 01:28:31.344096 containerd[1469]: time="2024-12-13T01:28:31.343831955Z" level=info msg="shim disconnected" id=10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323 namespace=k8s.io Dec 13 01:28:31.344096 containerd[1469]: time="2024-12-13T01:28:31.343895222Z" level=warning msg="cleaning up after shim disconnected" id=10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323 namespace=k8s.io Dec 13 01:28:31.344096 containerd[1469]: time="2024-12-13T01:28:31.343910770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:31.353018 systemd[1]: cri-containerd-13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40.scope: Deactivated successfully. Dec 13 01:28:31.373223 containerd[1469]: time="2024-12-13T01:28:31.373015219Z" level=info msg="StopContainer for \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\" returns successfully" Dec 13 01:28:31.374257 containerd[1469]: time="2024-12-13T01:28:31.373972449Z" level=info msg="StopPodSandbox for \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\"" Dec 13 01:28:31.374257 containerd[1469]: time="2024-12-13T01:28:31.374031137Z" level=info msg="Container to stop \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:28:31.374257 containerd[1469]: time="2024-12-13T01:28:31.374052729Z" level=info msg="Container to stop \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:28:31.374257 containerd[1469]: time="2024-12-13T01:28:31.374070709Z" level=info msg="Container to stop \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:28:31.374257 containerd[1469]: time="2024-12-13T01:28:31.374089153Z" level=info msg="Container to stop \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:28:31.374257 containerd[1469]: time="2024-12-13T01:28:31.374108448Z" level=info msg="Container to stop \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:28:31.380811 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73-shm.mount: Deactivated successfully. Dec 13 01:28:31.387201 systemd[1]: cri-containerd-316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73.scope: Deactivated successfully. 
Dec 13 01:28:31.400435 containerd[1469]: time="2024-12-13T01:28:31.400367056Z" level=info msg="shim disconnected" id=13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40 namespace=k8s.io Dec 13 01:28:31.401460 containerd[1469]: time="2024-12-13T01:28:31.401427533Z" level=warning msg="cleaning up after shim disconnected" id=13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40 namespace=k8s.io Dec 13 01:28:31.401591 containerd[1469]: time="2024-12-13T01:28:31.401568534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:31.426745 containerd[1469]: time="2024-12-13T01:28:31.426688845Z" level=info msg="TearDown network for sandbox \"13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40\" successfully" Dec 13 01:28:31.427026 containerd[1469]: time="2024-12-13T01:28:31.426767510Z" level=info msg="StopPodSandbox for \"13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40\" returns successfully" Dec 13 01:28:31.428541 containerd[1469]: time="2024-12-13T01:28:31.428141286Z" level=info msg="shim disconnected" id=316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73 namespace=k8s.io Dec 13 01:28:31.428541 containerd[1469]: time="2024-12-13T01:28:31.428274929Z" level=warning msg="cleaning up after shim disconnected" id=316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73 namespace=k8s.io Dec 13 01:28:31.428541 containerd[1469]: time="2024-12-13T01:28:31.428291600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:28:31.450532 containerd[1469]: time="2024-12-13T01:28:31.450154371Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:28:31Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:28:31.452084 containerd[1469]: time="2024-12-13T01:28:31.452024998Z" level=info msg="TearDown network for sandbox \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" successfully" Dec 13 01:28:31.452084 containerd[1469]: time="2024-12-13T01:28:31.452063023Z" level=info msg="StopPodSandbox for \"316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73\" returns successfully" Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.576943 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-hostproc\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577011 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc2qk\" (UniqueName: \"kubernetes.io/projected/2de7733b-8502-4431-b3f9-45c7f0b51cc6-kube-api-access-dc2qk\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577033 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-hostproc" (OuterVolumeSpecName: "hostproc") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577045 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-lib-modules\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577078 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-config-path\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577104 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cni-path\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577126 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-xtables-lock\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577150 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-run\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577215 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2de7733b-8502-4431-b3f9-45c7f0b51cc6-hubble-tls\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577247 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9c4d\" (UniqueName: \"kubernetes.io/projected/c1168053-9e94-487d-b091-8c92aa694e49-kube-api-access-l9c4d\") pod \"c1168053-9e94-487d-b091-8c92aa694e49\" (UID: \"c1168053-9e94-487d-b091-8c92aa694e49\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577271 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-host-proc-sys-kernel\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577295 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-bpf-maps\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577325 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2de7733b-8502-4431-b3f9-45c7f0b51cc6-clustermesh-secrets\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") 
" Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577365 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1168053-9e94-487d-b091-8c92aa694e49-cilium-config-path\") pod \"c1168053-9e94-487d-b091-8c92aa694e49\" (UID: \"c1168053-9e94-487d-b091-8c92aa694e49\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577391 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-etc-cni-netd\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.577536 kubelet[2566]: I1213 01:28:31.577417 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-cgroup\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.578983 kubelet[2566]: I1213 01:28:31.577445 2566 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-host-proc-sys-net\") pod \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\" (UID: \"2de7733b-8502-4431-b3f9-45c7f0b51cc6\") " Dec 13 01:28:31.578983 kubelet[2566]: I1213 01:28:31.577502 2566 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-hostproc\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.578983 kubelet[2566]: I1213 01:28:31.577554 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.578983 kubelet[2566]: I1213 01:28:31.577584 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.580938 kubelet[2566]: I1213 01:28:31.580897 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:28:31.581105 kubelet[2566]: I1213 01:28:31.580981 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cni-path" (OuterVolumeSpecName: "cni-path") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.581105 kubelet[2566]: I1213 01:28:31.581012 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.581105 kubelet[2566]: I1213 01:28:31.581036 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.585464 kubelet[2566]: I1213 01:28:31.585300 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.585464 kubelet[2566]: I1213 01:28:31.585355 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.587194 kubelet[2566]: I1213 01:28:31.585902 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.587194 kubelet[2566]: I1213 01:28:31.585946 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:28:31.588824 kubelet[2566]: I1213 01:28:31.588794 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de7733b-8502-4431-b3f9-45c7f0b51cc6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:28:31.589855 kubelet[2566]: I1213 01:28:31.589825 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2de7733b-8502-4431-b3f9-45c7f0b51cc6-kube-api-access-dc2qk" (OuterVolumeSpecName: "kube-api-access-dc2qk") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "kube-api-access-dc2qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:28:31.590360 kubelet[2566]: I1213 01:28:31.590314 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1168053-9e94-487d-b091-8c92aa694e49-kube-api-access-l9c4d" (OuterVolumeSpecName: "kube-api-access-l9c4d") pod "c1168053-9e94-487d-b091-8c92aa694e49" (UID: "c1168053-9e94-487d-b091-8c92aa694e49"). InnerVolumeSpecName "kube-api-access-l9c4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:28:31.591274 kubelet[2566]: I1213 01:28:31.591233 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1168053-9e94-487d-b091-8c92aa694e49-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c1168053-9e94-487d-b091-8c92aa694e49" (UID: "c1168053-9e94-487d-b091-8c92aa694e49"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:28:31.592226 kubelet[2566]: I1213 01:28:31.592183 2566 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2de7733b-8502-4431-b3f9-45c7f0b51cc6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2de7733b-8502-4431-b3f9-45c7f0b51cc6" (UID: "2de7733b-8502-4431-b3f9-45c7f0b51cc6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:28:31.678427 kubelet[2566]: I1213 01:28:31.678375 2566 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2de7733b-8502-4431-b3f9-45c7f0b51cc6-clustermesh-secrets\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678427 kubelet[2566]: I1213 01:28:31.678415 2566 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-host-proc-sys-kernel\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678427 kubelet[2566]: I1213 01:28:31.678433 2566 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-bpf-maps\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678450 2566 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c1168053-9e94-487d-b091-8c92aa694e49-cilium-config-path\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678469 2566 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-etc-cni-netd\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678483 2566 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-host-proc-sys-net\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678498 2566 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-cgroup\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678514 2566 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dc2qk\" (UniqueName: \"kubernetes.io/projected/2de7733b-8502-4431-b3f9-45c7f0b51cc6-kube-api-access-dc2qk\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678530 2566 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-lib-modules\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678547 2566 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-config-path\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678563 2566 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cni-path\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678589 2566 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-xtables-lock\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678604 2566 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2de7733b-8502-4431-b3f9-45c7f0b51cc6-cilium-run\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678619 2566 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2de7733b-8502-4431-b3f9-45c7f0b51cc6-hubble-tls\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:31.678740 kubelet[2566]: I1213 01:28:31.678634 2566 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l9c4d\" (UniqueName: \"kubernetes.io/projected/c1168053-9e94-487d-b091-8c92aa694e49-kube-api-access-l9c4d\") on node \"ci-4081-2-1-7bf6c49e76e7895cf114.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 01:28:32.062335 kubelet[2566]: I1213 01:28:32.060978 2566 scope.go:117] "RemoveContainer" containerID="a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d" Dec 13 01:28:32.067308 containerd[1469]: time="2024-12-13T01:28:32.066862295Z" level=info msg="RemoveContainer for \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\"" Dec 13 01:28:32.073029 systemd[1]: Removed slice kubepods-besteffort-podc1168053_9e94_487d_b091_8c92aa694e49.slice - libcontainer container kubepods-besteffort-podc1168053_9e94_487d_b091_8c92aa694e49.slice. 
Dec 13 01:28:32.078363 containerd[1469]: time="2024-12-13T01:28:32.078237961Z" level=info msg="RemoveContainer for \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\" returns successfully" Dec 13 01:28:32.080203 kubelet[2566]: I1213 01:28:32.079381 2566 scope.go:117] "RemoveContainer" containerID="a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d" Dec 13 01:28:32.081278 containerd[1469]: time="2024-12-13T01:28:32.081164536Z" level=error msg="ContainerStatus for \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\": not found" Dec 13 01:28:32.082073 kubelet[2566]: E1213 01:28:32.081893 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\": not found" containerID="a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d" Dec 13 01:28:32.082253 kubelet[2566]: I1213 01:28:32.082118 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d"} err="failed to get container status \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a86ee7043efb6625549a292a485096649511f17d9788e399dd314de13466995d\": not found" Dec 13 01:28:32.082358 kubelet[2566]: I1213 01:28:32.082262 2566 scope.go:117] "RemoveContainer" containerID="10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323" Dec 13 01:28:32.083924 systemd[1]: Removed slice kubepods-burstable-pod2de7733b_8502_4431_b3f9_45c7f0b51cc6.slice - libcontainer container kubepods-burstable-pod2de7733b_8502_4431_b3f9_45c7f0b51cc6.slice. Dec 13 01:28:32.084096 systemd[1]: kubepods-burstable-pod2de7733b_8502_4431_b3f9_45c7f0b51cc6.slice: Consumed 9.147s CPU time. 
Dec 13 01:28:32.091207 containerd[1469]: time="2024-12-13T01:28:32.090144814Z" level=info msg="RemoveContainer for \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\""
Dec 13 01:28:32.096584 containerd[1469]: time="2024-12-13T01:28:32.096521305Z" level=info msg="RemoveContainer for \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\" returns successfully"
Dec 13 01:28:32.102673 kubelet[2566]: I1213 01:28:32.102616 2566 scope.go:117] "RemoveContainer" containerID="64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e"
Dec 13 01:28:32.106583 containerd[1469]: time="2024-12-13T01:28:32.106537158Z" level=info msg="RemoveContainer for \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\""
Dec 13 01:28:32.111877 containerd[1469]: time="2024-12-13T01:28:32.111820587Z" level=info msg="RemoveContainer for \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\" returns successfully"
Dec 13 01:28:32.112071 kubelet[2566]: I1213 01:28:32.112036 2566 scope.go:117] "RemoveContainer" containerID="b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2"
Dec 13 01:28:32.117574 containerd[1469]: time="2024-12-13T01:28:32.117512918Z" level=info msg="RemoveContainer for \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\""
Dec 13 01:28:32.122379 containerd[1469]: time="2024-12-13T01:28:32.122144155Z" level=info msg="RemoveContainer for \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\" returns successfully"
Dec 13 01:28:32.122825 kubelet[2566]: I1213 01:28:32.122766 2566 scope.go:117] "RemoveContainer" containerID="baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636"
Dec 13 01:28:32.124936 containerd[1469]: time="2024-12-13T01:28:32.124904231Z" level=info msg="RemoveContainer for \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\""
Dec 13 01:28:32.129570 containerd[1469]: time="2024-12-13T01:28:32.129463166Z" level=info msg="RemoveContainer for \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\" returns successfully"
Dec 13 01:28:32.129873 kubelet[2566]: I1213 01:28:32.129852 2566 scope.go:117] "RemoveContainer" containerID="f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1"
Dec 13 01:28:32.131445 containerd[1469]: time="2024-12-13T01:28:32.131406146Z" level=info msg="RemoveContainer for \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\""
Dec 13 01:28:32.135510 containerd[1469]: time="2024-12-13T01:28:32.135478155Z" level=info msg="RemoveContainer for \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\" returns successfully"
Dec 13 01:28:32.135730 kubelet[2566]: I1213 01:28:32.135674 2566 scope.go:117] "RemoveContainer" containerID="10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323"
Dec 13 01:28:32.135957 containerd[1469]: time="2024-12-13T01:28:32.135907252Z" level=error msg="ContainerStatus for \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\": not found"
Dec 13 01:28:32.136132 kubelet[2566]: E1213 01:28:32.136073 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\": not found" containerID="10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323"
Dec 13 01:28:32.136132 kubelet[2566]: I1213 01:28:32.136112 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323"} err="failed to get container status \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\": rpc error: code = NotFound desc = an error occurred when try to find container \"10ec9b81423cd163c22f469c6de41135a3fe51332529e8efc23a169477488323\": not found"
Dec 13 01:28:32.136365 kubelet[2566]: I1213 01:28:32.136144 2566 scope.go:117] "RemoveContainer" containerID="64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e"
Dec 13 01:28:32.136584 containerd[1469]: time="2024-12-13T01:28:32.136446502Z" level=error msg="ContainerStatus for \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\": not found"
Dec 13 01:28:32.136724 kubelet[2566]: E1213 01:28:32.136670 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\": not found" containerID="64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e"
Dec 13 01:28:32.136895 kubelet[2566]: I1213 01:28:32.136731 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e"} err="failed to get container status \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\": rpc error: code = NotFound desc = an error occurred when try to find container \"64ef4349ce402ecbc3848aa0a6863770d30dac88cc61aa42e51514976a55a91e\": not found"
Dec 13 01:28:32.136895 kubelet[2566]: I1213 01:28:32.136758 2566 scope.go:117] "RemoveContainer" containerID="b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2"
Dec 13 01:28:32.137067 containerd[1469]: time="2024-12-13T01:28:32.137014708Z" level=error msg="ContainerStatus for \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\": not found"
Dec 13 01:28:32.137231 kubelet[2566]: E1213 01:28:32.137163 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\": not found" containerID="b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2"
Dec 13 01:28:32.137231 kubelet[2566]: I1213 01:28:32.137216 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2"} err="failed to get container status \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6673b2998760b082cf95c3619b596c6d2fff8b48034277179c88f5ffe8c25b2\": not found"
Dec 13 01:28:32.137469 kubelet[2566]: I1213 01:28:32.137244 2566 scope.go:117] "RemoveContainer" containerID="baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636"
Dec 13 01:28:32.137631 kubelet[2566]: E1213 01:28:32.137591 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\": not found" containerID="baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636"
Dec 13 01:28:32.137631 kubelet[2566]: I1213 01:28:32.137621 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636"} err="failed to get container status \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\": rpc error: code = NotFound desc = an error occurred when try to find container \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\": not found"
Dec 13 01:28:32.137893 containerd[1469]: time="2024-12-13T01:28:32.137451021Z" level=error msg="ContainerStatus for \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"baddbad5ea1ede5525b9056674d3664a6023bfbfd3e864088f01c3d579920636\": not found"
Dec 13 01:28:32.137893 containerd[1469]: time="2024-12-13T01:28:32.137862801Z" level=error msg="ContainerStatus for \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\": not found"
Dec 13 01:28:32.138109 kubelet[2566]: I1213 01:28:32.137645 2566 scope.go:117] "RemoveContainer" containerID="f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1"
Dec 13 01:28:32.138194 kubelet[2566]: E1213 01:28:32.138140 2566 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\": not found" containerID="f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1"
Dec 13 01:28:32.138265 kubelet[2566]: I1213 01:28:32.138202 2566 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1"} err="failed to get container status \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4a6eab261690acb75fa8b6a33d0ce5d12beb027880d053909d286d82bd32bb1\": not found"
Dec 13 01:28:32.202104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-13a567e1f7738ad9d212088e2d2ce5692e7bda218aa253ad9fb13732d702bd40-rootfs.mount: Deactivated successfully.
Dec 13 01:28:32.204326 systemd[1]: var-lib-kubelet-pods-c1168053\x2d9e94\x2d487d\x2db091\x2d8c92aa694e49-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl9c4d.mount: Deactivated successfully.
Dec 13 01:28:32.204466 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-316b477ec6497a335c12aa1c0c9ed11002f911482fca26c9248194efa5662f73-rootfs.mount: Deactivated successfully.
Dec 13 01:28:32.204578 systemd[1]: var-lib-kubelet-pods-2de7733b\x2d8502\x2d4431\x2db3f9\x2d45c7f0b51cc6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddc2qk.mount: Deactivated successfully.
Dec 13 01:28:32.204706 systemd[1]: var-lib-kubelet-pods-2de7733b\x2d8502\x2d4431\x2db3f9\x2d45c7f0b51cc6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 01:28:32.204828 systemd[1]: var-lib-kubelet-pods-2de7733b\x2d8502\x2d4431\x2db3f9\x2d45c7f0b51cc6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 01:28:32.720636 kubelet[2566]: I1213 01:28:32.720148 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2de7733b-8502-4431-b3f9-45c7f0b51cc6" path="/var/lib/kubelet/pods/2de7733b-8502-4431-b3f9-45c7f0b51cc6/volumes"
Dec 13 01:28:32.721419 kubelet[2566]: I1213 01:28:32.721326 2566 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1168053-9e94-487d-b091-8c92aa694e49" path="/var/lib/kubelet/pods/c1168053-9e94-487d-b091-8c92aa694e49/volumes"
Dec 13 01:28:33.165656 sshd[4195]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:33.170580 systemd[1]: sshd@26-10.128.0.80:22-147.75.109.163:38496.service: Deactivated successfully.
Dec 13 01:28:33.173430 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:28:33.175564 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:28:33.177111 systemd-logind[1446]: Removed session 27.
Dec 13 01:28:33.223577 systemd[1]: Started sshd@27-10.128.0.80:22-147.75.109.163:38502.service - OpenSSH per-connection server daemon (147.75.109.163:38502).
Dec 13 01:28:33.513609 sshd[4359]: Accepted publickey for core from 147.75.109.163 port 38502 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:33.515535 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:33.522225 systemd-logind[1446]: New session 28 of user core.
Dec 13 01:28:33.528373 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:28:33.771246 ntpd[1433]: Deleting interface #11 lxc_health, fe80::6872:46ff:fe85:2551%8#123, interface stats: received=0, sent=0, dropped=0, active_time=95 secs
Dec 13 01:28:33.772394 ntpd[1433]: 13 Dec 01:28:33 ntpd[1433]: Deleting interface #11 lxc_health, fe80::6872:46ff:fe85:2551%8#123, interface stats: received=0, sent=0, dropped=0, active_time=95 secs
Dec 13 01:28:34.544017 kubelet[2566]: E1213 01:28:34.542026 2566 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2de7733b-8502-4431-b3f9-45c7f0b51cc6" containerName="cilium-agent"
Dec 13 01:28:34.544017 kubelet[2566]: E1213 01:28:34.542071 2566 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2de7733b-8502-4431-b3f9-45c7f0b51cc6" containerName="mount-cgroup"
Dec 13 01:28:34.544017 kubelet[2566]: E1213 01:28:34.542083 2566 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2de7733b-8502-4431-b3f9-45c7f0b51cc6" containerName="apply-sysctl-overwrites"
Dec 13 01:28:34.544017 kubelet[2566]: E1213 01:28:34.542093 2566 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c1168053-9e94-487d-b091-8c92aa694e49" containerName="cilium-operator"
Dec 13 01:28:34.544017 kubelet[2566]: E1213 01:28:34.542104 2566 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2de7733b-8502-4431-b3f9-45c7f0b51cc6" containerName="clean-cilium-state"
Dec 13 01:28:34.544017 kubelet[2566]: E1213 01:28:34.542115 2566 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2de7733b-8502-4431-b3f9-45c7f0b51cc6" containerName="mount-bpf-fs"
Dec 13 01:28:34.544017 kubelet[2566]: I1213 01:28:34.542155 2566 memory_manager.go:354] "RemoveStaleState removing state" podUID="2de7733b-8502-4431-b3f9-45c7f0b51cc6" containerName="cilium-agent"
Dec 13 01:28:34.544017 kubelet[2566]: I1213 01:28:34.542167 2566 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1168053-9e94-487d-b091-8c92aa694e49" containerName="cilium-operator"
Dec 13 01:28:34.543814 sshd[4359]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:34.556800 systemd[1]: sshd@27-10.128.0.80:22-147.75.109.163:38502.service: Deactivated successfully.
Dec 13 01:28:34.564110 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:28:34.570674 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:28:34.582609 systemd[1]: Created slice kubepods-burstable-pod46ca3628_ecb6_4b9f_8188_9ffcfb890f08.slice - libcontainer container kubepods-burstable-pod46ca3628_ecb6_4b9f_8188_9ffcfb890f08.slice.
Dec 13 01:28:34.583747 systemd-logind[1446]: Removed session 28.
Dec 13 01:28:34.608560 systemd[1]: Started sshd@28-10.128.0.80:22-147.75.109.163:38512.service - OpenSSH per-connection server daemon (147.75.109.163:38512).
Dec 13 01:28:34.695998 kubelet[2566]: I1213 01:28:34.695946 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-bpf-maps\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696220 kubelet[2566]: I1213 01:28:34.696020 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-host-proc-sys-kernel\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696220 kubelet[2566]: I1213 01:28:34.696079 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-hubble-tls\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696220 kubelet[2566]: I1213 01:28:34.696116 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96l72\" (UniqueName: \"kubernetes.io/projected/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-kube-api-access-96l72\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696220 kubelet[2566]: I1213 01:28:34.696152 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-cilium-config-path\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696220 kubelet[2566]: I1213 01:28:34.696199 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-cilium-ipsec-secrets\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696516 kubelet[2566]: I1213 01:28:34.696238 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-cni-path\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696516 kubelet[2566]: I1213 01:28:34.696279 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-clustermesh-secrets\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696516 kubelet[2566]: I1213 01:28:34.696313 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-cilium-run\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696516 kubelet[2566]: I1213 01:28:34.696358 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-etc-cni-netd\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696516 kubelet[2566]: I1213 01:28:34.696398 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-lib-modules\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696516 kubelet[2566]: I1213 01:28:34.696435 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-xtables-lock\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696516 kubelet[2566]: I1213 01:28:34.696469 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-hostproc\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696516 kubelet[2566]: I1213 01:28:34.696496 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-cilium-cgroup\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.696838 kubelet[2566]: I1213 01:28:34.696524 2566 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46ca3628-ecb6-4b9f-8188-9ffcfb890f08-host-proc-sys-net\") pod \"cilium-zl2tq\" (UID: \"46ca3628-ecb6-4b9f-8188-9ffcfb890f08\") " pod="kube-system/cilium-zl2tq"
Dec 13 01:28:34.904080 containerd[1469]: time="2024-12-13T01:28:34.904001153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zl2tq,Uid:46ca3628-ecb6-4b9f-8188-9ffcfb890f08,Namespace:kube-system,Attempt:0,}"
Dec 13 01:28:34.911744 sshd[4370]: Accepted publickey for core from 147.75.109.163 port 38512 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:34.914147 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:34.935281 systemd-logind[1446]: New session 29 of user core.
Dec 13 01:28:34.940511 systemd[1]: Started session-29.scope - Session 29 of User core.
Dec 13 01:28:34.945465 containerd[1469]: time="2024-12-13T01:28:34.943437834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:28:34.945465 containerd[1469]: time="2024-12-13T01:28:34.943526746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:28:34.945465 containerd[1469]: time="2024-12-13T01:28:34.943555791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:28:34.945465 containerd[1469]: time="2024-12-13T01:28:34.943687609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:28:34.977362 systemd[1]: Started cri-containerd-c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa.scope - libcontainer container c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa.
Dec 13 01:28:35.006684 containerd[1469]: time="2024-12-13T01:28:35.006625734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zl2tq,Uid:46ca3628-ecb6-4b9f-8188-9ffcfb890f08,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\""
Dec 13 01:28:35.010775 containerd[1469]: time="2024-12-13T01:28:35.010712887Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 01:28:35.024239 containerd[1469]: time="2024-12-13T01:28:35.024198513Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a131ac6a78828872ebee4e789e1f9d4fee972aac8fa298d9484ceb72128162e0\""
Dec 13 01:28:35.025574 containerd[1469]: time="2024-12-13T01:28:35.024955287Z" level=info msg="StartContainer for \"a131ac6a78828872ebee4e789e1f9d4fee972aac8fa298d9484ceb72128162e0\""
Dec 13 01:28:35.062462 systemd[1]: Started cri-containerd-a131ac6a78828872ebee4e789e1f9d4fee972aac8fa298d9484ceb72128162e0.scope - libcontainer container a131ac6a78828872ebee4e789e1f9d4fee972aac8fa298d9484ceb72128162e0.
Dec 13 01:28:35.097092 containerd[1469]: time="2024-12-13T01:28:35.097032773Z" level=info msg="StartContainer for \"a131ac6a78828872ebee4e789e1f9d4fee972aac8fa298d9484ceb72128162e0\" returns successfully"
Dec 13 01:28:35.108213 systemd[1]: cri-containerd-a131ac6a78828872ebee4e789e1f9d4fee972aac8fa298d9484ceb72128162e0.scope: Deactivated successfully.
Dec 13 01:28:35.135210 sshd[4370]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:35.140033 systemd[1]: sshd@28-10.128.0.80:22-147.75.109.163:38512.service: Deactivated successfully.
Dec 13 01:28:35.143678 systemd[1]: session-29.scope: Deactivated successfully.
Dec 13 01:28:35.145736 systemd-logind[1446]: Session 29 logged out. Waiting for processes to exit.
Dec 13 01:28:35.147149 systemd-logind[1446]: Removed session 29.
Dec 13 01:28:35.149990 containerd[1469]: time="2024-12-13T01:28:35.149923256Z" level=info msg="shim disconnected" id=a131ac6a78828872ebee4e789e1f9d4fee972aac8fa298d9484ceb72128162e0 namespace=k8s.io
Dec 13 01:28:35.150128 containerd[1469]: time="2024-12-13T01:28:35.150085152Z" level=warning msg="cleaning up after shim disconnected" id=a131ac6a78828872ebee4e789e1f9d4fee972aac8fa298d9484ceb72128162e0 namespace=k8s.io
Dec 13 01:28:35.150128 containerd[1469]: time="2024-12-13T01:28:35.150109934Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:28:35.189639 systemd[1]: Started sshd@29-10.128.0.80:22-147.75.109.163:38528.service - OpenSSH per-connection server daemon (147.75.109.163:38528).
Dec 13 01:28:35.483030 sshd[4486]: Accepted publickey for core from 147.75.109.163 port 38528 ssh2: RSA SHA256:AgO0+kxAF8BQ2wAowEKsr4oSS3e2ISsUStD0EZeAJCU
Dec 13 01:28:35.484851 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:28:35.491240 systemd-logind[1446]: New session 30 of user core.
Dec 13 01:28:35.498386 systemd[1]: Started session-30.scope - Session 30 of User core.
Dec 13 01:28:35.821688 kubelet[2566]: E1213 01:28:35.821598 2566 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 01:28:36.099043 containerd[1469]: time="2024-12-13T01:28:36.098800098Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 01:28:36.119899 containerd[1469]: time="2024-12-13T01:28:36.117931199Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70\""
Dec 13 01:28:36.120335 containerd[1469]: time="2024-12-13T01:28:36.120111908Z" level=info msg="StartContainer for \"98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70\""
Dec 13 01:28:36.172382 systemd[1]: Started cri-containerd-98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70.scope - libcontainer container 98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70.
Dec 13 01:28:36.203077 containerd[1469]: time="2024-12-13T01:28:36.203004022Z" level=info msg="StartContainer for \"98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70\" returns successfully"
Dec 13 01:28:36.213160 systemd[1]: cri-containerd-98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70.scope: Deactivated successfully.
Dec 13 01:28:36.245639 containerd[1469]: time="2024-12-13T01:28:36.245569418Z" level=info msg="shim disconnected" id=98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70 namespace=k8s.io
Dec 13 01:28:36.245946 containerd[1469]: time="2024-12-13T01:28:36.245653683Z" level=warning msg="cleaning up after shim disconnected" id=98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70 namespace=k8s.io
Dec 13 01:28:36.245946 containerd[1469]: time="2024-12-13T01:28:36.245693520Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:28:36.809110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98ba59d02feef3a7d19069c892b32a3a9c2b160f6dd230d5c4d5d141f5a3ef70-rootfs.mount: Deactivated successfully.
Dec 13 01:28:37.103255 containerd[1469]: time="2024-12-13T01:28:37.103039926Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 01:28:37.131055 containerd[1469]: time="2024-12-13T01:28:37.130996302Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2\""
Dec 13 01:28:37.131685 containerd[1469]: time="2024-12-13T01:28:37.131649587Z" level=info msg="StartContainer for \"66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2\""
Dec 13 01:28:37.177467 systemd[1]: Started cri-containerd-66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2.scope - libcontainer container 66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2.
Dec 13 01:28:37.215123 containerd[1469]: time="2024-12-13T01:28:37.214565226Z" level=info msg="StartContainer for \"66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2\" returns successfully"
Dec 13 01:28:37.218967 systemd[1]: cri-containerd-66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2.scope: Deactivated successfully.
Dec 13 01:28:37.251880 containerd[1469]: time="2024-12-13T01:28:37.251804546Z" level=info msg="shim disconnected" id=66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2 namespace=k8s.io
Dec 13 01:28:37.251880 containerd[1469]: time="2024-12-13T01:28:37.251879869Z" level=warning msg="cleaning up after shim disconnected" id=66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2 namespace=k8s.io
Dec 13 01:28:37.252239 containerd[1469]: time="2024-12-13T01:28:37.251893651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:28:37.809109 systemd[1]: run-containerd-runc-k8s.io-66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2-runc.WQMti3.mount: Deactivated successfully.
Dec 13 01:28:37.809295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66fa0d1d2feea116724e8286cf68cbd578a7b3c9380be7aaa16e10969a7391b2-rootfs.mount: Deactivated successfully.
Dec 13 01:28:38.108569 containerd[1469]: time="2024-12-13T01:28:38.108493358Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 01:28:38.129213 containerd[1469]: time="2024-12-13T01:28:38.127997955Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900\""
Dec 13 01:28:38.130484 containerd[1469]: time="2024-12-13T01:28:38.129588312Z" level=info msg="StartContainer for \"c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900\""
Dec 13 01:28:38.184730 systemd[1]: run-containerd-runc-k8s.io-c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900-runc.Hu8KDf.mount: Deactivated successfully.
Dec 13 01:28:38.195349 systemd[1]: Started cri-containerd-c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900.scope - libcontainer container c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900.
Dec 13 01:28:38.230216 systemd[1]: cri-containerd-c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900.scope: Deactivated successfully.
Dec 13 01:28:38.232849 containerd[1469]: time="2024-12-13T01:28:38.232791829Z" level=info msg="StartContainer for \"c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900\" returns successfully"
Dec 13 01:28:38.259949 containerd[1469]: time="2024-12-13T01:28:38.259843362Z" level=info msg="shim disconnected" id=c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900 namespace=k8s.io
Dec 13 01:28:38.259949 containerd[1469]: time="2024-12-13T01:28:38.259948917Z" level=warning msg="cleaning up after shim disconnected" id=c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900 namespace=k8s.io
Dec 13 01:28:38.260370 containerd[1469]: time="2024-12-13T01:28:38.259963887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:28:38.279276 containerd[1469]: time="2024-12-13T01:28:38.279075099Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:28:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 13 01:28:38.809263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c109ce1e9745e422965305b0a3d5a319fc0bee2a9aeca3b8dc1ea5339a439900-rootfs.mount: Deactivated successfully.
Dec 13 01:28:39.116618 containerd[1469]: time="2024-12-13T01:28:39.116434408Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 01:28:39.140414 containerd[1469]: time="2024-12-13T01:28:39.140353920Z" level=info msg="CreateContainer within sandbox \"c4937753f6457f2dfb2cca72f0dfdd128f9fa15df9773f6389a578c418af93aa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f199b6b96cb9510c0719bf1777ab047e4e01bc0593f83766829f51c08f479b7\""
Dec 13 01:28:39.140997 containerd[1469]: time="2024-12-13T01:28:39.140893550Z" level=info msg="StartContainer for \"9f199b6b96cb9510c0719bf1777ab047e4e01bc0593f83766829f51c08f479b7\""
Dec 13 01:28:39.183389 systemd[1]: Started cri-containerd-9f199b6b96cb9510c0719bf1777ab047e4e01bc0593f83766829f51c08f479b7.scope - libcontainer container 9f199b6b96cb9510c0719bf1777ab047e4e01bc0593f83766829f51c08f479b7.
Dec 13 01:28:39.220754 containerd[1469]: time="2024-12-13T01:28:39.220687608Z" level=info msg="StartContainer for \"9f199b6b96cb9510c0719bf1777ab047e4e01bc0593f83766829f51c08f479b7\" returns successfully"
Dec 13 01:28:39.656241 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 01:28:40.136204 kubelet[2566]: I1213 01:28:40.136098 2566 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zl2tq" podStartSLOduration=6.136072877 podStartE2EDuration="6.136072877s" podCreationTimestamp="2024-12-13 01:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:28:40.135626446 +0000 UTC m=+129.586259197" watchObservedRunningTime="2024-12-13 01:28:40.136072877 +0000 UTC m=+129.586705638"
Dec 13 01:28:41.930430 systemd[1]: run-containerd-runc-k8s.io-9f199b6b96cb9510c0719bf1777ab047e4e01bc0593f83766829f51c08f479b7-runc.ToDIYD.mount: Deactivated successfully.
Dec 13 01:28:42.949341 systemd-networkd[1383]: lxc_health: Link UP
Dec 13 01:28:42.974204 systemd-networkd[1383]: lxc_health: Gained carrier
Dec 13 01:28:44.105802 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Dec 13 01:28:44.246165 systemd[1]: run-containerd-runc-k8s.io-9f199b6b96cb9510c0719bf1777ab047e4e01bc0593f83766829f51c08f479b7-runc.3nSmFY.mount: Deactivated successfully.
Dec 13 01:28:46.771307 ntpd[1433]: Listen normally on 14 lxc_health [fe80::7074:ccff:fe21:4cd9%14]:123
Dec 13 01:28:46.771835 ntpd[1433]: 13 Dec 01:28:46 ntpd[1433]: Listen normally on 14 lxc_health [fe80::7074:ccff:fe21:4cd9%14]:123
Dec 13 01:28:48.973994 sshd[4486]: pam_unix(sshd:session): session closed for user core
Dec 13 01:28:48.979919 systemd[1]: sshd@29-10.128.0.80:22-147.75.109.163:38528.service: Deactivated successfully.
Dec 13 01:28:48.982994 systemd[1]: session-30.scope: Deactivated successfully.
Dec 13 01:28:48.984312 systemd-logind[1446]: Session 30 logged out. Waiting for processes to exit.
Dec 13 01:28:48.985906 systemd-logind[1446]: Removed session 30.