Dec 13 13:27:24.178412 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024 Dec 13 13:27:24.178470 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:27:24.178489 kernel: BIOS-provided physical RAM map: Dec 13 13:27:24.178502 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 13:27:24.178516 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 13:27:24.178530 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 13:27:24.178547 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 13:27:24.178561 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 13:27:24.178580 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd325fff] usable Dec 13 13:27:24.178594 kernel: BIOS-e820: [mem 0x00000000bd326000-0x00000000bd32dfff] ACPI data Dec 13 13:27:24.178609 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable Dec 13 13:27:24.178624 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 13 13:27:24.178638 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 13:27:24.178653 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 13:27:24.178675 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 13:27:24.178691 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 13:27:24.178707 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Dec 13 13:27:24.178724 kernel: NX (Execute Disable) protection: active Dec 13 13:27:24.178740 kernel: APIC: Static calls initialized Dec 13 13:27:24.178756 kernel: efi: EFI v2.7 by EDK II Dec 13 13:27:24.178772 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd326018 Dec 13 13:27:24.178788 kernel: random: crng init done Dec 13 13:27:24.178804 kernel: secureboot: Secure boot disabled Dec 13 13:27:24.178820 kernel: SMBIOS 2.4 present. Dec 13 13:27:24.178840 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 13:27:24.178856 kernel: Hypervisor detected: KVM Dec 13 13:27:24.178871 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 13:27:24.178886 kernel: kvm-clock: using sched offset of 12972431361 cycles Dec 13 13:27:24.178902 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 13:27:24.178919 kernel: tsc: Detected 2299.998 MHz processor Dec 13 13:27:24.178936 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 13:27:24.178953 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 13:27:24.178969 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 13:27:24.178986 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Dec 13 13:27:24.179005 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 13:27:24.179030 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 13:27:24.179046 kernel: Using GB pages for direct mapping Dec 13 13:27:24.179087 kernel: ACPI: Early table checksum verification disabled Dec 13 13:27:24.179105 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 13:27:24.179122 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 13:27:24.179146 kernel: ACPI: FACP 
0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 13:27:24.179167 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 13:27:24.179184 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 13:27:24.179200 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 13:27:24.179217 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 13:27:24.179233 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 13:27:24.179251 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 13:27:24.179268 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 13:27:24.179289 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 13:27:24.179306 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 13:27:24.179323 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 13:27:24.179340 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 13:27:24.179356 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 13:27:24.179373 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 13:27:24.179390 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 13:27:24.179406 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 13:27:24.179423 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 13:27:24.179444 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 13:27:24.179461 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 13:27:24.179477 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 13:27:24.179494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00000000-0x0009ffff] Dec 13 13:27:24.179511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 13:27:24.179529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 13:27:24.179546 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 13:27:24.179564 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 13:27:24.179582 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Dec 13 13:27:24.179603 kernel: Zone ranges: Dec 13 13:27:24.179620 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 13:27:24.179637 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 13:27:24.179654 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 13:27:24.179671 kernel: Movable zone start for each node Dec 13 13:27:24.179688 kernel: Early memory node ranges Dec 13 13:27:24.179705 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 13:27:24.179722 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 13:27:24.179740 kernel: node 0: [mem 0x0000000000100000-0x00000000bd325fff] Dec 13 13:27:24.179760 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff] Dec 13 13:27:24.179776 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 13:27:24.179793 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 13:27:24.179811 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 13:27:24.179828 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 13:27:24.179845 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 13:27:24.179862 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 13:27:24.179879 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Dec 13 13:27:24.179896 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 13:27:24.179918 kernel: On 
node 0, zone Normal: 32 pages in unavailable ranges Dec 13 13:27:24.179935 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 13:27:24.179953 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 13:27:24.179970 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 13:27:24.179987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 13:27:24.180005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 13:27:24.180029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 13:27:24.180046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 13:27:24.180081 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 13:27:24.180102 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 13:27:24.180120 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 13:27:24.180137 kernel: Booting paravirtualized kernel on KVM Dec 13 13:27:24.180155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 13:27:24.180173 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 13:27:24.180190 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 13:27:24.180207 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 13:27:24.180224 kernel: pcpu-alloc: [0] 0 1 Dec 13 13:27:24.180241 kernel: kvm-guest: PV spinlocks enabled Dec 13 13:27:24.180263 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 13:27:24.180282 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce 
verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:27:24.180300 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 13:27:24.180317 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 13:27:24.180336 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:27:24.180353 kernel: Fallback order for Node 0: 0 Dec 13 13:27:24.180370 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Dec 13 13:27:24.180387 kernel: Policy zone: Normal Dec 13 13:27:24.180408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:27:24.180425 kernel: software IO TLB: area num 2. Dec 13 13:27:24.180443 kernel: Memory: 7511308K/7860552K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 348988K reserved, 0K cma-reserved) Dec 13 13:27:24.180461 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 13:27:24.180478 kernel: Kernel/User page tables isolation: enabled Dec 13 13:27:24.180495 kernel: ftrace: allocating 37874 entries in 148 pages Dec 13 13:27:24.180545 kernel: ftrace: allocated 148 pages with 3 groups Dec 13 13:27:24.180563 kernel: Dynamic Preempt: voluntary Dec 13 13:27:24.180598 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:27:24.180616 kernel: rcu: RCU event tracing is enabled. Dec 13 13:27:24.180635 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 13:27:24.180654 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:27:24.180678 kernel: Rude variant of Tasks RCU enabled. Dec 13 13:27:24.180695 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:27:24.180714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 13:27:24.180733 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 13:27:24.180751 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 13:27:24.180775 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:27:24.180792 kernel: Console: colour dummy device 80x25 Dec 13 13:27:24.180809 kernel: printk: console [ttyS0] enabled Dec 13 13:27:24.180827 kernel: ACPI: Core revision 20230628 Dec 13 13:27:24.180845 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 13:27:24.180863 kernel: x2apic enabled Dec 13 13:27:24.180882 kernel: APIC: Switched APIC routing to: physical x2apic Dec 13 13:27:24.180900 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 13:27:24.180919 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 13:27:24.180942 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Dec 13 13:27:24.180960 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 13:27:24.180977 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 13:27:24.180996 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 13:27:24.181022 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 13:27:24.181041 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 13:27:24.181083 kernel: Spectre V2 : Mitigation: IBRS Dec 13 13:27:24.181100 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 13:27:24.181117 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 13:27:24.181139 kernel: RETBleed: Mitigation: IBRS Dec 13 13:27:24.181155 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 13:27:24.181173 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Dec 13 
13:27:24.181191 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 13 13:27:24.181209 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 13:27:24.181227 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 13:27:24.181244 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 13:27:24.181262 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 13:27:24.181284 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 13:27:24.181301 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 13:27:24.181320 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 13:27:24.181338 kernel: Freeing SMP alternatives memory: 32K Dec 13 13:27:24.181356 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:27:24.181373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:27:24.181391 kernel: landlock: Up and running. Dec 13 13:27:24.181409 kernel: SELinux: Initializing. Dec 13 13:27:24.181427 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:27:24.181450 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 13:27:24.181468 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 13:27:24.181485 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:27:24.181503 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:27:24.181521 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 13:27:24.181539 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. 
Dec 13 13:27:24.181557 kernel: signal: max sigframe size: 1776 Dec 13 13:27:24.181574 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:27:24.181593 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:27:24.181617 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 13:27:24.181635 kernel: smp: Bringing up secondary CPUs ... Dec 13 13:27:24.181652 kernel: smpboot: x86: Booting SMP configuration: Dec 13 13:27:24.181669 kernel: .... node #0, CPUs: #1 Dec 13 13:27:24.181688 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 13:27:24.181707 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 13:27:24.181724 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 13:27:24.181742 kernel: smpboot: Max logical packages: 1 Dec 13 13:27:24.181760 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 13:27:24.181782 kernel: devtmpfs: initialized Dec 13 13:27:24.181800 kernel: x86/mm: Memory block size: 128MB Dec 13 13:27:24.181818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 13:27:24.181835 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:27:24.181852 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 13:27:24.181870 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:27:24.181888 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:27:24.181914 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:27:24.181932 kernel: audit: type=2000 audit(1734096442.209:1): state=initialized audit_enabled=0 res=1 Dec 13 13:27:24.181956 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:27:24.181973 kernel: 
thermal_sys: Registered thermal governor 'user_space' Dec 13 13:27:24.181991 kernel: cpuidle: using governor menu Dec 13 13:27:24.182008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:27:24.182033 kernel: dca service started, version 1.12.1 Dec 13 13:27:24.182051 kernel: PCI: Using configuration type 1 for base access Dec 13 13:27:24.182085 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 13:27:24.182103 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:27:24.182126 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:27:24.182143 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:27:24.182161 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:27:24.182178 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:27:24.182195 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:27:24.182213 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:27:24.182230 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:27:24.182248 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 13:27:24.182266 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 13:27:24.182284 kernel: ACPI: Interpreter enabled Dec 13 13:27:24.182306 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 13:27:24.182324 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 13:27:24.182342 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 13:27:24.182359 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 13:27:24.182377 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 13:27:24.182395 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 13:27:24.182674 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:27:24.182876 kernel: acpi PNP0A03:00: _OSC: not 
requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Dec 13 13:27:24.183104 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Dec 13 13:27:24.183130 kernel: PCI host bridge to bus 0000:00 Dec 13 13:27:24.183322 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 13:27:24.183491 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 13:27:24.183655 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 13:27:24.183822 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 13:27:24.183993 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 13:27:24.184230 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 13:27:24.184440 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 13:27:24.184689 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 13:27:24.184889 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 13:27:24.185131 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 13:27:24.185344 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 13:27:24.185567 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 13:27:24.185777 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 13:27:24.185961 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 13:27:24.186220 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 13:27:24.186436 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 13:27:24.186620 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 13:27:24.186810 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 13:27:24.186836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 13:27:24.186854 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 
10 Dec 13 13:27:24.186871 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 13:27:24.186889 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 13:27:24.186907 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 13:27:24.186927 kernel: iommu: Default domain type: Translated Dec 13 13:27:24.186948 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 13:27:24.186967 kernel: efivars: Registered efivars operations Dec 13 13:27:24.186996 kernel: PCI: Using ACPI for IRQ routing Dec 13 13:27:24.187028 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 13:27:24.187048 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 13:27:24.187091 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 13:27:24.187110 kernel: e820: reserve RAM buffer [mem 0xbd326000-0xbfffffff] Dec 13 13:27:24.187129 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 13:27:24.187149 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 13:27:24.187169 kernel: vgaarb: loaded Dec 13 13:27:24.187190 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 13:27:24.187217 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:27:24.187237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:27:24.187256 kernel: pnp: PnP ACPI init Dec 13 13:27:24.187276 kernel: pnp: PnP ACPI: found 7 devices Dec 13 13:27:24.187296 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 13:27:24.187316 kernel: NET: Registered PF_INET protocol family Dec 13 13:27:24.187334 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 13:27:24.187351 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 13:27:24.187394 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:27:24.187417 kernel: TCP established hash table entries: 
65536 (order: 7, 524288 bytes, linear) Dec 13 13:27:24.187436 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 13:27:24.187455 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 13:27:24.187473 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 13:27:24.187492 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 13:27:24.187511 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:27:24.187529 kernel: NET: Registered PF_XDP protocol family Dec 13 13:27:24.187719 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 13:27:24.187897 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 13:27:24.188124 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 13:27:24.188331 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 13:27:24.188520 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 13:27:24.188544 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:27:24.188563 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 13:27:24.188581 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 13:27:24.188598 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:27:24.188624 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 13:27:24.188642 kernel: clocksource: Switched to clocksource tsc Dec 13 13:27:24.188660 kernel: Initialise system trusted keyrings Dec 13 13:27:24.188677 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 13:27:24.188695 kernel: Key type asymmetric registered Dec 13 13:27:24.188713 kernel: Asymmetric key parser 'x509' registered Dec 13 13:27:24.188730 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:27:24.188748 kernel: 
io scheduler mq-deadline registered Dec 13 13:27:24.188766 kernel: io scheduler kyber registered Dec 13 13:27:24.188788 kernel: io scheduler bfq registered Dec 13 13:27:24.188806 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:27:24.188825 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 13:27:24.189007 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 13:27:24.189037 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 13:27:24.189245 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 13:27:24.189269 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 13:27:24.189446 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 13:27:24.189473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:27:24.189492 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:27:24.189510 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 13:27:24.189528 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 13:27:24.189546 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 13:27:24.189762 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 13:27:24.189788 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 13:27:24.189806 kernel: i8042: Warning: Keylock active Dec 13 13:27:24.189828 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 13:27:24.189846 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 13:27:24.190053 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 13:27:24.190279 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 13:27:24.190444 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T13:27:23 UTC (1734096443) Dec 13 13:27:24.190606 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 13:27:24.190636 kernel: intel_pstate: CPU model not 
supported Dec 13 13:27:24.190654 kernel: pstore: Using crash dump compression: deflate Dec 13 13:27:24.190678 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 13:27:24.190696 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:27:24.190714 kernel: Segment Routing with IPv6 Dec 13 13:27:24.190731 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:27:24.190749 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:27:24.190766 kernel: Key type dns_resolver registered Dec 13 13:27:24.190784 kernel: IPI shorthand broadcast: enabled Dec 13 13:27:24.190802 kernel: sched_clock: Marking stable (1183004741, 131763329)->(1332153649, -17385579) Dec 13 13:27:24.190820 kernel: registered taskstats version 1 Dec 13 13:27:24.190841 kernel: Loading compiled-in X.509 certificates Dec 13 13:27:24.190860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:27:24.190877 kernel: Key type .fscrypt registered Dec 13 13:27:24.190894 kernel: Key type fscrypt-provisioning registered Dec 13 13:27:24.190912 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:27:24.190930 kernel: ima: No architecture policies found Dec 13 13:27:24.190948 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 13:27:24.190966 kernel: clk: Disabling unused clocks Dec 13 13:27:24.190983 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:27:24.191005 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:27:24.191033 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:27:24.191051 kernel: Run /init as init process Dec 13 13:27:24.191092 kernel: with arguments: Dec 13 13:27:24.191110 kernel: /init Dec 13 13:27:24.191127 kernel: with environment: Dec 13 13:27:24.191144 kernel: HOME=/ Dec 13 13:27:24.191162 kernel: TERM=linux Dec 13 13:27:24.191180 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 
13:27:24.191207 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:27:24.191229 systemd[1]: Detected virtualization google. Dec 13 13:27:24.191249 systemd[1]: Detected architecture x86-64. Dec 13 13:27:24.191267 systemd[1]: Running in initrd. Dec 13 13:27:24.191285 systemd[1]: No hostname configured, using default hostname. Dec 13 13:27:24.191303 systemd[1]: Hostname set to . Dec 13 13:27:24.191323 systemd[1]: Initializing machine ID from random generator. Dec 13 13:27:24.191345 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:27:24.191364 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:24.191383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:24.191402 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:27:24.191422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:27:24.191441 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:27:24.191460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:27:24.191486 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:27:24.191521 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:27:24.191545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 13:27:24.191565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:24.191584 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:27:24.191608 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:27:24.191627 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:27:24.191647 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:27:24.191666 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:27:24.191686 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:27:24.191706 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:27:24.191725 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:27:24.191745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:24.191764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:24.191788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:24.191812 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:27:24.191831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:27:24.191851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:24.191870 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:27:24.191889 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:27:24.191909 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:27:24.191928 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:24.191948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:24.191971 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Dec 13 13:27:24.192029 systemd-journald[184]: Collecting audit messages is disabled. Dec 13 13:27:24.192104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:24.192124 systemd-journald[184]: Journal started Dec 13 13:27:24.192168 systemd-journald[184]: Runtime Journal (/run/log/journal/5f59f0be8a8349ff8a85c151d734ce46) is 8.0M, max 148.6M, 140.6M free. Dec 13 13:27:24.196856 systemd-modules-load[185]: Inserted module 'overlay' Dec 13 13:27:24.203182 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:24.207712 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:27:24.220292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:27:24.232249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:24.238584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:24.246263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:24.253144 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:27:24.255437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:24.263215 kernel: Bridge firewalling registered Dec 13 13:27:24.255650 systemd-modules-load[185]: Inserted module 'br_netfilter' Dec 13 13:27:24.261747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:24.276549 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:24.277049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:24.289322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 13:27:24.294579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:24.326507 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:24.336323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:27:24.350470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:24.366284 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:27:24.385925 systemd-resolved[212]: Positive Trust Anchors: Dec 13 13:27:24.385944 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:27:24.386018 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:27:24.392680 systemd-resolved[212]: Defaulting to hostname 'linux'. Dec 13 13:27:24.414278 dracut-cmdline[219]: dracut-dracut-053 Dec 13 13:27:24.414278 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:27:24.394462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
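The dracut-cmdline hook above echoes back the kernel command line it will act on. A minimal sketch of how such a command line splits into key=value parameters and bare flags (the string below is a shortened copy of the one in the log, not a complete reproduction):

```python
# Sketch: split a kernel command line (as logged by dracut-cmdline above)
# into key=value pairs; bare flags map to an empty string.
cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
           "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
           "console=ttyS0,115200n8 flatcar.first_boot=detected "
           "flatcar.oem.id=gce")

params = {}
for token in cmdline.split():
    key, _, value = token.partition("=")  # bare flags yield value == ""
    params[key] = value

print(params["root"])            # LABEL=ROOT
print(params["flatcar.oem.id"])  # gce
```

This is only an illustration of the parameter syntax; dracut's actual parser additionally handles quoting and repeated keys.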
Dec 13 13:27:24.409634 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:24.505114 kernel: SCSI subsystem initialized Dec 13 13:27:24.515085 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:27:24.528093 kernel: iscsi: registered transport (tcp) Dec 13 13:27:24.552109 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:27:24.552185 kernel: QLogic iSCSI HBA Driver Dec 13 13:27:24.609543 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:24.616279 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:27:24.690716 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:27:24.690803 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:27:24.690845 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:27:24.748112 kernel: raid6: avx2x4 gen() 18244 MB/s Dec 13 13:27:24.769098 kernel: raid6: avx2x2 gen() 18287 MB/s Dec 13 13:27:24.795084 kernel: raid6: avx2x1 gen() 14343 MB/s Dec 13 13:27:24.795133 kernel: raid6: using algorithm avx2x2 gen() 18287 MB/s Dec 13 13:27:24.821173 kernel: raid6: .... xor() 18788 MB/s, rmw enabled Dec 13 13:27:24.821245 kernel: raid6: using avx2x2 recovery algorithm Dec 13 13:27:24.851097 kernel: xor: automatically using best checksumming function avx Dec 13 13:27:25.029101 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:27:25.042000 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:27:25.057295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:25.085121 systemd-udevd[401]: Using default interface naming scheme 'v255'. Dec 13 13:27:25.092313 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 13 13:27:25.123254 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:27:25.164275 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Dec 13 13:27:25.201190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:27:25.208276 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:25.309822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:25.346804 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:27:25.401975 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:25.416980 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:27:25.427258 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:27:25.492877 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 13:27:25.492945 kernel: AES CTR mode by8 optimization enabled Dec 13 13:27:25.442422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:25.459251 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:25.566230 kernel: scsi host0: Virtio SCSI HBA Dec 13 13:27:25.487078 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:27:25.570458 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:27:25.599205 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 13:27:25.570916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:25.621869 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 13:27:25.729237 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 13:27:25.729564 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 13:27:25.729806 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 13:27:25.730176 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 13:27:25.730444 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 13:27:25.730699 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:27:25.730729 kernel: GPT:17805311 != 25165823 Dec 13 13:27:25.730751 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:27:25.730775 kernel: GPT:17805311 != 25165823 Dec 13 13:27:25.730797 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:27:25.730819 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:27:25.730843 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 13:27:25.660487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:25.660756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:25.695903 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:25.802219 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (452) Dec 13 13:27:25.802262 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (456) Dec 13 13:27:25.723409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:25.740776 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:27:25.807980 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 13 13:27:25.825267 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
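The GPT warning above ("GPT:17805311 != 25165823") means the backup GPT header is not at the last LBA of the disk, which is the usual signature of a disk image grown to a larger persistent disk. The arithmetic behind the reported numbers can be checked directly (values taken from the log; the "grown from" size is inferred, not logged):

```python
# Reproduce the arithmetic behind the kernel's GPT warning above.
SECTOR = 512                       # "[sda] 25165824 512-byte logical blocks"
total_sectors = 25165824

disk_bytes = total_sectors * SECTOR
expected_alt_header = total_sectors - 1  # backup header belongs at the last LBA
found_alt_header = 17805311              # where the stale backup header sits

print(disk_bytes)           # 12884901888 -> 12.9 GB / 12.0 GiB, as logged
print(expected_alt_header)  # 25165823, the value the kernel expected
# The image was evidently built for a (17805311 + 1)-sector (~8.5 GiB) disk.
# Running `sgdisk -e /dev/sda` (or GNU Parted, as the kernel suggests)
# relocates the backup header to the new end of the disk; on Flatcar the
# first-boot tooling normally handles this automatically.
```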
Dec 13 13:27:25.853527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:25.877705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 13:27:25.901127 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Dec 13 13:27:25.905347 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 13 13:27:25.937239 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:27:25.972534 disk-uuid[541]: Primary Header is updated. Dec 13 13:27:25.972534 disk-uuid[541]: Secondary Entries is updated. Dec 13 13:27:25.972534 disk-uuid[541]: Secondary Header is updated. Dec 13 13:27:26.010182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:27:25.981305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:26.028331 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:27:26.057904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:27.032085 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:27:27.033518 disk-uuid[543]: The operation has completed successfully. Dec 13 13:27:27.107952 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:27:27.108121 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:27:27.133265 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:27:27.165188 sh[565]: Success Dec 13 13:27:27.189290 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 13:27:27.271833 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:27:27.279099 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Dec 13 13:27:27.305516 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:27:27.365371 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:27:27.365410 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:27.365427 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:27:27.365442 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:27:27.365456 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:27:27.392119 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 13:27:27.408367 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:27:27.409379 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:27:27.414451 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:27:27.467179 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:27:27.527838 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:27.527894 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:27.527920 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:27:27.527945 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 13:27:27.527969 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:27:27.545726 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:27:27.563261 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:27.572697 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 13 13:27:27.598326 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:27:27.649923 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:27.684364 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:27.797029 systemd-networkd[750]: lo: Link UP Dec 13 13:27:27.797460 systemd-networkd[750]: lo: Gained carrier Dec 13 13:27:27.799932 systemd-networkd[750]: Enumeration completed Dec 13 13:27:27.800110 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:27.800755 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:27.824223 ignition[694]: Ignition 2.20.0 Dec 13 13:27:27.800761 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:27.824232 ignition[694]: Stage: fetch-offline Dec 13 13:27:27.805972 systemd-networkd[750]: eth0: Link UP Dec 13 13:27:27.824273 ignition[694]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:27.805979 systemd-networkd[750]: eth0: Gained carrier Dec 13 13:27:27.824284 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:27.805992 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:27.824408 ignition[694]: parsed url from cmdline: "" Dec 13 13:27:27.817531 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.84/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 13:27:27.824416 ignition[694]: no config URL provided Dec 13 13:27:27.826628 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 13 13:27:27.824425 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:27:27.866033 systemd[1]: Reached target network.target - Network. Dec 13 13:27:27.824438 ignition[694]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:27:27.879358 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 13:27:27.824446 ignition[694]: failed to fetch config: resource requires networking Dec 13 13:27:27.926597 unknown[759]: fetched base config from "system" Dec 13 13:27:27.824774 ignition[694]: Ignition finished successfully Dec 13 13:27:27.926613 unknown[759]: fetched base config from "system" Dec 13 13:27:27.915965 ignition[759]: Ignition 2.20.0 Dec 13 13:27:27.926626 unknown[759]: fetched user config from "gcp" Dec 13 13:27:27.915974 ignition[759]: Stage: fetch Dec 13 13:27:27.929948 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:27:27.916352 ignition[759]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:27.951288 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:27:27.916371 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:27.996202 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:27:27.916497 ignition[759]: parsed url from cmdline: "" Dec 13 13:27:28.009340 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:27:27.916504 ignition[759]: no config URL provided Dec 13 13:27:28.054497 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:27:27.916514 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:27:28.079748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:27:27.916528 ignition[759]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:27:28.092807 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
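The fetch stage above retrieves the instance's user-data from the GCE metadata server at the link-local address 169.254.169.254, which is why the earlier fetch-offline stage failed with "resource requires networking". A sketch of the request Ignition issues (the mandatory `Metadata-Flavor: Google` header is a documented GCE requirement; the request only succeeds from inside a GCE instance):

```python
# Sketch of the metadata request Ignition's fetch stage performs on GCE.
# Only runnable on a GCE instance; elsewhere the link-local address is
# unreachable, so the actual urlopen call is left commented out.
import urllib.request

URL = ("http://169.254.169.254/computeMetadata/v1/"
       "instance/attributes/user-data")

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
# On a GCE instance:
#   body = urllib.request.urlopen(req, timeout=5).read()
print(req.get_header("Metadata-flavor"))  # Google
```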
Dec 13 13:27:27.916559 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 13:27:28.111320 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:27:27.920591 ignition[759]: GET result: OK Dec 13 13:27:28.118350 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:27:27.920652 ignition[759]: parsing config with SHA512: f964ce0f7072d289e73df10fec097caa5acfef85e1bd43d93207c22ad74edf52d91778a4ea1343196300a0740bd828ac52cdaa78807f13d69a6e1c5c8b216b0c Dec 13 13:27:28.132340 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:27:27.927971 ignition[759]: fetch: fetch complete Dec 13 13:27:28.155253 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:27:27.927984 ignition[759]: fetch: fetch passed Dec 13 13:27:27.928093 ignition[759]: Ignition finished successfully Dec 13 13:27:27.993741 ignition[765]: Ignition 2.20.0 Dec 13 13:27:27.993750 ignition[765]: Stage: kargs Dec 13 13:27:27.993995 ignition[765]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:27.994008 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:27.994963 ignition[765]: kargs: kargs passed Dec 13 13:27:27.995017 ignition[765]: Ignition finished successfully Dec 13 13:27:28.042206 ignition[771]: Ignition 2.20.0 Dec 13 13:27:28.042216 ignition[771]: Stage: disks Dec 13 13:27:28.042434 ignition[771]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:28.042448 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:28.043512 ignition[771]: disks: disks passed Dec 13 13:27:28.043576 ignition[771]: Ignition finished successfully Dec 13 13:27:28.208126 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 13:27:28.389743 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
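Before parsing, Ignition logs a SHA512 digest of the fetched config, as seen in the "parsing config with SHA512: f964ce0f…" line above. A minimal sketch of producing such a digest (the payload below is a hypothetical stand-in; the instance's actual user-data is not shown in the log):

```python
import hashlib

# Hypothetical payload standing in for the fetched user-data; the real
# instance config never appears in the journal, only its digest.
payload = b'{"ignition": {"version": "3.4.0"}}'

digest = hashlib.sha512(payload).hexdigest()
# Ignition logs exactly this kind of 128-hex-character digest.
print(len(digest))  # 128
```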
Dec 13 13:27:28.396261 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:27:28.551099 kernel: EXT4-fs (sda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. Dec 13 13:27:28.551994 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:27:28.560891 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:28.590200 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:27:28.605254 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:27:28.647205 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (788) Dec 13 13:27:28.647243 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:28.647259 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:28.647274 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:27:28.648014 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:27:28.666797 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 13:27:28.666829 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:27:28.648109 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:27:28.648156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:28.692291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:27:28.708509 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:27:28.740264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 13:27:28.858381 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:27:28.869222 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:27:28.879203 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:27:28.889260 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:27:29.029728 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:29.035209 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:27:29.052282 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:27:29.089301 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:29.090374 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:27:29.129325 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:27:29.140480 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:27:29.166323 ignition[900]: INFO : Ignition 2.20.0 Dec 13 13:27:29.166323 ignition[900]: INFO : Stage: mount Dec 13 13:27:29.166323 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:29.166323 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:29.166323 ignition[900]: INFO : mount: mount passed Dec 13 13:27:29.166323 ignition[900]: INFO : Ignition finished successfully Dec 13 13:27:29.163226 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:27:29.195297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 13:27:29.258094 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (912) Dec 13 13:27:29.275471 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:29.275527 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:29.275552 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:27:29.296931 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 13:27:29.296989 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:27:29.300712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:27:29.353309 ignition[929]: INFO : Ignition 2.20.0 Dec 13 13:27:29.353309 ignition[929]: INFO : Stage: files Dec 13 13:27:29.368168 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:29.368168 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:29.368168 ignition[929]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:27:29.368168 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:27:29.368168 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:27:29.368168 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:27:29.368168 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:27:29.368168 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:27:29.365210 unknown[929]: wrote ssh authorized keys file for user: core Dec 13 13:27:29.470210 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:27:29.470210 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 13:27:29.504187 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 13:27:29.696328 systemd-networkd[750]: eth0: Gained IPv6LL Dec 13 13:27:29.803792 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 13:27:29.821183 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:27:29.821183 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 13:27:30.330129 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 13:27:31.292270 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 13:27:31.308207 ignition[929]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 13:27:31.538507 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 13:27:31.905609 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 13:27:31.905609 ignition[929]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:27:31.945195 
ignition[929]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 13:27:31.945195 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:27:31.945195 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:27:31.945195 ignition[929]: INFO : files: files passed Dec 13 13:27:31.945195 ignition[929]: INFO : Ignition finished successfully Dec 13 13:27:31.910095 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:27:31.941433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:27:31.966265 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:27:32.021525 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:27:32.157197 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:32.157197 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:32.021646 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:27:32.224298 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:32.044607 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
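The files stage above (user/SSH-key setup, file and link writes, unit install, preset enable) is driven entirely by the instance's Ignition config, which the log never reproduces. As an illustration only, a hypothetical Butane fragment of the shape that would yield the `prepare-helm.service` unit-write and preset operations logged above (names and contents are assumptions, not the instance's actual config):

```yaml
# Hypothetical Butane fragment (transpiles to Ignition JSON); illustrative
# of the logged operations, not recovered from this instance.
variant: flatcar
version: 1.0.0
systemd:
  units:
    - name: prepare-helm.service
      enabled: true        # produces the "setting preset to enabled" op
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        [Install]
        WantedBy=multi-user.target
```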
Dec 13 13:27:32.059545 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:27:32.089269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:27:32.155680 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:27:32.155801 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:27:32.168426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:27:32.182336 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:27:32.214385 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:27:32.221375 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:27:32.286671 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:27:32.308417 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:27:32.356630 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:32.370459 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:32.381516 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:27:32.401601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:27:32.401824 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:27:32.455424 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:27:32.463549 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:27:32.481560 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:27:32.515486 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:32.525598 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Dec 13 13:27:32.544520 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:27:32.562536 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:27:32.579520 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:27:32.600517 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:27:32.617496 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:27:32.634427 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:27:32.634642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:27:32.675238 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:27:32.675634 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:27:32.693451 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:27:32.693631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:27:32.712553 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:27:32.712762 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:27:32.751509 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:27:32.751723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:27:32.759543 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:27:32.759711 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:27:32.832202 ignition[982]: INFO : Ignition 2.20.0
Dec 13 13:27:32.832202 ignition[982]: INFO : Stage: umount
Dec 13 13:27:32.832202 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:32.832202 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 13:27:32.832202 ignition[982]: INFO : umount: umount passed
Dec 13 13:27:32.832202 ignition[982]: INFO : Ignition finished successfully
Dec 13 13:27:32.785429 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:27:32.825367 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:27:32.841340 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:27:32.841557 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:27:32.884430 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:27:32.884613 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:27:32.913826 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:27:32.914917 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:27:32.915034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:27:32.930860 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:27:32.930975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:27:32.950494 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:27:32.950615 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:27:32.971593 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:27:32.971650 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:27:32.980412 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:27:32.980475 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:27:32.997397 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 13:27:32.997456 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 13:27:33.014396 systemd[1]: Stopped target network.target - Network.
Dec 13 13:27:33.029344 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:27:33.029423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:27:33.044396 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:27:33.062328 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:27:33.066134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:27:33.077329 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:27:33.095332 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:27:33.110360 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:27:33.110418 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:27:33.138346 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:27:33.138406 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:27:33.146417 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:27:33.146492 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:27:33.163375 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:27:33.163434 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:27:33.180375 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:27:33.180434 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:27:33.197577 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:27:33.202116 systemd-networkd[750]: eth0: DHCPv6 lease lost
Dec 13 13:27:33.225335 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:27:33.234839 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:27:33.234968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:27:33.251890 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:27:33.252288 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:27:33.269814 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:27:33.269867 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:27:33.292166 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:27:33.311140 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:27:33.311262 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:27:33.322277 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:27:33.322354 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:27:33.342225 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:27:33.342307 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:27:33.364226 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:27:33.364309 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:27:33.383382 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:27:33.402655 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:27:33.819171 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:27:33.402816 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:27:33.426378 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:27:33.426525 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:27:33.429399 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:27:33.429455 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:27:33.456399 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:27:33.456476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:27:33.490371 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:27:33.490601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:27:33.534204 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:27:33.534429 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:33.568337 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:27:33.571330 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:27:33.571400 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:27:33.626384 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 13:27:33.626472 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:27:33.634401 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:27:33.634475 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:27:33.653463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:27:33.653531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:33.671920 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:27:33.672082 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:27:33.689738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:27:33.689848 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:27:33.711548 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:27:33.734278 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:27:33.769536 systemd[1]: Switching root.
Dec 13 13:27:34.087174 systemd-journald[184]: Journal stopped
Dec 13 13:27:24.178412 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:27:24.178470 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:27:24.178489 kernel: BIOS-provided physical RAM map:
Dec 13 13:27:24.178502 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 13:27:24.178516 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 13:27:24.178530 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 13:27:24.178547 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 13:27:24.178561 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 13:27:24.178580 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd325fff] usable
Dec 13 13:27:24.178594 kernel: BIOS-e820: [mem 0x00000000bd326000-0x00000000bd32dfff] ACPI data
Dec 13 13:27:24.178609 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable
Dec 13 13:27:24.178624 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Dec 13 13:27:24.178638 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 13:27:24.178653 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 13:27:24.178675 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 13:27:24.178691 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 13:27:24.178707 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 13:27:24.178724 kernel: NX (Execute Disable) protection: active
Dec 13 13:27:24.178740 kernel: APIC: Static calls initialized
Dec 13 13:27:24.178756 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:27:24.178772 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd326018
Dec 13 13:27:24.178788 kernel: random: crng init done
Dec 13 13:27:24.178804 kernel: secureboot: Secure boot disabled
Dec 13 13:27:24.178820 kernel: SMBIOS 2.4 present.
Dec 13 13:27:24.178840 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 13:27:24.178856 kernel: Hypervisor detected: KVM
Dec 13 13:27:24.178871 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:27:24.178886 kernel: kvm-clock: using sched offset of 12972431361 cycles
Dec 13 13:27:24.178902 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:27:24.178919 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 13:27:24.178936 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:27:24.178953 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:27:24.178969 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 13:27:24.178986 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Dec 13 13:27:24.179005 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:27:24.179030 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 13:27:24.179046 kernel: Using GB pages for direct mapping
Dec 13 13:27:24.179087 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:27:24.179105 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 13:27:24.179122 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 13 13:27:24.179146 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 13:27:24.179167 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 13:27:24.179184 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 13:27:24.179200 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 13:27:24.179217 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 13 13:27:24.179233 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 13:27:24.179251 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 13:27:24.179268 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 13:27:24.179289 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 13:27:24.179306 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 13:27:24.179323 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 13:27:24.179340 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 13:27:24.179356 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 13:27:24.179373 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 13:27:24.179390 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 13:27:24.179406 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 13:27:24.179423 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 13:27:24.179444 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 13:27:24.179461 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 13:27:24.179477 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 13:27:24.179494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 13:27:24.179511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 13:27:24.179529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 13:27:24.179546 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 13:27:24.179564 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 13:27:24.179582 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Dec 13 13:27:24.179603 kernel: Zone ranges:
Dec 13 13:27:24.179620 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:27:24.179637 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 13:27:24.179654 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 13:27:24.179671 kernel: Movable zone start for each node
Dec 13 13:27:24.179688 kernel: Early memory node ranges
Dec 13 13:27:24.179705 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 13:27:24.179722 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 13:27:24.179740 kernel: node 0: [mem 0x0000000000100000-0x00000000bd325fff]
Dec 13 13:27:24.179760 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff]
Dec 13 13:27:24.179776 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 13:27:24.179793 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 13:27:24.179811 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 13:27:24.179828 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:27:24.179845 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 13:27:24.179862 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 13:27:24.179879 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges
Dec 13 13:27:24.179896 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 13:27:24.179918 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 13:27:24.179935 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 13:27:24.179953 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:27:24.179970 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:27:24.179987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:27:24.180005 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:27:24.180029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:27:24.180046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:27:24.180081 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:27:24.180102 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 13:27:24.180120 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 13:27:24.180137 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:27:24.180155 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:27:24.180173 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 13:27:24.180190 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 13:27:24.180207 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 13:27:24.180224 kernel: pcpu-alloc: [0] 0 1
Dec 13 13:27:24.180241 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:27:24.180263 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:27:24.180282 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:27:24.180300 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:27:24.180317 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 13:27:24.180336 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:27:24.180353 kernel: Fallback order for Node 0: 0
Dec 13 13:27:24.180370 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272
Dec 13 13:27:24.180387 kernel: Policy zone: Normal
Dec 13 13:27:24.180408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:27:24.180425 kernel: software IO TLB: area num 2.
Dec 13 13:27:24.180443 kernel: Memory: 7511308K/7860552K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 348988K reserved, 0K cma-reserved)
Dec 13 13:27:24.180461 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 13:27:24.180478 kernel: Kernel/User page tables isolation: enabled
Dec 13 13:27:24.180495 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:27:24.180545 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:27:24.180563 kernel: Dynamic Preempt: voluntary
Dec 13 13:27:24.180598 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:27:24.180616 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:27:24.180635 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 13:27:24.180654 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:27:24.180678 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:27:24.180695 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:27:24.180714 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:27:24.180733 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 13:27:24.180751 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 13:27:24.180775 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:27:24.180792 kernel: Console: colour dummy device 80x25
Dec 13 13:27:24.180809 kernel: printk: console [ttyS0] enabled
Dec 13 13:27:24.180827 kernel: ACPI: Core revision 20230628
Dec 13 13:27:24.180845 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:27:24.180863 kernel: x2apic enabled
Dec 13 13:27:24.180882 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:27:24.180900 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 13:27:24.180919 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 13:27:24.180942 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 13:27:24.180960 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 13:27:24.180977 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 13:27:24.180996 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:27:24.181022 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 13:27:24.181041 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 13:27:24.181083 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 13:27:24.181100 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:27:24.181117 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:27:24.181139 kernel: RETBleed: Mitigation: IBRS
Dec 13 13:27:24.181155 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:27:24.181173 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Dec 13 13:27:24.181191 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:27:24.181209 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 13:27:24.181227 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 13:27:24.181244 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:27:24.181262 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:27:24.181284 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:27:24.181301 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:27:24.181320 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 13:27:24.181338 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:27:24.181356 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:27:24.181373 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:27:24.181391 kernel: landlock: Up and running.
Dec 13 13:27:24.181409 kernel: SELinux: Initializing.
Dec 13 13:27:24.181427 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 13:27:24.181450 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 13:27:24.181468 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 13:27:24.181485 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:27:24.181503 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:27:24.181521 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 13:27:24.181539 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 13:27:24.181557 kernel: signal: max sigframe size: 1776
Dec 13 13:27:24.181574 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:27:24.181593 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:27:24.181617 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 13:27:24.181635 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:27:24.181652 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:27:24.181669 kernel: .... node #0, CPUs: #1
Dec 13 13:27:24.181688 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 13:27:24.181707 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 13:27:24.181724 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 13:27:24.181742 kernel: smpboot: Max logical packages: 1
Dec 13 13:27:24.181760 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 13:27:24.181782 kernel: devtmpfs: initialized
Dec 13 13:27:24.181800 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:27:24.181818 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 13:27:24.181835 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:27:24.181852 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 13:27:24.181870 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:27:24.181888 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:27:24.181914 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:27:24.181932 kernel: audit: type=2000 audit(1734096442.209:1): state=initialized audit_enabled=0 res=1
Dec 13 13:27:24.181956 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:27:24.181973 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:27:24.181991 kernel: cpuidle: using governor menu
Dec 13 13:27:24.182008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:27:24.182033 kernel: dca service started, version 1.12.1
Dec 13 13:27:24.182051 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:27:24.182085 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:27:24.182103 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:27:24.182126 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:27:24.182143 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:27:24.182161 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:27:24.182178 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:27:24.182195 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:27:24.182213 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:27:24.182230 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:27:24.182248 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 13:27:24.182266 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:27:24.182284 kernel: ACPI: Interpreter enabled
Dec 13 13:27:24.182306 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:27:24.182324 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:27:24.182342 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:27:24.182359 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 13 13:27:24.182377 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 13:27:24.182395 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:27:24.182674 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:27:24.182876 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Dec 13 13:27:24.183104 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Dec 13 13:27:24.183130 kernel: PCI host bridge to bus 0000:00
Dec 13 13:27:24.183322 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:27:24.183491 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:27:24.183655 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:27:24.183822 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 13:27:24.183993 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:27:24.184230 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 13:27:24.184440 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 13:27:24.184689 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 13:27:24.184889 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 13:27:24.185131 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 13:27:24.185344 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Dec 13 13:27:24.185567 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 13:27:24.185777 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:27:24.185961 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Dec 13 13:27:24.186220 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 13:27:24.186436 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:27:24.186620 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Dec 13 13:27:24.186810 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 13:27:24.186836 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:27:24.186854 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:27:24.186871 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:27:24.186889 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:27:24.186907 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 13:27:24.186927 kernel: iommu: Default domain type: Translated
Dec 13 13:27:24.186948 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:27:24.186967 kernel: efivars: Registered efivars operations
Dec 13 13:27:24.186996 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:27:24.187028 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:27:24.187048 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 13:27:24.187091 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 13:27:24.187110 kernel: e820: reserve RAM buffer [mem 0xbd326000-0xbfffffff]
Dec 13 13:27:24.187129 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 13:27:24.187149 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 13:27:24.187169 kernel: vgaarb: loaded
Dec 13 13:27:24.187190 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:27:24.187217 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:27:24.187237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:27:24.187256 kernel: pnp: PnP ACPI init
Dec 13 13:27:24.187276 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 13:27:24.187296 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:27:24.187316 kernel: NET: Registered PF_INET protocol family
Dec 13 13:27:24.187334 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 13:27:24.187351 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 13:27:24.187394 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:27:24.187417 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:27:24.187436 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 13 13:27:24.187455 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 13:27:24.187473 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 13:27:24.187492 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 13:27:24.187511 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:27:24.187529 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:27:24.187719 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:27:24.187897 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:27:24.188124 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:27:24.188331 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 13:27:24.188520 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 13:27:24.188544 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:27:24.188563 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 13:27:24.188581 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 13:27:24.188598 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 13:27:24.188624 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 13:27:24.188642 kernel: clocksource: Switched to clocksource tsc
Dec 13 13:27:24.188660 kernel: Initialise system trusted keyrings
Dec 13 13:27:24.188677 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 13:27:24.188695 kernel: Key type asymmetric registered
Dec 13 13:27:24.188713 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:27:24.188730 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:27:24.188748 kernel: io scheduler mq-deadline registered
Dec 13 13:27:24.188766 kernel: io scheduler kyber registered
Dec 13 13:27:24.188788 kernel: io scheduler bfq registered
Dec 13 13:27:24.188806 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:27:24.188825 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 13:27:24.189007 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 13:27:24.189037 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 13:27:24.189245 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 13:27:24.189269 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 13:27:24.189446 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 13:27:24.189473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:27:24.189492 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:27:24.189510 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 13:27:24.189528 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 13:27:24.189546 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 13:27:24.189762 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 13:27:24.189788 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:27:24.189806 kernel: i8042: Warning: Keylock active
Dec 13 13:27:24.189828 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:27:24.189846 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:27:24.190053 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 13:27:24.190279 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 13:27:24.190444 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T13:27:23 UTC (1734096443)
Dec 13 13:27:24.190606 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 13:27:24.190636 kernel: intel_pstate: CPU model not supported
Dec 13 13:27:24.190654 kernel: pstore: Using crash dump compression: deflate
Dec 13 13:27:24.190678 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 13:27:24.190696 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:27:24.190714 kernel: Segment Routing with IPv6
Dec 13 13:27:24.190731 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:27:24.190749 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:27:24.190766 kernel: Key type dns_resolver registered Dec 13 13:27:24.190784 kernel: IPI shorthand broadcast: enabled Dec 13 13:27:24.190802 kernel: sched_clock: Marking stable (1183004741, 131763329)->(1332153649, -17385579) Dec 13 13:27:24.190820 kernel: registered taskstats version 1 Dec 13 13:27:24.190841 kernel: Loading compiled-in X.509 certificates Dec 13 13:27:24.190860 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:27:24.190877 kernel: Key type .fscrypt registered Dec 13 13:27:24.190894 kernel: Key type fscrypt-provisioning registered Dec 13 13:27:24.190912 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:27:24.190930 kernel: ima: No architecture policies found Dec 13 13:27:24.190948 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 13:27:24.190966 kernel: clk: Disabling unused clocks Dec 13 13:27:24.190983 kernel: Freeing unused kernel image (initmem) memory: 43328K Dec 13 13:27:24.191005 kernel: Write protecting the kernel read-only data: 38912k Dec 13 13:27:24.191033 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Dec 13 13:27:24.191051 kernel: Run /init as init process Dec 13 13:27:24.191092 kernel: with arguments: Dec 13 13:27:24.191110 kernel: /init Dec 13 13:27:24.191127 kernel: with environment: Dec 13 13:27:24.191144 kernel: HOME=/ Dec 13 13:27:24.191162 kernel: TERM=linux Dec 13 13:27:24.191180 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:27:24.191207 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 
13:27:24.191229 systemd[1]: Detected virtualization google. Dec 13 13:27:24.191249 systemd[1]: Detected architecture x86-64. Dec 13 13:27:24.191267 systemd[1]: Running in initrd. Dec 13 13:27:24.191285 systemd[1]: No hostname configured, using default hostname. Dec 13 13:27:24.191303 systemd[1]: Hostname set to . Dec 13 13:27:24.191323 systemd[1]: Initializing machine ID from random generator. Dec 13 13:27:24.191345 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:27:24.191364 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:24.191383 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:24.191402 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:27:24.191422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:27:24.191441 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:27:24.191460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:27:24.191486 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:27:24.191521 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:27:24.191545 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:24.191565 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:24.191584 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:27:24.191608 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:27:24.191627 systemd[1]: Reached target swap.target - Swaps. 
Dec 13 13:27:24.191647 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:27:24.191666 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:27:24.191686 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:27:24.191706 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:27:24.191725 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:27:24.191745 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:24.191764 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:24.191788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:24.191812 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:27:24.191831 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:27:24.191851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:24.191870 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:27:24.191889 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:27:24.191909 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:27:24.191928 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:24.191948 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:24.191971 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:27:24.192029 systemd-journald[184]: Collecting audit messages is disabled. Dec 13 13:27:24.192104 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:24.192124 systemd-journald[184]: Journal started Dec 13 13:27:24.192168 systemd-journald[184]: Runtime Journal (/run/log/journal/5f59f0be8a8349ff8a85c151d734ce46) is 8.0M, max 148.6M, 140.6M free. 
Dec 13 13:27:24.196856 systemd-modules-load[185]: Inserted module 'overlay' Dec 13 13:27:24.203182 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:24.207712 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:27:24.220292 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:27:24.232249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:24.238584 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:24.246263 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:24.253144 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:27:24.255437 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:24.263215 kernel: Bridge firewalling registered Dec 13 13:27:24.255650 systemd-modules-load[185]: Inserted module 'br_netfilter' Dec 13 13:27:24.261747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:24.276549 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:24.277049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:24.289322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:24.294579 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:24.326507 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:24.336323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:27:24.350470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Dec 13 13:27:24.366284 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:27:24.385925 systemd-resolved[212]: Positive Trust Anchors: Dec 13 13:27:24.385944 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:27:24.386018 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:27:24.392680 systemd-resolved[212]: Defaulting to hostname 'linux'. Dec 13 13:27:24.414278 dracut-cmdline[219]: dracut-dracut-053 Dec 13 13:27:24.414278 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4 Dec 13 13:27:24.394462 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:27:24.409634 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:24.505114 kernel: SCSI subsystem initialized Dec 13 13:27:24.515085 kernel: Loading iSCSI transport class v2.0-870. 
Dec 13 13:27:24.528093 kernel: iscsi: registered transport (tcp) Dec 13 13:27:24.552109 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:27:24.552185 kernel: QLogic iSCSI HBA Driver Dec 13 13:27:24.609543 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:24.616279 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:27:24.690716 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:27:24.690803 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:27:24.690845 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:27:24.748112 kernel: raid6: avx2x4 gen() 18244 MB/s Dec 13 13:27:24.769098 kernel: raid6: avx2x2 gen() 18287 MB/s Dec 13 13:27:24.795084 kernel: raid6: avx2x1 gen() 14343 MB/s Dec 13 13:27:24.795133 kernel: raid6: using algorithm avx2x2 gen() 18287 MB/s Dec 13 13:27:24.821173 kernel: raid6: .... xor() 18788 MB/s, rmw enabled Dec 13 13:27:24.821245 kernel: raid6: using avx2x2 recovery algorithm Dec 13 13:27:24.851097 kernel: xor: automatically using best checksumming function avx Dec 13 13:27:25.029101 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:27:25.042000 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:27:25.057295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:25.085121 systemd-udevd[401]: Using default interface naming scheme 'v255'. Dec 13 13:27:25.092313 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:25.123254 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:27:25.164275 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation Dec 13 13:27:25.201190 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Dec 13 13:27:25.208276 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:25.309822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:25.346804 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:27:25.401975 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:25.416980 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 13:27:25.427258 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:27:25.492877 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 13:27:25.492945 kernel: AES CTR mode by8 optimization enabled Dec 13 13:27:25.442422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:25.459251 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:25.566230 kernel: scsi host0: Virtio SCSI HBA Dec 13 13:27:25.487078 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:27:25.570458 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:27:25.599205 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 13:27:25.570916 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:25.621869 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:25.729237 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 13:27:25.729564 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 13:27:25.729806 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 13:27:25.730176 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 13:27:25.730444 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 13:27:25.730699 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Dec 13 13:27:25.730729 kernel: GPT:17805311 != 25165823 Dec 13 13:27:25.730751 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:27:25.730775 kernel: GPT:17805311 != 25165823 Dec 13 13:27:25.730797 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:27:25.730819 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:27:25.730843 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 13:27:25.660487 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:25.660756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:25.695903 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:25.802219 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (452) Dec 13 13:27:25.802262 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (456) Dec 13 13:27:25.723409 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:25.740776 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:27:25.807980 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Dec 13 13:27:25.825267 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Dec 13 13:27:25.853527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:25.877705 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Dec 13 13:27:25.901127 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Dec 13 13:27:25.905347 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Dec 13 13:27:25.937239 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Dec 13 13:27:25.972534 disk-uuid[541]: Primary Header is updated. Dec 13 13:27:25.972534 disk-uuid[541]: Secondary Entries is updated. Dec 13 13:27:25.972534 disk-uuid[541]: Secondary Header is updated. Dec 13 13:27:26.010182 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:27:25.981305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:26.028331 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:27:26.057904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:27.032085 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 13:27:27.033518 disk-uuid[543]: The operation has completed successfully. Dec 13 13:27:27.107952 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:27:27.108121 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:27:27.133265 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:27:27.165188 sh[565]: Success Dec 13 13:27:27.189290 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 13:27:27.271833 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:27:27.279099 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:27:27.305516 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 13:27:27.365371 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:27:27.365410 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:27.365427 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:27:27.365442 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:27:27.365456 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:27:27.392119 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 13:27:27.408367 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:27:27.409379 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:27:27.414451 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:27:27.467179 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:27:27.527838 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:27.527894 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:27.527920 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:27:27.527945 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 13:27:27.527969 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:27:27.545726 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:27:27.563261 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:27.572697 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:27:27.598326 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Dec 13 13:27:27.649923 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:27.684364 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:27.797029 systemd-networkd[750]: lo: Link UP Dec 13 13:27:27.797460 systemd-networkd[750]: lo: Gained carrier Dec 13 13:27:27.799932 systemd-networkd[750]: Enumeration completed Dec 13 13:27:27.800110 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:27.800755 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:27.824223 ignition[694]: Ignition 2.20.0 Dec 13 13:27:27.800761 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:27.824232 ignition[694]: Stage: fetch-offline Dec 13 13:27:27.805972 systemd-networkd[750]: eth0: Link UP Dec 13 13:27:27.824273 ignition[694]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:27.805979 systemd-networkd[750]: eth0: Gained carrier Dec 13 13:27:27.824284 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:27.805992 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:27.824408 ignition[694]: parsed url from cmdline: "" Dec 13 13:27:27.817531 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.84/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 13:27:27.824416 ignition[694]: no config URL provided Dec 13 13:27:27.826628 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:27:27.824425 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:27:27.866033 systemd[1]: Reached target network.target - Network. 
Dec 13 13:27:27.824438 ignition[694]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:27:27.879358 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 13:27:27.824446 ignition[694]: failed to fetch config: resource requires networking Dec 13 13:27:27.926597 unknown[759]: fetched base config from "system" Dec 13 13:27:27.824774 ignition[694]: Ignition finished successfully Dec 13 13:27:27.926613 unknown[759]: fetched base config from "system" Dec 13 13:27:27.915965 ignition[759]: Ignition 2.20.0 Dec 13 13:27:27.926626 unknown[759]: fetched user config from "gcp" Dec 13 13:27:27.915974 ignition[759]: Stage: fetch Dec 13 13:27:27.929948 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 13:27:27.916352 ignition[759]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:27.951288 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:27:27.916371 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:27.996202 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 13:27:27.916497 ignition[759]: parsed url from cmdline: "" Dec 13 13:27:28.009340 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:27:27.916504 ignition[759]: no config URL provided Dec 13 13:27:28.054497 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:27:27.916514 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:27:28.079748 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:27:27.916528 ignition[759]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:27:28.092807 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:27:27.916559 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 13:27:28.111320 systemd[1]: Reached target local-fs.target - Local File Systems. 
Dec 13 13:27:27.920591 ignition[759]: GET result: OK Dec 13 13:27:28.118350 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:27:27.920652 ignition[759]: parsing config with SHA512: f964ce0f7072d289e73df10fec097caa5acfef85e1bd43d93207c22ad74edf52d91778a4ea1343196300a0740bd828ac52cdaa78807f13d69a6e1c5c8b216b0c Dec 13 13:27:28.132340 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:27:27.927971 ignition[759]: fetch: fetch complete Dec 13 13:27:28.155253 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:27:27.927984 ignition[759]: fetch: fetch passed Dec 13 13:27:27.928093 ignition[759]: Ignition finished successfully Dec 13 13:27:27.993741 ignition[765]: Ignition 2.20.0 Dec 13 13:27:27.993750 ignition[765]: Stage: kargs Dec 13 13:27:27.993995 ignition[765]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:27.994008 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:27.994963 ignition[765]: kargs: kargs passed Dec 13 13:27:27.995017 ignition[765]: Ignition finished successfully Dec 13 13:27:28.042206 ignition[771]: Ignition 2.20.0 Dec 13 13:27:28.042216 ignition[771]: Stage: disks Dec 13 13:27:28.042434 ignition[771]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:28.042448 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:28.043512 ignition[771]: disks: disks passed Dec 13 13:27:28.043576 ignition[771]: Ignition finished successfully Dec 13 13:27:28.208126 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Dec 13 13:27:28.389743 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:27:28.396261 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:27:28.551099 kernel: EXT4-fs (sda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none. 
Dec 13 13:27:28.551994 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:27:28.560891 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:28.590200 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:27:28.605254 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:27:28.647205 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (788) Dec 13 13:27:28.647243 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:28.647259 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:28.647274 kernel: BTRFS info (device sda6): using free space tree Dec 13 13:27:28.648014 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:27:28.666797 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 13:27:28.666829 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 13:27:28.648109 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:27:28.648156 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:28.692291 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 13 13:27:28.708509 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:27:28.740264 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Dec 13 13:27:28.858381 initrd-setup-root[812]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:27:28.869222 initrd-setup-root[819]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:27:28.879203 initrd-setup-root[826]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:27:28.889260 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:27:29.029728 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:29.035209 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:27:29.052282 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:27:29.089301 kernel: BTRFS info (device sda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:29.090374 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:27:29.129325 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:27:29.140480 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:27:29.166323 ignition[900]: INFO : Ignition 2.20.0 Dec 13 13:27:29.166323 ignition[900]: INFO : Stage: mount Dec 13 13:27:29.166323 ignition[900]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:29.166323 ignition[900]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 13:27:29.166323 ignition[900]: INFO : mount: mount passed Dec 13 13:27:29.166323 ignition[900]: INFO : Ignition finished successfully Dec 13 13:27:29.163226 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:27:29.195297 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Dec 13 13:27:29.258094 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (912)
Dec 13 13:27:29.275471 kernel: BTRFS info (device sda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:27:29.275527 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:27:29.275552 kernel: BTRFS info (device sda6): using free space tree
Dec 13 13:27:29.296931 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 13:27:29.296989 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 13:27:29.300712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:27:29.353309 ignition[929]: INFO : Ignition 2.20.0
Dec 13 13:27:29.353309 ignition[929]: INFO : Stage: files
Dec 13 13:27:29.368168 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:29.368168 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 13:27:29.368168 ignition[929]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:27:29.368168 ignition[929]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:27:29.368168 ignition[929]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:27:29.368168 ignition[929]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:27:29.368168 ignition[929]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:27:29.368168 ignition[929]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:27:29.365210 unknown[929]: wrote ssh authorized keys file for user: core
Dec 13 13:27:29.470210 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:27:29.470210 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 13:27:29.504187 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 13:27:29.696328 systemd-networkd[750]: eth0: Gained IPv6LL
Dec 13 13:27:29.803792 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 13:27:29.821183 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:27:29.821183 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 13:27:30.330129 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 13:27:31.292270 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 13:27:31.308207 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 13:27:31.538507 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 13:27:31.905609 ignition[929]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 13:27:31.905609 ignition[929]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 13:27:31.945195 ignition[929]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 13:27:31.945195 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:27:31.945195 ignition[929]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:27:31.945195 ignition[929]: INFO : files: files passed
Dec 13 13:27:31.945195 ignition[929]: INFO : Ignition finished successfully
Dec 13 13:27:31.910095 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:27:31.941433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:27:31.966265 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:27:32.021525 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:27:32.157197 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:32.157197 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:32.021646 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:27:32.224298 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:32.044607 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:27:32.059545 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:27:32.089269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:27:32.155680 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:27:32.155801 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:27:32.168426 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:27:32.182336 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:27:32.214385 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:27:32.221375 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:27:32.286671 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:27:32.308417 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:27:32.356630 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:27:32.370459 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:27:32.381516 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:27:32.401601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:27:32.401824 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:27:32.455424 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:27:32.463549 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:27:32.481560 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:27:32.515486 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:27:32.525598 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:27:32.544520 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:27:32.562536 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:27:32.579520 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:27:32.600517 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:27:32.617496 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:27:32.634427 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:27:32.634642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:27:32.675238 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:27:32.675634 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:27:32.693451 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:27:32.693631 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:27:32.712553 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:27:32.712762 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:27:32.751509 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:27:32.751723 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:27:32.759543 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:27:32.759711 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:27:32.832202 ignition[982]: INFO : Ignition 2.20.0
Dec 13 13:27:32.832202 ignition[982]: INFO : Stage: umount
Dec 13 13:27:32.832202 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:32.832202 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 13:27:32.832202 ignition[982]: INFO : umount: umount passed
Dec 13 13:27:32.832202 ignition[982]: INFO : Ignition finished successfully
Dec 13 13:27:32.785429 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:27:32.825367 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:27:32.841340 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:27:32.841557 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:27:32.884430 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:27:32.884613 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:27:32.913826 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:27:32.914917 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:27:32.915034 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:27:32.930860 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:27:32.930975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:27:32.950494 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:27:32.950615 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:27:32.971593 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:27:32.971650 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:27:32.980412 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:27:32.980475 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:27:32.997397 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 13:27:32.997456 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 13:27:33.014396 systemd[1]: Stopped target network.target - Network.
Dec 13 13:27:33.029344 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:27:33.029423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:27:33.044396 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:27:33.062328 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:27:33.066134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:27:33.077329 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:27:33.095332 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:27:33.110360 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:27:33.110418 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:27:33.138346 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:27:33.138406 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:27:33.146417 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:27:33.146492 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:27:33.163375 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:27:33.163434 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:27:33.180375 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:27:33.180434 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:27:33.197577 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:27:33.202116 systemd-networkd[750]: eth0: DHCPv6 lease lost
Dec 13 13:27:33.225335 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:27:33.234839 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:27:33.234968 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:27:33.251890 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:27:33.252288 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:27:33.269814 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:27:33.269867 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:27:33.292166 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:27:33.311140 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:27:33.311262 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:27:33.322277 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:27:33.322354 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:27:33.342225 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:27:33.342307 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:27:33.364226 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:27:33.364309 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:27:33.383382 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:27:33.402655 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:27:33.819171 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:27:33.402816 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:27:33.426378 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:27:33.426525 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:27:33.429399 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:27:33.429455 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:27:33.456399 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:27:33.456476 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:27:33.490371 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:27:33.490601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:27:33.534204 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:27:33.534429 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:33.568337 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:27:33.571330 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:27:33.571400 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:27:33.626384 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 13:27:33.626472 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:27:33.634401 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:27:33.634475 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:27:33.653463 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:27:33.653531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:33.671920 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:27:33.672082 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:27:33.689738 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:27:33.689848 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:27:33.711548 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:27:33.734278 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:27:33.769536 systemd[1]: Switching root.
Dec 13 13:27:34.087174 systemd-journald[184]: Journal stopped
Dec 13 13:27:36.592360 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:27:36.592418 kernel: SELinux: policy capability open_perms=1
Dec 13 13:27:36.592442 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:27:36.592459 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:27:36.592476 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:27:36.592493 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:27:36.592514 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:27:36.592533 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:27:36.592557 kernel: audit: type=1403 audit(1734096454.527:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:27:36.592579 systemd[1]: Successfully loaded SELinux policy in 82.731ms.
Dec 13 13:27:36.592603 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.317ms.
Dec 13 13:27:36.592625 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:27:36.592645 systemd[1]: Detected virtualization google.
Dec 13 13:27:36.592664 systemd[1]: Detected architecture x86-64.
Dec 13 13:27:36.592689 systemd[1]: Detected first boot.
Dec 13 13:27:36.592712 systemd[1]: Initializing machine ID from random generator.
Dec 13 13:27:36.592733 zram_generator::config[1024]: No configuration found.
Dec 13 13:27:36.592754 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:27:36.592774 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:27:36.592798 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:27:36.592818 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:27:36.592838 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:27:36.592859 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:27:36.592879 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:27:36.592901 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:27:36.592920 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:27:36.592946 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:27:36.592967 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:27:36.592987 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:27:36.593008 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:27:36.593029 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:27:36.593050 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:27:36.593104 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:27:36.593126 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:27:36.593153 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:27:36.593182 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:27:36.593204 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:27:36.593224 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:27:36.593245 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:27:36.593265 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:27:36.593293 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:27:36.593315 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:27:36.593337 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:27:36.593362 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:27:36.593384 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:27:36.593406 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:27:36.593428 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:27:36.593450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:27:36.593471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:27:36.593498 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:27:36.593528 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:27:36.593550 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:27:36.593573 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:27:36.593595 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:27:36.593617 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:36.593643 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:27:36.593666 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:27:36.593688 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:27:36.593712 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:27:36.593734 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:27:36.593757 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:27:36.593778 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:27:36.593801 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:27:36.593828 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:27:36.593852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:27:36.593874 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:27:36.593897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:27:36.593919 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:27:36.593941 kernel: fuse: init (API version 7.39)
Dec 13 13:27:36.593962 kernel: ACPI: bus type drm_connector registered
Dec 13 13:27:36.593983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:27:36.594007 kernel: loop: module loaded
Dec 13 13:27:36.594034 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:27:36.594072 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:27:36.594108 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:27:36.594131 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:27:36.594153 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:27:36.594183 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:27:36.594205 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:27:36.594227 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:27:36.594289 systemd-journald[1111]: Collecting audit messages is disabled.
Dec 13 13:27:36.594338 systemd-journald[1111]: Journal started
Dec 13 13:27:36.594384 systemd-journald[1111]: Runtime Journal (/run/log/journal/72131359540d4e4881dff1b962885cbc) is 8.0M, max 148.6M, 140.6M free.
Dec 13 13:27:36.604802 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:27:35.388271 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:27:35.408604 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 13:27:35.409200 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:27:36.635278 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:27:36.635357 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:27:36.641958 systemd[1]: Stopped verity-setup.service.
Dec 13 13:27:36.674528 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:36.685130 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:27:36.695611 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:27:36.705411 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:27:36.716423 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:27:36.727506 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:27:36.737413 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:27:36.748414 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:27:36.759603 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:27:36.771601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:27:36.783497 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:27:36.783723 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:27:36.795452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:27:36.795670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:27:36.807471 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:27:36.807682 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:27:36.817451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:27:36.817662 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:27:36.829434 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:27:36.829644 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:27:36.839430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:27:36.839637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:27:36.849453 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:27:36.859444 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:27:36.870478 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:27:36.881450 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:27:36.904973 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:27:36.927193 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:27:36.942205 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:27:36.952207 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:27:36.952267 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:27:36.963406 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:27:36.980256 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:27:36.997286 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:27:37.007317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:27:37.018304 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:27:37.039782 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:27:37.051219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:27:37.071666 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:27:37.080358 systemd-journald[1111]: Time spent on flushing to /var/log/journal/72131359540d4e4881dff1b962885cbc is 65.476ms for 934 entries.
Dec 13 13:27:37.080358 systemd-journald[1111]: System Journal (/var/log/journal/72131359540d4e4881dff1b962885cbc) is 8.0M, max 584.8M, 576.8M free.
Dec 13 13:27:37.194500 systemd-journald[1111]: Received client request to flush runtime journal.
Dec 13 13:27:37.088489 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:27:37.099880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:27:37.122282 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:27:37.142254 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:27:37.159323 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:27:37.177523 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:27:37.188339 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:27:37.199831 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:27:37.211585 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:27:37.220395 kernel: loop0: detected capacity change from 0 to 141000
Dec 13 13:27:37.229532 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:27:37.242852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:27:37.267117 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:27:37.293108 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:27:37.300922 systemd-tmpfiles[1143]: ACLs are not supported, ignoring.
Dec 13 13:27:37.300965 systemd-tmpfiles[1143]: ACLs are not supported, ignoring.
Dec 13 13:27:37.313800 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 13:27:37.321124 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:27:37.341198 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:27:37.353327 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:27:37.356481 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:27:37.358176 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:27:37.396612 kernel: loop1: detected capacity change from 0 to 52184
Dec 13 13:27:37.469480 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:27:37.489276 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:27:37.509475 kernel: loop2: detected capacity change from 0 to 138184
Dec 13 13:27:37.562379 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Dec 13 13:27:37.562426 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Dec 13 13:27:37.576717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:27:37.606159 kernel: loop3: detected capacity change from 0 to 205544
Dec 13 13:27:37.724134 kernel: loop4: detected capacity change from 0 to 141000
Dec 13 13:27:37.799109 kernel: loop5: detected capacity change from 0 to 52184
Dec 13 13:27:37.908228 kernel: loop6: detected capacity change from 0 to 138184
Dec 13 13:27:38.033326 kernel: loop7: detected capacity change from 0 to 205544
Dec 13 13:27:38.096075 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Dec 13 13:27:38.098160 (sd-merge)[1169]: Merged extensions into '/usr'.
Dec 13 13:27:38.107761 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:27:38.108239 systemd[1]: Reloading...
Dec 13 13:27:38.241098 zram_generator::config[1196]: No configuration found.
Dec 13 13:27:38.385987 ldconfig[1137]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:27:38.521623 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:27:38.738250 systemd[1]: Reloading finished in 629 ms.
Dec 13 13:27:38.769356 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:27:38.781878 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:27:38.802295 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:27:38.821315 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:27:38.840132 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:27:38.840161 systemd[1]: Reloading...
Dec 13 13:27:38.892641 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:27:38.894834 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:27:38.896660 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:27:38.899296 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 13 13:27:38.899419 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Dec 13 13:27:38.914950 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:27:38.914975 systemd-tmpfiles[1237]: Skipping /boot
Dec 13 13:27:38.945093 zram_generator::config[1260]: No configuration found.
Dec 13 13:27:38.956388 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:27:38.956417 systemd-tmpfiles[1237]: Skipping /boot
Dec 13 13:27:39.114933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:27:39.181332 systemd[1]: Reloading finished in 340 ms.
Dec 13 13:27:39.200757 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:27:39.216624 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:27:39.237308 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:27:39.254315 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:27:39.275356 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:27:39.300348 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:27:39.320249 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:27:39.333579 augenrules[1329]: No rules
Dec 13 13:27:39.340521 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:27:39.366656 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:27:39.370933 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:27:39.409214 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:27:39.420603 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:27:39.432726 systemd-udevd[1327]: Using default interface naming scheme 'v255'.
Dec 13 13:27:39.449631 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:39.450528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:27:39.462501 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:27:39.483193 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:27:39.503251 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:27:39.513337 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:27:39.521344 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:27:39.531151 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:39.535313 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:27:39.547914 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:27:39.560770 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:27:39.574544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:27:39.575758 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:27:39.588128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:27:39.588381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:27:39.600182 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:27:39.600615 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:27:39.612489 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:27:39.628827 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:27:39.649411 systemd-resolved[1323]: Positive Trust Anchors:
Dec 13 13:27:39.649431 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:27:39.649501 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:27:39.664837 systemd-resolved[1323]: Defaulting to hostname 'linux'.
Dec 13 13:27:39.671739 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:27:39.688221 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:27:39.699343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:39.708258 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:27:39.718184 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:27:39.727262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:27:39.745459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:27:39.764414 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:27:39.785188 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:27:39.793838 augenrules[1374]: /sbin/augenrules: No change
Dec 13 13:27:39.807093 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1349)
Dec 13 13:27:39.812292 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 13 13:27:39.821325 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:27:39.830283 augenrules[1401]: No rules
Dec 13 13:27:39.834281 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:27:39.841101 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1349)
Dec 13 13:27:39.850218 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:27:39.860205 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:27:39.860248 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:39.861779 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:27:39.870757 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:27:39.872426 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:27:39.882663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:27:39.883384 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:27:39.894619 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:27:39.896222 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:27:39.906691 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:27:39.907685 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:27:39.917125 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 13:27:39.926702 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:27:39.926972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:27:39.950488 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1360)
Dec 13 13:27:39.950613 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:27:39.973473 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Dec 13 13:27:39.979074 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 13 13:27:39.995354 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:27:40.024144 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 13 13:27:40.036258 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Dec 13 13:27:40.043115 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 13:27:40.050206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:27:40.050306 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:27:40.073441 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Dec 13 13:27:40.091768 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:27:40.103087 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Dec 13 13:27:40.117949 kernel: EDAC MC: Ver: 3.0.0
Dec 13 13:27:40.160289 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Dec 13 13:27:40.178162 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:27:40.198873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:27:40.205270 systemd-networkd[1406]: lo: Link UP
Dec 13 13:27:40.205855 systemd-networkd[1406]: lo: Gained carrier
Dec 13 13:27:40.209407 systemd-networkd[1406]: Enumeration completed
Dec 13 13:27:40.209983 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:27:40.209990 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:27:40.211121 systemd-networkd[1406]: eth0: Link UP
Dec 13 13:27:40.211129 systemd-networkd[1406]: eth0: Gained carrier
Dec 13 13:27:40.211153 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:27:40.215050 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:27:40.217430 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:27:40.220192 systemd-networkd[1406]: eth0: DHCPv4 address 10.128.0.84/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 13:27:40.229876 systemd[1]: Reached target network.target - Network.
Dec 13 13:27:40.245123 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:27:40.258868 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:27:40.267537 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:27:40.285792 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:27:40.321878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:40.333628 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:27:40.346122 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:27:40.356180 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:27:40.366364 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:27:40.377213 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:27:40.388371 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:27:40.398306 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:27:40.409175 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:27:40.420164 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:27:40.420229 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:27:40.428182 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:27:40.438378 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:27:40.449900 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:27:40.467921 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:27:40.484303 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:27:40.501366 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:27:40.507356 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:27:40.511451 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:27:40.521205 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:27:40.530251 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:27:40.530304 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:27:40.539238 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:27:40.555277 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 13:27:40.569179 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:27:40.595700 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:27:40.615281 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:27:40.625991 jq[1454]: false
Dec 13 13:27:40.626189 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:27:40.632291 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:27:40.652277 systemd[1]: Started ntpd.service - Network Time Service.
Dec 13 13:27:40.669210 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 13:27:40.689293 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:27:40.707861 extend-filesystems[1455]: Found loop4
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found loop5
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found loop6
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found loop7
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found sda
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found sda1
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found sda2
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found sda3
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found usr
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found sda4
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found sda6
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found sda7
Dec 13 13:27:40.734368 extend-filesystems[1455]: Found sda9
Dec 13 13:27:40.734368 extend-filesystems[1455]: Checking size of /dev/sda9
Dec 13 13:27:40.904365 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Dec 13 13:27:40.904457 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Dec 13 13:27:40.904501 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1352)
Dec 13 13:27:40.904531 coreos-metadata[1452]: Dec 13 13:27:40.716 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Dec 13 13:27:40.904531 coreos-metadata[1452]: Dec 13 13:27:40.717 INFO Fetch successful
Dec 13 13:27:40.904531 coreos-metadata[1452]: Dec 13 13:27:40.717 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Dec 13 13:27:40.904531 coreos-metadata[1452]: Dec 13 13:27:40.721 INFO Fetch successful
Dec 13 13:27:40.904531 coreos-metadata[1452]: Dec 13 13:27:40.721 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Dec 13 13:27:40.904531 coreos-metadata[1452]: Dec 13 13:27:40.723 INFO Fetch successful
Dec 13 13:27:40.904531 coreos-metadata[1452]: Dec 13 13:27:40.723 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Dec 13 13:27:40.904531 coreos-metadata[1452]: Dec 13 13:27:40.731 INFO Fetch successful
Dec 13 13:27:40.714112 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: ntpd 4.2.8p17@1.4004-o Fri Dec 13 11:27:12 UTC 2024 (1): Starting
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: ----------------------------------------------------
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: ntp-4 is maintained by Network Time Foundation,
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: corporation. Support and training for ntp-4 are
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: available at https://www.nwtime.org/support
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: ----------------------------------------------------
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: proto: precision = 0.106 usec (-23)
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: basedate set to 2024-12-01
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: gps base set to 2024-12-01 (week 2343)
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: Listen normally on 3 eth0 10.128.0.84:123
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: Listen normally on 4 lo [::1]:123
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: bind(21) AF_INET6 fe80::4001:aff:fe80:54%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:54%2#123
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: failed to init interface for address fe80::4001:aff:fe80:54%2
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: Listening on routing socket on fd #21 for interface updates
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 13:27:40.905202 ntpd[1459]: 13 Dec 13:27:40 ntpd[1459]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 13:27:40.908761 extend-filesystems[1455]: Resized partition /dev/sda9
Dec 13 13:27:40.764357 ntpd[1459]: ntpd 4.2.8p17@1.4004-o Fri Dec 13 11:27:12 UTC 2024 (1): Starting
Dec 13 13:27:40.735358 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:27:40.921009 extend-filesystems[1481]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:27:40.921009 extend-filesystems[1481]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Dec 13 13:27:40.921009 extend-filesystems[1481]: old_desc_blocks = 1, new_desc_blocks = 2
Dec 13 13:27:40.921009 extend-filesystems[1481]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Dec 13 13:27:40.764411 ntpd[1459]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 13:27:40.750039 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Dec 13 13:27:40.991577 extend-filesystems[1455]: Resized filesystem in /dev/sda9
Dec 13 13:27:40.764427 ntpd[1459]: ----------------------------------------------------
Dec 13 13:27:40.751579 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:27:40.764442 ntpd[1459]: ntp-4 is maintained by Network Time Foundation,
Dec 13 13:27:41.001008 update_engine[1478]: I20241213 13:27:40.878041 1478 main.cc:92] Flatcar Update Engine starting
Dec 13 13:27:41.001008 update_engine[1478]: I20241213 13:27:40.894205 1478 update_check_scheduler.cc:74] Next update check in 4m45s
Dec 13 13:27:40.757995 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:27:40.764455 ntpd[1459]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 13:27:40.810181 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:27:40.764468 ntpd[1459]: corporation. Support and training for ntp-4 are
Dec 13 13:27:41.001996 jq[1484]: true
Dec 13 13:27:40.829374 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:27:40.764483 ntpd[1459]: available at https://www.nwtime.org/support
Dec 13 13:27:40.853143 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:27:40.764497 ntpd[1459]: ----------------------------------------------------
Dec 13 13:27:40.891635 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:27:40.769748 ntpd[1459]: proto: precision = 0.106 usec (-23)
Dec 13 13:27:40.893150 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:27:40.776128 ntpd[1459]: basedate set to 2024-12-01
Dec 13 13:27:40.893602 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:27:40.776153 ntpd[1459]: gps base set to 2024-12-01 (week 2343)
Dec 13 13:27:40.893823 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:27:40.778860 dbus-daemon[1453]: [system] SELinux support is enabled
Dec 13 13:27:40.912567 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:27:40.789627 ntpd[1459]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 13:27:40.914187 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:27:40.789686 ntpd[1459]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 13:27:40.936626 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:27:40.789933 ntpd[1459]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 13:27:40.938252 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:27:40.789984 ntpd[1459]: Listen normally on 3 eth0 10.128.0.84:123
Dec 13 13:27:40.790039 ntpd[1459]: Listen normally on 4 lo [::1]:123
Dec 13 13:27:40.790131 ntpd[1459]: bind(21) AF_INET6 fe80::4001:aff:fe80:54%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 13:27:40.790163 ntpd[1459]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:54%2#123
Dec 13 13:27:40.790188 ntpd[1459]: failed to init interface for address fe80::4001:aff:fe80:54%2
Dec 13 13:27:40.790235 ntpd[1459]: Listening on routing socket on fd #21 for interface updates
Dec 13 13:27:40.793835 dbus-daemon[1453]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1406 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 13:27:40.794343 ntpd[1459]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 13:27:40.794394 ntpd[1459]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 13:27:41.009524 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 13:27:41.031806 jq[1490]: true
Dec 13 13:27:41.033270 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:27:41.044698 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 13:27:41.079270 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:27:41.099757 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 13:27:41.100039 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:27:41.100102 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:27:41.121271 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 13:27:41.132219 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:27:41.132262 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:27:41.135326 tar[1488]: linux-amd64/helm
Dec 13 13:27:41.151904 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:27:41.177937 systemd-logind[1472]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 13:27:41.188346 systemd-logind[1472]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 13:27:41.188389 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 13:27:41.188675 systemd-logind[1472]: New seat seat0.
Dec 13 13:27:41.193722 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:27:41.219430 bash[1523]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:27:41.219769 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:27:41.248423 systemd[1]: Starting sshkeys.service...
Dec 13 13:27:41.314071 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 13:27:41.333374 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 13:27:41.378041 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 13:27:41.380344 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 13:27:41.382771 dbus-daemon[1453]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1512 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 13:27:41.399513 systemd[1]: Starting polkit.service - Authorization Manager... Dec 13 13:27:41.531522 polkitd[1527]: Started polkitd version 121 Dec 13 13:27:41.537733 systemd-networkd[1406]: eth0: Gained IPv6LL Dec 13 13:27:41.549490 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:27:41.557968 coreos-metadata[1526]: Dec 13 13:27:41.557 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 13:27:41.560127 coreos-metadata[1526]: Dec 13 13:27:41.559 INFO Fetch failed with 404: resource not found Dec 13 13:27:41.560127 coreos-metadata[1526]: Dec 13 13:27:41.559 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 13:27:41.562150 systemd[1]: Reached target network-online.target - Network is Online. 
Dec 13 13:27:41.565283 coreos-metadata[1526]: Dec 13 13:27:41.562 INFO Fetch successful Dec 13 13:27:41.565283 coreos-metadata[1526]: Dec 13 13:27:41.562 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 13:27:41.565656 coreos-metadata[1526]: Dec 13 13:27:41.565 INFO Fetch failed with 404: resource not found Dec 13 13:27:41.565656 coreos-metadata[1526]: Dec 13 13:27:41.565 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 13:27:41.567389 coreos-metadata[1526]: Dec 13 13:27:41.567 INFO Fetch failed with 404: resource not found Dec 13 13:27:41.567389 coreos-metadata[1526]: Dec 13 13:27:41.567 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 13:27:41.569232 coreos-metadata[1526]: Dec 13 13:27:41.568 INFO Fetch successful Dec 13 13:27:41.580343 unknown[1526]: wrote ssh authorized keys file for user: core Dec 13 13:27:41.585277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:41.600521 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:27:41.615540 polkitd[1527]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 13:27:41.616764 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Dec 13 13:27:41.618645 polkitd[1527]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 13:27:41.636997 polkitd[1527]: Finished loading, compiling and executing 2 rules Dec 13 13:27:41.642305 dbus-daemon[1453]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 13:27:41.642527 systemd[1]: Started polkit.service - Authorization Manager. 
Dec 13 13:27:41.650156 polkitd[1527]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 13:27:41.678610 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:27:41.689785 init.sh[1545]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 13:27:41.692089 init.sh[1545]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 13:27:41.692517 init.sh[1545]: + /usr/bin/google_instance_setup Dec 13 13:27:41.716927 update-ssh-keys[1547]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:27:41.722531 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 13:27:41.724163 systemd-hostnamed[1512]: Hostname set to (transient) Dec 13 13:27:41.727856 systemd-resolved[1323]: System hostname changed to 'ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal'. Dec 13 13:27:41.740164 systemd[1]: Finished sshkeys.service. Dec 13 13:27:41.791143 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:27:42.113910 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:27:42.173163 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:27:42.192009 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:27:42.217733 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:27:42.223849 containerd[1491]: time="2024-12-13T13:27:42.223741791Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:27:42.244420 systemd[1]: Started sshd@0-10.128.0.84:22-147.75.109.163:52762.service - OpenSSH per-connection server daemon (147.75.109.163:52762). Dec 13 13:27:42.289177 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:27:42.289834 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Dec 13 13:27:42.308439 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.368965138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.371814119Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.371856255Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.371884534Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.372121265Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.372160040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.372273573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.372296848Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.372564717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.372592828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374274 containerd[1491]: time="2024-12-13T13:27:42.372617643Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374808 containerd[1491]: time="2024-12-13T13:27:42.372636230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374808 containerd[1491]: time="2024-12-13T13:27:42.372758642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374808 containerd[1491]: time="2024-12-13T13:27:42.373138649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374808 containerd[1491]: time="2024-12-13T13:27:42.373331901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:42.374808 containerd[1491]: time="2024-12-13T13:27:42.373358253Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:27:42.374808 containerd[1491]: time="2024-12-13T13:27:42.373492990Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 13:27:42.374808 containerd[1491]: time="2024-12-13T13:27:42.373581179Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:27:42.387086 containerd[1491]: time="2024-12-13T13:27:42.385330877Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:27:42.387086 containerd[1491]: time="2024-12-13T13:27:42.385409712Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:27:42.387086 containerd[1491]: time="2024-12-13T13:27:42.385436481Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:27:42.387086 containerd[1491]: time="2024-12-13T13:27:42.385462299Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:27:42.387086 containerd[1491]: time="2024-12-13T13:27:42.385486402Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:27:42.387086 containerd[1491]: time="2024-12-13T13:27:42.385706351Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:27:42.387588 containerd[1491]: time="2024-12-13T13:27:42.387553099Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:27:42.387882 containerd[1491]: time="2024-12-13T13:27:42.387853641Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390105849Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390146800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390173213Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390196593Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390217467Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390240258Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390269319Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390314616Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390338088Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390358365Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390393295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390416754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." 
type=io.containerd.grpc.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390438009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.390693 containerd[1491]: time="2024-12-13T13:27:42.390460104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.391354 containerd[1491]: time="2024-12-13T13:27:42.390479999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.391354 containerd[1491]: time="2024-12-13T13:27:42.390499324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.391354 containerd[1491]: time="2024-12-13T13:27:42.390516891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.391354 containerd[1491]: time="2024-12-13T13:27:42.390544117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.391354 containerd[1491]: time="2024-12-13T13:27:42.390564071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.391354 containerd[1491]: time="2024-12-13T13:27:42.390587936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.391354 containerd[1491]: time="2024-12-13T13:27:42.390608580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.391354 containerd[1491]: time="2024-12-13T13:27:42.390630201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.390650762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.391784603Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.391829224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.391860629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.391880262Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.391971367Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.392000459Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.392119153Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.392142883Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.392160246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.392183314Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.392200201Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:27:42.393091 containerd[1491]: time="2024-12-13T13:27:42.392219238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:27:42.393730 containerd[1491]: time="2024-12-13T13:27:42.392886733Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:27:42.393730 containerd[1491]: time="2024-12-13T13:27:42.392979525Z" level=info msg="Connect containerd service" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.396108629Z" level=info msg="using legacy CRI server" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.396142162Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.396356904Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.397508294Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.398235746Z" level=info msg="Start subscribing containerd event" Dec 13 
13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.398299781Z" level=info msg="Start recovering state" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.398384924Z" level=info msg="Start event monitor" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.398409244Z" level=info msg="Start snapshots syncer" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.398423334Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:27:42.400093 containerd[1491]: time="2024-12-13T13:27:42.398434831Z" level=info msg="Start streaming server" Dec 13 13:27:42.402034 containerd[1491]: time="2024-12-13T13:27:42.400631746Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:27:42.402034 containerd[1491]: time="2024-12-13T13:27:42.400719129Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:27:42.402034 containerd[1491]: time="2024-12-13T13:27:42.400813938Z" level=info msg="containerd successfully booted in 0.180666s" Dec 13 13:27:42.400930 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:27:42.422104 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:27:42.444273 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:27:42.464286 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:27:42.474498 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:27:42.766015 sshd[1574]: Accepted publickey for core from 147.75.109.163 port 52762 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE Dec 13 13:27:42.777656 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:42.799026 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:27:42.817646 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Dec 13 13:27:42.837717 systemd-logind[1472]: New session 1 of user core. Dec 13 13:27:42.872151 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:27:42.884353 instance-setup[1554]: INFO Running google_set_multiqueue. Dec 13 13:27:42.895485 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:27:42.933410 instance-setup[1554]: INFO Set channels for eth0 to 2. Dec 13 13:27:42.936237 (systemd)[1597]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:27:42.949610 instance-setup[1554]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 13:27:42.955590 instance-setup[1554]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 13:27:42.955661 instance-setup[1554]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 13:27:42.963545 instance-setup[1554]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 13:27:42.963624 instance-setup[1554]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Dec 13 13:27:42.966841 instance-setup[1554]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 13:27:42.966909 instance-setup[1554]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Dec 13 13:27:42.968920 instance-setup[1554]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 13:27:42.990940 instance-setup[1554]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 13:27:43.000812 tar[1488]: linux-amd64/LICENSE Dec 13 13:27:43.000812 tar[1488]: linux-amd64/README.md Dec 13 13:27:43.003548 instance-setup[1554]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Dec 13 13:27:43.008844 instance-setup[1554]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 13:27:43.008908 instance-setup[1554]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 13:27:43.035627 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 13:27:43.059658 init.sh[1545]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 13:27:43.187944 systemd[1597]: Queued start job for default target default.target. Dec 13 13:27:43.199280 systemd[1597]: Created slice app.slice - User Application Slice. Dec 13 13:27:43.199327 systemd[1597]: Reached target paths.target - Paths. Dec 13 13:27:43.199354 systemd[1597]: Reached target timers.target - Timers. Dec 13 13:27:43.204247 systemd[1597]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:27:43.233328 systemd[1597]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:27:43.233507 systemd[1597]: Reached target sockets.target - Sockets. Dec 13 13:27:43.233534 systemd[1597]: Reached target basic.target - Basic System. Dec 13 13:27:43.233594 systemd[1597]: Reached target default.target - Main User Target. Dec 13 13:27:43.233643 systemd[1597]: Startup finished in 274ms. Dec 13 13:27:43.235424 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:27:43.252267 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 13 13:27:43.290825 startup-script[1627]: INFO Starting startup scripts. Dec 13 13:27:43.297408 startup-script[1627]: INFO No startup scripts found in metadata. Dec 13 13:27:43.297518 startup-script[1627]: INFO Finished running startup scripts. Dec 13 13:27:43.318678 init.sh[1545]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 13:27:43.318678 init.sh[1545]: + daemon_pids=() Dec 13 13:27:43.318856 init.sh[1545]: + for d in accounts clock_skew network Dec 13 13:27:43.319073 init.sh[1545]: + daemon_pids+=($!) Dec 13 13:27:43.319162 init.sh[1545]: + for d in accounts clock_skew network Dec 13 13:27:43.319379 init.sh[1633]: + /usr/bin/google_accounts_daemon Dec 13 13:27:43.319755 init.sh[1545]: + daemon_pids+=($!) Dec 13 13:27:43.319755 init.sh[1545]: + for d in accounts clock_skew network Dec 13 13:27:43.319838 init.sh[1545]: + daemon_pids+=($!) Dec 13 13:27:43.319900 init.sh[1545]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 13:27:43.319900 init.sh[1545]: + /usr/bin/systemd-notify --ready Dec 13 13:27:43.320674 init.sh[1634]: + /usr/bin/google_clock_skew_daemon Dec 13 13:27:43.321465 init.sh[1635]: + /usr/bin/google_network_daemon Dec 13 13:27:43.342393 systemd[1]: Started oem-gce.service - GCE Linux Agent. Dec 13 13:27:43.357295 init.sh[1545]: + wait -n 1633 1634 1635 Dec 13 13:27:43.593976 systemd[1]: Started sshd@1-10.128.0.84:22-147.75.109.163:52768.service - OpenSSH per-connection server daemon (147.75.109.163:52768). 
Dec 13 13:27:43.771288 ntpd[1459]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:54%2]:123 Dec 13 13:27:43.784943 ntpd[1459]: 13 Dec 13:27:43 ntpd[1459]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:54%2]:123 Dec 13 13:27:44.207358 sshd[1639]: Accepted publickey for core from 147.75.109.163 port 52768 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE Dec 13 13:27:44.211081 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:44.214202 groupadd[1647]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 13:27:44.228631 systemd-logind[1472]: New session 2 of user core. Dec 13 13:27:44.232931 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:27:44.233683 groupadd[1647]: group added to /etc/gshadow: name=google-sudoers Dec 13 13:27:44.257323 google-clock-skew[1634]: INFO Starting Google Clock Skew daemon. Dec 13 13:27:44.270597 google-clock-skew[1634]: INFO Clock drift token has changed: 0. Dec 13 13:27:44.294442 google-networking[1635]: INFO Starting Google Networking daemon. Dec 13 13:27:44.325188 groupadd[1647]: new group: name=google-sudoers, GID=1000 Dec 13 13:27:44.355812 google-accounts[1633]: INFO Starting Google Accounts daemon. Dec 13 13:27:44.367604 google-accounts[1633]: WARNING OS Login not installed. Dec 13 13:27:44.369475 google-accounts[1633]: INFO Creating a new user account for 0. Dec 13 13:27:44.377030 init.sh[1658]: useradd: invalid user name '0': use --badname to ignore Dec 13 13:27:44.377265 google-accounts[1633]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 13:27:44.436628 sshd[1649]: Connection closed by 147.75.109.163 port 52768 Dec 13 13:27:44.437477 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:44.443553 systemd[1]: sshd@1-10.128.0.84:22-147.75.109.163:52768.service: Deactivated successfully. 
Dec 13 13:27:44.446221 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:27:44.447320 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:27:44.448994 systemd-logind[1472]: Removed session 2. Dec 13 13:27:44.000515 systemd-resolved[1323]: Clock change detected. Flushing caches. Dec 13 13:27:44.017375 systemd-journald[1111]: Time jumped backwards, rotating. Dec 13 13:27:44.000858 google-clock-skew[1634]: INFO Synced system time with hardware clock. Dec 13 13:27:44.037134 systemd[1]: Started sshd@2-10.128.0.84:22-147.75.109.163:52770.service - OpenSSH per-connection server daemon (147.75.109.163:52770). Dec 13 13:27:44.055876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:44.070371 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:27:44.074331 (kubelet)[1670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:27:44.081689 systemd[1]: Startup finished in 1.438s (kernel) + 10.667s (initrd) + 10.095s (userspace) = 22.201s. Dec 13 13:27:44.106286 agetty[1585]: failed to open credentials directory Dec 13 13:27:44.107153 agetty[1584]: failed to open credentials directory Dec 13 13:27:44.352548 sshd[1669]: Accepted publickey for core from 147.75.109.163 port 52770 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE Dec 13 13:27:44.353641 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:44.362984 systemd-logind[1472]: New session 3 of user core. Dec 13 13:27:44.368865 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 13 13:27:44.571415 sshd[1681]: Connection closed by 147.75.109.163 port 52770 Dec 13 13:27:44.572213 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:44.578419 systemd[1]: sshd@2-10.128.0.84:22-147.75.109.163:52770.service: Deactivated successfully. Dec 13 13:27:44.581944 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:27:44.583156 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:27:44.584841 systemd-logind[1472]: Removed session 3. Dec 13 13:27:44.948918 kubelet[1670]: E1213 13:27:44.948830 1670 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:27:44.950890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:27:44.951129 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:27:44.951694 systemd[1]: kubelet.service: Consumed 1.502s CPU time. Dec 13 13:27:54.633033 systemd[1]: Started sshd@3-10.128.0.84:22-147.75.109.163:52652.service - OpenSSH per-connection server daemon (147.75.109.163:52652). Dec 13 13:27:54.930591 sshd[1688]: Accepted publickey for core from 147.75.109.163 port 52652 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE Dec 13 13:27:54.932475 sshd-session[1688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:54.938618 systemd-logind[1472]: New session 4 of user core. Dec 13 13:27:54.950907 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:27:54.952297 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:27:54.955352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 13:27:55.149054 sshd[1691]: Connection closed by 147.75.109.163 port 52652
Dec 13 13:27:55.149903 sshd-session[1688]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:55.157747 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit.
Dec 13 13:27:55.160424 systemd[1]: sshd@3-10.128.0.84:22-147.75.109.163:52652.service: Deactivated successfully.
Dec 13 13:27:55.165785 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 13:27:55.167950 systemd-logind[1472]: Removed session 4.
Dec 13 13:27:55.204809 systemd[1]: Started sshd@4-10.128.0.84:22-147.75.109.163:52654.service - OpenSSH per-connection server daemon (147.75.109.163:52654).
Dec 13 13:27:55.280081 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:27:55.298219 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:27:55.380136 kubelet[1705]: E1213 13:27:55.380048 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:27:55.384730 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:27:55.384998 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:27:55.531452 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 52654 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:27:55.533214 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:55.539636 systemd-logind[1472]: New session 5 of user core.
Dec 13 13:27:55.549903 systemd[1]: Started session-5.scope - Session 5 of User core.
Dec 13 13:27:55.741095 sshd[1713]: Connection closed by 147.75.109.163 port 52654
Dec 13 13:27:55.741959 sshd-session[1698]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:55.746203 systemd[1]: sshd@4-10.128.0.84:22-147.75.109.163:52654.service: Deactivated successfully.
Dec 13 13:27:55.748551 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 13:27:55.750401 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit.
Dec 13 13:27:55.751781 systemd-logind[1472]: Removed session 5.
Dec 13 13:27:55.794032 systemd[1]: Started sshd@5-10.128.0.84:22-147.75.109.163:52658.service - OpenSSH per-connection server daemon (147.75.109.163:52658).
Dec 13 13:27:56.095241 sshd[1718]: Accepted publickey for core from 147.75.109.163 port 52658 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:27:56.097030 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:56.103362 systemd-logind[1472]: New session 6 of user core.
Dec 13 13:27:56.112889 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 13:27:56.308817 sshd[1720]: Connection closed by 147.75.109.163 port 52658
Dec 13 13:27:56.309593 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:56.313718 systemd[1]: sshd@5-10.128.0.84:22-147.75.109.163:52658.service: Deactivated successfully.
Dec 13 13:27:56.316079 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 13:27:56.317904 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit.
Dec 13 13:27:56.319350 systemd-logind[1472]: Removed session 6.
Dec 13 13:27:56.368028 systemd[1]: Started sshd@6-10.128.0.84:22-147.75.109.163:49462.service - OpenSSH per-connection server daemon (147.75.109.163:49462).
Dec 13 13:27:56.658717 sshd[1725]: Accepted publickey for core from 147.75.109.163 port 49462 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:27:56.660392 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:56.665762 systemd-logind[1472]: New session 7 of user core.
Dec 13 13:27:56.672858 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 13:27:56.852538 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Dec 13 13:27:56.853049 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:27:56.871446 sudo[1728]: pam_unix(sudo:session): session closed for user root
Dec 13 13:27:56.914028 sshd[1727]: Connection closed by 147.75.109.163 port 49462
Dec 13 13:27:56.915597 sshd-session[1725]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:56.920924 systemd[1]: sshd@6-10.128.0.84:22-147.75.109.163:49462.service: Deactivated successfully.
Dec 13 13:27:56.923343 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 13:27:56.924339 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit.
Dec 13 13:27:56.926091 systemd-logind[1472]: Removed session 7.
Dec 13 13:27:56.971084 systemd[1]: Started sshd@7-10.128.0.84:22-147.75.109.163:49476.service - OpenSSH per-connection server daemon (147.75.109.163:49476).
Dec 13 13:27:57.259424 sshd[1733]: Accepted publickey for core from 147.75.109.163 port 49476 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:27:57.260927 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:57.267730 systemd-logind[1472]: New session 8 of user core.
Dec 13 13:27:57.274917 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 13:27:57.436201 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Dec 13 13:27:57.436723 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:27:57.441787 sudo[1737]: pam_unix(sudo:session): session closed for user root
Dec 13 13:27:57.454925 sudo[1736]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Dec 13 13:27:57.455408 sudo[1736]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:27:57.472132 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:27:57.511976 augenrules[1759]: No rules
Dec 13 13:27:57.514062 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:27:57.514362 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:27:57.515944 sudo[1736]: pam_unix(sudo:session): session closed for user root
Dec 13 13:27:57.558155 sshd[1735]: Connection closed by 147.75.109.163 port 49476
Dec 13 13:27:57.559022 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Dec 13 13:27:57.563353 systemd[1]: sshd@7-10.128.0.84:22-147.75.109.163:49476.service: Deactivated successfully.
Dec 13 13:27:57.565825 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 13:27:57.567744 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit.
Dec 13 13:27:57.569213 systemd-logind[1472]: Removed session 8.
Dec 13 13:27:57.612717 systemd[1]: Started sshd@8-10.128.0.84:22-147.75.109.163:49484.service - OpenSSH per-connection server daemon (147.75.109.163:49484).
Dec 13 13:27:57.903087 sshd[1767]: Accepted publickey for core from 147.75.109.163 port 49484 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:27:57.904882 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:27:57.910748 systemd-logind[1472]: New session 9 of user core.
Dec 13 13:27:57.919862 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 13:27:58.080564 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 13:27:58.081193 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Dec 13 13:27:58.609072 systemd[1]: Starting docker.service - Docker Application Container Engine...
Dec 13 13:27:58.613764 (dockerd)[1789]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Dec 13 13:27:59.089690 dockerd[1789]: time="2024-12-13T13:27:59.089581883Z" level=info msg="Starting up"
Dec 13 13:27:59.251856 dockerd[1789]: time="2024-12-13T13:27:59.251805342Z" level=info msg="Loading containers: start."
Dec 13 13:27:59.471711 kernel: Initializing XFRM netlink socket
Dec 13 13:27:59.598090 systemd-networkd[1406]: docker0: Link UP
Dec 13 13:27:59.637709 dockerd[1789]: time="2024-12-13T13:27:59.637647465Z" level=info msg="Loading containers: done."
Dec 13 13:27:59.662438 dockerd[1789]: time="2024-12-13T13:27:59.662373662Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 13:27:59.662613 dockerd[1789]: time="2024-12-13T13:27:59.662494395Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Dec 13 13:27:59.662712 dockerd[1789]: time="2024-12-13T13:27:59.662636099Z" level=info msg="Daemon has completed initialization"
Dec 13 13:27:59.703942 dockerd[1789]: time="2024-12-13T13:27:59.703808442Z" level=info msg="API listen on /run/docker.sock"
Dec 13 13:27:59.704283 systemd[1]: Started docker.service - Docker Application Container Engine.
Dec 13 13:28:00.626604 containerd[1491]: time="2024-12-13T13:28:00.626503094Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 13:28:01.229916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979451655.mount: Deactivated successfully.
Dec 13 13:28:03.068100 containerd[1491]: time="2024-12-13T13:28:03.067927682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:03.074154 containerd[1491]: time="2024-12-13T13:28:03.071655612Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27982111"
Dec 13 13:28:03.077513 containerd[1491]: time="2024-12-13T13:28:03.077302688Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:03.092023 containerd[1491]: time="2024-12-13T13:28:03.091643533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:03.098768 containerd[1491]: time="2024-12-13T13:28:03.098468729Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.471797782s"
Dec 13 13:28:03.100662 containerd[1491]: time="2024-12-13T13:28:03.099836229Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 13:28:03.109032 containerd[1491]: time="2024-12-13T13:28:03.108865915Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 13:28:04.805153 containerd[1491]: time="2024-12-13T13:28:04.805080500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:04.806786 containerd[1491]: time="2024-12-13T13:28:04.806712032Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24704091"
Dec 13 13:28:04.807861 containerd[1491]: time="2024-12-13T13:28:04.807821029Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:04.811589 containerd[1491]: time="2024-12-13T13:28:04.811528278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:04.813207 containerd[1491]: time="2024-12-13T13:28:04.813023503Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.703660915s"
Dec 13 13:28:04.813207 containerd[1491]: time="2024-12-13T13:28:04.813070147Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 13:28:04.814839 containerd[1491]: time="2024-12-13T13:28:04.814808889Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 13:28:05.513935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 13:28:05.521992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:28:05.806901 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:05.817194 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:28:05.919249 kubelet[2047]: E1213 13:28:05.919190 2047 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:28:05.924871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:28:05.925201 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:28:06.347091 containerd[1491]: time="2024-12-13T13:28:06.347017447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:06.348649 containerd[1491]: time="2024-12-13T13:28:06.348572342Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18653983"
Dec 13 13:28:06.349845 containerd[1491]: time="2024-12-13T13:28:06.349776049Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:06.353466 containerd[1491]: time="2024-12-13T13:28:06.353406025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:06.355120 containerd[1491]: time="2024-12-13T13:28:06.354910857Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.540061883s"
Dec 13 13:28:06.355120 containerd[1491]: time="2024-12-13T13:28:06.354956117Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 13:28:06.355836 containerd[1491]: time="2024-12-13T13:28:06.355805711Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 13:28:07.872562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413507115.mount: Deactivated successfully.
Dec 13 13:28:08.823702 containerd[1491]: time="2024-12-13T13:28:08.823609054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:08.825525 containerd[1491]: time="2024-12-13T13:28:08.825437792Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30232138"
Dec 13 13:28:08.827618 containerd[1491]: time="2024-12-13T13:28:08.827408498Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:08.831272 containerd[1491]: time="2024-12-13T13:28:08.831230137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:08.832724 containerd[1491]: time="2024-12-13T13:28:08.832451200Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.47648315s"
Dec 13 13:28:08.832724 containerd[1491]: time="2024-12-13T13:28:08.832495824Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 13:28:08.833982 containerd[1491]: time="2024-12-13T13:28:08.833345605Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 13:28:09.287643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642961515.mount: Deactivated successfully.
Dec 13 13:28:10.615251 containerd[1491]: time="2024-12-13T13:28:10.615174561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:10.616924 containerd[1491]: time="2024-12-13T13:28:10.616857268Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Dec 13 13:28:10.618170 containerd[1491]: time="2024-12-13T13:28:10.618093844Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:10.622189 containerd[1491]: time="2024-12-13T13:28:10.621862451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:10.623490 containerd[1491]: time="2024-12-13T13:28:10.623296426Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.789913443s"
Dec 13 13:28:10.623490 containerd[1491]: time="2024-12-13T13:28:10.623344989Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 13:28:10.624548 containerd[1491]: time="2024-12-13T13:28:10.624324451Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 13:28:11.229281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816492691.mount: Deactivated successfully.
Dec 13 13:28:11.247712 containerd[1491]: time="2024-12-13T13:28:11.246918651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:11.249496 containerd[1491]: time="2024-12-13T13:28:11.249393872Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Dec 13 13:28:11.251733 containerd[1491]: time="2024-12-13T13:28:11.250734880Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:11.257727 containerd[1491]: time="2024-12-13T13:28:11.257141867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:11.260322 containerd[1491]: time="2024-12-13T13:28:11.260253142Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 635.884214ms"
Dec 13 13:28:11.260611 containerd[1491]: time="2024-12-13T13:28:11.260563639Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 13:28:11.263200 containerd[1491]: time="2024-12-13T13:28:11.263151837Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 13:28:11.310430 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 13:28:11.743150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689091406.mount: Deactivated successfully.
Dec 13 13:28:14.829042 containerd[1491]: time="2024-12-13T13:28:14.828952463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:14.830873 containerd[1491]: time="2024-12-13T13:28:14.830801506Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556"
Dec 13 13:28:14.832808 containerd[1491]: time="2024-12-13T13:28:14.832732490Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:14.837357 containerd[1491]: time="2024-12-13T13:28:14.837263219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:14.840697 containerd[1491]: time="2024-12-13T13:28:14.838974228Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.575762864s"
Dec 13 13:28:14.840697 containerd[1491]: time="2024-12-13T13:28:14.839022212Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 13:28:16.014424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 13:28:16.027819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:28:16.337989 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:16.349447 (kubelet)[2195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 13:28:16.427491 kubelet[2195]: E1213 13:28:16.427377 2195 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 13:28:16.437567 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 13:28:16.438150 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 13:28:19.027615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:19.036041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:28:19.091201 systemd[1]: Reloading requested from client PID 2210 ('systemctl') (unit session-9.scope)...
Dec 13 13:28:19.091225 systemd[1]: Reloading...
Dec 13 13:28:19.279035 zram_generator::config[2253]: No configuration found.
Dec 13 13:28:19.421715 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:28:19.535488 systemd[1]: Reloading finished in 443 ms.
Dec 13 13:28:19.617260 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 13:28:19.617402 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 13:28:19.617784 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:19.623088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 13:28:19.836460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 13:28:19.849197 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 13:28:19.908697 kubelet[2302]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:28:19.908697 kubelet[2302]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 13:28:19.909202 kubelet[2302]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 13:28:19.909202 kubelet[2302]: I1213 13:28:19.908944 2302 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 13:28:20.196763 kubelet[2302]: I1213 13:28:20.196564 2302 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 13:28:20.196763 kubelet[2302]: I1213 13:28:20.196604 2302 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 13:28:20.197623 kubelet[2302]: I1213 13:28:20.197555 2302 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 13:28:20.250601 kubelet[2302]: I1213 13:28:20.250536 2302 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 13:28:20.253112 kubelet[2302]: E1213 13:28:20.252658 2302 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError"
Dec 13 13:28:20.268022 kubelet[2302]: E1213 13:28:20.267930 2302 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 13:28:20.268022 kubelet[2302]: I1213 13:28:20.268000 2302 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 13:28:20.278698 kubelet[2302]: I1213 13:28:20.276642 2302 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 13:28:20.279904 kubelet[2302]: I1213 13:28:20.279819 2302 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 13:28:20.280475 kubelet[2302]: I1213 13:28:20.280377 2302 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 13:28:20.281167 kubelet[2302]: I1213 13:28:20.280479 2302 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 13:28:20.281504 kubelet[2302]: I1213 13:28:20.281297 2302 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 13:28:20.281504 kubelet[2302]: I1213 13:28:20.281320 2302 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 13:28:20.284430 kubelet[2302]: I1213 13:28:20.284224 2302 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 13:28:20.290034 kubelet[2302]: I1213 13:28:20.290008 2302 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 13:28:20.290156 kubelet[2302]: I1213 13:28:20.290142 2302 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 13:28:20.290387 kubelet[2302]: I1213 13:28:20.290371 2302 kubelet.go:314] "Adding apiserver pod source"
Dec 13 13:28:20.290582 kubelet[2302]: I1213 13:28:20.290565 2302 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 13:28:20.299901 kubelet[2302]: W1213 13:28:20.299780 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Dec 13 13:28:20.300015 kubelet[2302]: E1213 13:28:20.299900 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError"
Dec 13 13:28:20.300195 kubelet[2302]: I1213 13:28:20.300156 2302 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Dec 13 13:28:20.303469 kubelet[2302]: I1213 13:28:20.303441 2302 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 13:28:20.305027 kubelet[2302]: W1213 13:28:20.304999 2302 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 13:28:20.308200 kubelet[2302]: I1213 13:28:20.307332 2302 server.go:1269] "Started kubelet"
Dec 13 13:28:20.313082 kubelet[2302]: I1213 13:28:20.313040 2302 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 13:28:20.316155 kubelet[2302]: W1213 13:28:20.316086 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Dec 13 13:28:20.316278 kubelet[2302]: E1213 13:28:20.316238 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError"
Dec 13 13:28:20.317465 kubelet[2302]: I1213 13:28:20.317422 2302 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 13:28:20.320090 kubelet[2302]: I1213 13:28:20.320062 2302 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 13:28:20.320874 kubelet[2302]: I1213 13:28:20.320835 2302 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 13:28:20.321408 kubelet[2302]: E1213 13:28:20.321282 2302 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" not found"
Dec 13 13:28:20.323519 kubelet[2302]: I1213 13:28:20.322647 2302 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 13:28:20.323519 kubelet[2302]: I1213 13:28:20.323479 2302 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 13:28:20.323837 kubelet[2302]: I1213 13:28:20.323813 2302 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 13:28:20.323971 kubelet[2302]: I1213 13:28:20.323653 2302 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 13:28:20.325343 kubelet[2302]: W1213 13:28:20.324902 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused
Dec 13 13:28:20.325343 kubelet[2302]: E1213 13:28:20.324975 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError"
Dec 13 13:28:20.325343 kubelet[2302]: E1213 13:28:20.325062 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="200ms"
Dec 13 13:28:20.329069 kubelet[2302]: I1213 13:28:20.327602 2302 factory.go:221] Registration of the systemd container factory successfully
Dec 13 13:28:20.329069 kubelet[2302]: I1213 13:28:20.327752 2302 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 13:28:20.330567 kubelet[2302]:
E1213 13:28:20.330542 2302 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:28:20.331780 kubelet[2302]: I1213 13:28:20.330913 2302 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 13:28:20.337122 kubelet[2302]: I1213 13:28:20.337086 2302 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:28:20.355771 kubelet[2302]: E1213 13:28:20.349576 2302 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.84:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal.1810bf9453102cdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,UID:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 13:28:20.307274975 +0000 UTC m=+0.453178577,LastTimestamp:2024-12-13 13:28:20.307274975 +0000 UTC m=+0.453178577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,}" Dec 13 13:28:20.374112 kubelet[2302]: I1213 13:28:20.373912 2302 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:28:20.379950 kubelet[2302]: I1213 13:28:20.379919 2302 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:28:20.380228 kubelet[2302]: I1213 13:28:20.380208 2302 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:28:20.380699 kubelet[2302]: I1213 13:28:20.380326 2302 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 13:28:20.380699 kubelet[2302]: E1213 13:28:20.380397 2302 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:28:20.385646 kubelet[2302]: W1213 13:28:20.385564 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Dec 13 13:28:20.385889 kubelet[2302]: E1213 13:28:20.385825 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:20.409814 kubelet[2302]: I1213 13:28:20.409784 2302 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:28:20.409979 kubelet[2302]: I1213 13:28:20.409877 2302 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:28:20.410040 kubelet[2302]: I1213 13:28:20.410012 2302 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:20.412665 kubelet[2302]: I1213 13:28:20.412628 2302 policy_none.go:49] "None policy: Start" Dec 13 13:28:20.413491 kubelet[2302]: I1213 13:28:20.413469 2302 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:28:20.413617 kubelet[2302]: I1213 13:28:20.413562 2302 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:28:20.422191 kubelet[2302]: E1213 13:28:20.421950 2302 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" not found" Dec 13 13:28:20.426431 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:28:20.443726 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:28:20.448605 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:28:20.464032 kubelet[2302]: I1213 13:28:20.463998 2302 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:28:20.464570 kubelet[2302]: I1213 13:28:20.464540 2302 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 13:28:20.464708 kubelet[2302]: I1213 13:28:20.464593 2302 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:28:20.466290 kubelet[2302]: I1213 13:28:20.465461 2302 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:28:20.469449 kubelet[2302]: E1213 13:28:20.469405 2302 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" not found" Dec 13 13:28:20.508225 systemd[1]: Created slice kubepods-burstable-poded3350ed8117f2ccece5ad11435c1dbb.slice - libcontainer container kubepods-burstable-poded3350ed8117f2ccece5ad11435c1dbb.slice. Dec 13 13:28:20.520449 systemd[1]: Created slice kubepods-burstable-pod08fb0482fb89b99f77cdcb8426dac7e2.slice - libcontainer container kubepods-burstable-pod08fb0482fb89b99f77cdcb8426dac7e2.slice. 
Dec 13 13:28:20.526210 kubelet[2302]: E1213 13:28:20.526145 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="400ms" Dec 13 13:28:20.526724 systemd[1]: Created slice kubepods-burstable-podb9ed87279dd2d78b40df2a76add472c6.slice - libcontainer container kubepods-burstable-podb9ed87279dd2d78b40df2a76add472c6.slice. Dec 13 13:28:20.572643 kubelet[2302]: I1213 13:28:20.572588 2302 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.573235 kubelet[2302]: E1213 13:28:20.573179 2302 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 10.128.0.84:6443: connect: connection refused" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.626958 kubelet[2302]: I1213 13:28:20.626893 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed3350ed8117f2ccece5ad11435c1dbb-ca-certs\") pod \"kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"ed3350ed8117f2ccece5ad11435c1dbb\") " pod="kube-system/kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.626958 kubelet[2302]: I1213 13:28:20.626952 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed3350ed8117f2ccece5ad11435c1dbb-k8s-certs\") pod \"kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"ed3350ed8117f2ccece5ad11435c1dbb\") " 
pod="kube-system/kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.627205 kubelet[2302]: I1213 13:28:20.626994 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-ca-certs\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.627205 kubelet[2302]: I1213 13:28:20.627023 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.627205 kubelet[2302]: I1213 13:28:20.627090 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-k8s-certs\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.627205 kubelet[2302]: I1213 13:28:20.627118 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9ed87279dd2d78b40df2a76add472c6-kubeconfig\") pod \"kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"b9ed87279dd2d78b40df2a76add472c6\") " 
pod="kube-system/kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.627476 kubelet[2302]: I1213 13:28:20.627146 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed3350ed8117f2ccece5ad11435c1dbb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"ed3350ed8117f2ccece5ad11435c1dbb\") " pod="kube-system/kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.627476 kubelet[2302]: I1213 13:28:20.627176 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-kubeconfig\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.627476 kubelet[2302]: I1213 13:28:20.627206 2302 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.778806 kubelet[2302]: I1213 13:28:20.778755 2302 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.779289 kubelet[2302]: E1213 13:28:20.779238 2302 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 10.128.0.84:6443: connect: connection refused" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:20.818090 containerd[1491]: time="2024-12-13T13:28:20.818014662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,Uid:ed3350ed8117f2ccece5ad11435c1dbb,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:20.826001 containerd[1491]: time="2024-12-13T13:28:20.825953753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,Uid:08fb0482fb89b99f77cdcb8426dac7e2,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:20.831297 containerd[1491]: time="2024-12-13T13:28:20.831232876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,Uid:b9ed87279dd2d78b40df2a76add472c6,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:20.926761 kubelet[2302]: E1213 13:28:20.926687 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="800ms" Dec 13 13:28:21.183628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910530757.mount: Deactivated successfully. 
Dec 13 13:28:21.186378 kubelet[2302]: I1213 13:28:21.186312 2302 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:21.187629 kubelet[2302]: E1213 13:28:21.187542 2302 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 10.128.0.84:6443: connect: connection refused" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:21.194660 containerd[1491]: time="2024-12-13T13:28:21.194598185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:21.199455 containerd[1491]: time="2024-12-13T13:28:21.199288236Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Dec 13 13:28:21.200833 containerd[1491]: time="2024-12-13T13:28:21.200778989Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:21.202402 containerd[1491]: time="2024-12-13T13:28:21.202341220Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:21.204405 containerd[1491]: time="2024-12-13T13:28:21.204351648Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:21.205650 containerd[1491]: time="2024-12-13T13:28:21.205588393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:28:21.206711 
containerd[1491]: time="2024-12-13T13:28:21.206590737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:28:21.208064 containerd[1491]: time="2024-12-13T13:28:21.207956689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:28:21.212295 containerd[1491]: time="2024-12-13T13:28:21.210848559Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 384.767767ms" Dec 13 13:28:21.214037 containerd[1491]: time="2024-12-13T13:28:21.213699722Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 395.359679ms" Dec 13 13:28:21.218514 containerd[1491]: time="2024-12-13T13:28:21.218455468Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 387.119637ms" Dec 13 13:28:21.264214 kubelet[2302]: W1213 13:28:21.264036 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial 
tcp 10.128.0.84:6443: connect: connection refused Dec 13 13:28:21.264214 kubelet[2302]: E1213 13:28:21.264149 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:21.432622 containerd[1491]: time="2024-12-13T13:28:21.432498467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:21.433039 containerd[1491]: time="2024-12-13T13:28:21.432630801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:21.433039 containerd[1491]: time="2024-12-13T13:28:21.432726621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:21.433383 containerd[1491]: time="2024-12-13T13:28:21.433008495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:21.436717 containerd[1491]: time="2024-12-13T13:28:21.434642765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:21.436717 containerd[1491]: time="2024-12-13T13:28:21.434781875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:21.436717 containerd[1491]: time="2024-12-13T13:28:21.434819730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:21.436717 containerd[1491]: time="2024-12-13T13:28:21.434958788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:21.437059 containerd[1491]: time="2024-12-13T13:28:21.435458043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:21.439996 containerd[1491]: time="2024-12-13T13:28:21.438337696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:21.439996 containerd[1491]: time="2024-12-13T13:28:21.438412315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:21.439996 containerd[1491]: time="2024-12-13T13:28:21.438560319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:21.491926 systemd[1]: Started cri-containerd-045ee890ca6e5ee322eed21e9280dfcf92b8960c99b3122c46ccfe72138e36ed.scope - libcontainer container 045ee890ca6e5ee322eed21e9280dfcf92b8960c99b3122c46ccfe72138e36ed. Dec 13 13:28:21.509312 systemd[1]: Started cri-containerd-4c76075ba6ff62951c362b731455c32a6a416fbacf3d70c89363f9c7c254b0de.scope - libcontainer container 4c76075ba6ff62951c362b731455c32a6a416fbacf3d70c89363f9c7c254b0de. Dec 13 13:28:21.519007 systemd[1]: Started cri-containerd-c21205404cf9130105139e34edffa2b2c2599e3506493d51f60f1ee0c2f0b4a3.scope - libcontainer container c21205404cf9130105139e34edffa2b2c2599e3506493d51f60f1ee0c2f0b4a3. 
Dec 13 13:28:21.627914 containerd[1491]: time="2024-12-13T13:28:21.625255615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,Uid:08fb0482fb89b99f77cdcb8426dac7e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c76075ba6ff62951c362b731455c32a6a416fbacf3d70c89363f9c7c254b0de\"" Dec 13 13:28:21.635166 kubelet[2302]: E1213 13:28:21.634974 2302 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flat" Dec 13 13:28:21.637699 kubelet[2302]: W1213 13:28:21.637098 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Dec 13 13:28:21.637699 kubelet[2302]: E1213 13:28:21.637362 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:21.639378 containerd[1491]: time="2024-12-13T13:28:21.639329916Z" level=info msg="CreateContainer within sandbox \"4c76075ba6ff62951c362b731455c32a6a416fbacf3d70c89363f9c7c254b0de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 13:28:21.651893 containerd[1491]: time="2024-12-13T13:28:21.651840542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,Uid:ed3350ed8117f2ccece5ad11435c1dbb,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"045ee890ca6e5ee322eed21e9280dfcf92b8960c99b3122c46ccfe72138e36ed\"" Dec 13 13:28:21.653496 kubelet[2302]: E1213 13:28:21.653457 2302 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-21291" Dec 13 13:28:21.655122 containerd[1491]: time="2024-12-13T13:28:21.655085394Z" level=info msg="CreateContainer within sandbox \"045ee890ca6e5ee322eed21e9280dfcf92b8960c99b3122c46ccfe72138e36ed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 13:28:21.662536 kubelet[2302]: W1213 13:28:21.662426 2302 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Dec 13 13:28:21.662726 kubelet[2302]: E1213 13:28:21.662660 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:21.663981 containerd[1491]: time="2024-12-13T13:28:21.663944881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,Uid:b9ed87279dd2d78b40df2a76add472c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c21205404cf9130105139e34edffa2b2c2599e3506493d51f60f1ee0c2f0b4a3\"" Dec 13 13:28:21.665440 kubelet[2302]: E1213 13:28:21.665409 2302 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" hostnameMaxLen=63 
truncatedHostname="kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-21291" Dec 13 13:28:21.668653 containerd[1491]: time="2024-12-13T13:28:21.668504181Z" level=info msg="CreateContainer within sandbox \"c21205404cf9130105139e34edffa2b2c2599e3506493d51f60f1ee0c2f0b4a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 13:28:21.676019 containerd[1491]: time="2024-12-13T13:28:21.675977218Z" level=info msg="CreateContainer within sandbox \"4c76075ba6ff62951c362b731455c32a6a416fbacf3d70c89363f9c7c254b0de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"97a09be4cf0b9c8bc9ccdb22d7e4437e6209507626684a7a5d2fb7da9f99b7f4\"" Dec 13 13:28:21.678496 containerd[1491]: time="2024-12-13T13:28:21.676813208Z" level=info msg="StartContainer for \"97a09be4cf0b9c8bc9ccdb22d7e4437e6209507626684a7a5d2fb7da9f99b7f4\"" Dec 13 13:28:21.686106 containerd[1491]: time="2024-12-13T13:28:21.686018958Z" level=info msg="CreateContainer within sandbox \"045ee890ca6e5ee322eed21e9280dfcf92b8960c99b3122c46ccfe72138e36ed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4358d38f14d3aad8d21eea81de8a91c324ea05e5e247418d66e8ac7fc8aba211\"" Dec 13 13:28:21.689757 containerd[1491]: time="2024-12-13T13:28:21.688142936Z" level=info msg="StartContainer for \"4358d38f14d3aad8d21eea81de8a91c324ea05e5e247418d66e8ac7fc8aba211\"" Dec 13 13:28:21.697085 containerd[1491]: time="2024-12-13T13:28:21.697036169Z" level=info msg="CreateContainer within sandbox \"c21205404cf9130105139e34edffa2b2c2599e3506493d51f60f1ee0c2f0b4a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bf30deb54a0cd8c63af0c4e326d352bb6352ecbf61207d11bdba667e60469b82\"" Dec 13 13:28:21.697597 containerd[1491]: time="2024-12-13T13:28:21.697564341Z" level=info msg="StartContainer for \"bf30deb54a0cd8c63af0c4e326d352bb6352ecbf61207d11bdba667e60469b82\"" Dec 13 13:28:21.705735 kubelet[2302]: W1213 13:28:21.705562 2302 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.84:6443: connect: connection refused Dec 13 13:28:21.705735 kubelet[2302]: E1213 13:28:21.705659 2302 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.84:6443: connect: connection refused" logger="UnhandledError" Dec 13 13:28:21.728070 kubelet[2302]: E1213 13:28:21.728018 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.84:6443: connect: connection refused" interval="1.6s" Dec 13 13:28:21.730908 systemd[1]: Started cri-containerd-97a09be4cf0b9c8bc9ccdb22d7e4437e6209507626684a7a5d2fb7da9f99b7f4.scope - libcontainer container 97a09be4cf0b9c8bc9ccdb22d7e4437e6209507626684a7a5d2fb7da9f99b7f4. Dec 13 13:28:21.787878 systemd[1]: Started cri-containerd-4358d38f14d3aad8d21eea81de8a91c324ea05e5e247418d66e8ac7fc8aba211.scope - libcontainer container 4358d38f14d3aad8d21eea81de8a91c324ea05e5e247418d66e8ac7fc8aba211. Dec 13 13:28:21.790368 systemd[1]: Started cri-containerd-bf30deb54a0cd8c63af0c4e326d352bb6352ecbf61207d11bdba667e60469b82.scope - libcontainer container bf30deb54a0cd8c63af0c4e326d352bb6352ecbf61207d11bdba667e60469b82. 
Dec 13 13:28:21.846227 containerd[1491]: time="2024-12-13T13:28:21.846172359Z" level=info msg="StartContainer for \"97a09be4cf0b9c8bc9ccdb22d7e4437e6209507626684a7a5d2fb7da9f99b7f4\" returns successfully" Dec 13 13:28:21.951712 containerd[1491]: time="2024-12-13T13:28:21.951491365Z" level=info msg="StartContainer for \"4358d38f14d3aad8d21eea81de8a91c324ea05e5e247418d66e8ac7fc8aba211\" returns successfully" Dec 13 13:28:21.991182 containerd[1491]: time="2024-12-13T13:28:21.991087972Z" level=info msg="StartContainer for \"bf30deb54a0cd8c63af0c4e326d352bb6352ecbf61207d11bdba667e60469b82\" returns successfully" Dec 13 13:28:21.997305 kubelet[2302]: I1213 13:28:21.997077 2302 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:21.998564 kubelet[2302]: E1213 13:28:21.998436 2302 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.84:6443/api/v1/nodes\": dial tcp 10.128.0.84:6443: connect: connection refused" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:23.617458 kubelet[2302]: I1213 13:28:23.616980 2302 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:25.887721 update_engine[1478]: I20241213 13:28:25.884281 1478 update_attempter.cc:509] Updating boot flags... 
Dec 13 13:28:26.250715 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2582) Dec 13 13:28:26.312862 kubelet[2302]: I1213 13:28:26.312470 2302 apiserver.go:52] "Watching apiserver" Dec 13 13:28:26.499934 kubelet[2302]: E1213 13:28:26.499694 2302 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" not found" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:26.526920 kubelet[2302]: I1213 13:28:26.526789 2302 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 13:28:26.709754 kubelet[2302]: I1213 13:28:26.706956 2302 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:26.735736 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2582) Dec 13 13:28:26.758316 kubelet[2302]: E1213 13:28:26.758162 2302 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal.1810bf9453102cdf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,UID:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 13:28:20.307274975 +0000 UTC m=+0.453178577,LastTimestamp:2024-12-13 13:28:20.307274975 +0000 UTC m=+0.453178577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,}" Dec 13 13:28:26.844978 kubelet[2302]: E1213 13:28:26.842030 2302 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal.1810bf945472a7d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,UID:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 13:28:20.330506198 +0000 UTC m=+0.476409805,LastTimestamp:2024-12-13 13:28:20.330506198 +0000 UTC m=+0.476409805,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,}" Dec 13 13:28:26.971018 kubelet[2302]: E1213 13:28:26.966160 2302 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal.1810bf94591d2100 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,UID:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 13:28:20.4087872 +0000 UTC m=+0.554690798,LastTimestamp:2024-12-13 13:28:20.4087872 +0000 UTC m=+0.554690798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal,}" Dec 13 13:28:27.151723 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2582) Dec 13 13:28:28.082021 kubelet[2302]: W1213 13:28:28.081944 2302 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 13:28:29.367487 systemd[1]: Reloading requested from client PID 2596 ('systemctl') (unit session-9.scope)... Dec 13 13:28:29.367511 systemd[1]: Reloading... Dec 13 13:28:29.533724 zram_generator::config[2639]: No configuration found. Dec 13 13:28:29.683069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:28:29.817552 systemd[1]: Reloading finished in 449 ms. Dec 13 13:28:29.874468 kubelet[2302]: I1213 13:28:29.873934 2302 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:28:29.874109 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:29.885408 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:28:29.885793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:29.885880 systemd[1]: kubelet.service: Consumed 1.285s CPU time, 119.2M memory peak, 0B memory swap peak. 
Dec 13 13:28:29.892028 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:30.155518 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:30.169819 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:28:30.247717 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:30.247717 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:28:30.247717 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:30.247717 kubelet[2684]: I1213 13:28:30.247521 2684 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:28:30.259660 kubelet[2684]: I1213 13:28:30.259177 2684 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 13:28:30.259660 kubelet[2684]: I1213 13:28:30.259210 2684 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:28:30.260203 kubelet[2684]: I1213 13:28:30.259798 2684 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 13:28:30.261624 kubelet[2684]: I1213 13:28:30.261588 2684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 13:28:30.266831 kubelet[2684]: I1213 13:28:30.266790 2684 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:28:30.272513 kubelet[2684]: E1213 13:28:30.272472 2684 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 13:28:30.272642 kubelet[2684]: I1213 13:28:30.272553 2684 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 13:28:30.277061 kubelet[2684]: I1213 13:28:30.277020 2684 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 13:28:30.278707 kubelet[2684]: I1213 13:28:30.277218 2684 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 13:28:30.278707 kubelet[2684]: I1213 13:28:30.277420 2684 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:28:30.278707 kubelet[2684]: I1213 13:28:30.277450 2684 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 13:28:30.279010 kubelet[2684]: I1213 13:28:30.277902 2684 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:28:30.279010 kubelet[2684]: I1213 13:28:30.277921 2684 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 13:28:30.279010 kubelet[2684]: I1213 13:28:30.277978 2684 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:30.279010 kubelet[2684]: I1213 
13:28:30.278283 2684 kubelet.go:408] "Attempting to sync node with API server" Dec 13 13:28:30.279010 kubelet[2684]: I1213 13:28:30.278302 2684 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:28:30.279010 kubelet[2684]: I1213 13:28:30.278354 2684 kubelet.go:314] "Adding apiserver pod source" Dec 13 13:28:30.279010 kubelet[2684]: I1213 13:28:30.278384 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:28:30.285257 kubelet[2684]: I1213 13:28:30.283657 2684 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:28:30.287933 kubelet[2684]: I1213 13:28:30.286728 2684 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:28:30.290648 kubelet[2684]: I1213 13:28:30.290624 2684 server.go:1269] "Started kubelet" Dec 13 13:28:30.292461 kubelet[2684]: I1213 13:28:30.292425 2684 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:28:30.294121 kubelet[2684]: I1213 13:28:30.294100 2684 server.go:460] "Adding debug handlers to kubelet server" Dec 13 13:28:30.298174 kubelet[2684]: I1213 13:28:30.298118 2684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:28:30.299289 kubelet[2684]: I1213 13:28:30.299116 2684 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:28:30.303180 kubelet[2684]: I1213 13:28:30.302639 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:28:30.303518 kubelet[2684]: I1213 13:28:30.303403 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 13:28:30.307434 kubelet[2684]: E1213 13:28:30.303190 2684 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:28:30.309318 kubelet[2684]: E1213 13:28:30.309066 2684 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" not found" Dec 13 13:28:30.309318 kubelet[2684]: I1213 13:28:30.309125 2684 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 13:28:30.309711 kubelet[2684]: I1213 13:28:30.309349 2684 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 13:28:30.309711 kubelet[2684]: I1213 13:28:30.309551 2684 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:28:30.310687 kubelet[2684]: I1213 13:28:30.310424 2684 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:28:30.310687 kubelet[2684]: I1213 13:28:30.310543 2684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:28:30.316370 kubelet[2684]: I1213 13:28:30.314943 2684 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:28:30.336216 kubelet[2684]: I1213 13:28:30.335473 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:28:30.338799 kubelet[2684]: I1213 13:28:30.338760 2684 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:28:30.338799 kubelet[2684]: I1213 13:28:30.338789 2684 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:28:30.339037 kubelet[2684]: I1213 13:28:30.338813 2684 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 13:28:30.339037 kubelet[2684]: E1213 13:28:30.338875 2684 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:28:30.426017 sudo[2714]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 13:28:30.427983 sudo[2714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Dec 13 13:28:30.442048 kubelet[2684]: E1213 13:28:30.442010 2684 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 13:28:30.458582 kubelet[2684]: I1213 13:28:30.458540 2684 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:28:30.458582 kubelet[2684]: I1213 13:28:30.458589 2684 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:28:30.458802 kubelet[2684]: I1213 13:28:30.458618 2684 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:30.459034 kubelet[2684]: I1213 13:28:30.458975 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 13:28:30.459107 kubelet[2684]: I1213 13:28:30.459024 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 13:28:30.459107 kubelet[2684]: I1213 13:28:30.459056 2684 policy_none.go:49] "None policy: Start" Dec 13 13:28:30.460446 kubelet[2684]: I1213 13:28:30.460419 2684 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:28:30.460586 kubelet[2684]: I1213 13:28:30.460468 2684 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:28:30.461793 kubelet[2684]: I1213 13:28:30.460722 2684 state_mem.go:75] "Updated machine memory state" Dec 13 
13:28:30.470720 kubelet[2684]: I1213 13:28:30.469765 2684 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:28:30.470720 kubelet[2684]: I1213 13:28:30.469991 2684 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 13:28:30.470720 kubelet[2684]: I1213 13:28:30.470009 2684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:28:30.470944 kubelet[2684]: I1213 13:28:30.470900 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:28:30.604756 kubelet[2684]: I1213 13:28:30.604711 2684 kubelet_node_status.go:72] "Attempting to register node" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.619702 kubelet[2684]: I1213 13:28:30.619638 2684 kubelet_node_status.go:111] "Node was previously registered" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.619899 kubelet[2684]: I1213 13:28:30.619802 2684 kubelet_node_status.go:75] "Successfully registered node" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.657696 kubelet[2684]: W1213 13:28:30.657360 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 13:28:30.660000 kubelet[2684]: W1213 13:28:30.659586 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 13:28:30.660000 kubelet[2684]: E1213 13:28:30.659739 2684 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" already exists" 
pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.663036 kubelet[2684]: W1213 13:28:30.662517 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 13:28:30.717867 kubelet[2684]: I1213 13:28:30.716987 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b9ed87279dd2d78b40df2a76add472c6-kubeconfig\") pod \"kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"b9ed87279dd2d78b40df2a76add472c6\") " pod="kube-system/kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.717867 kubelet[2684]: I1213 13:28:30.717048 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed3350ed8117f2ccece5ad11435c1dbb-k8s-certs\") pod \"kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"ed3350ed8117f2ccece5ad11435c1dbb\") " pod="kube-system/kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.717867 kubelet[2684]: I1213 13:28:30.717085 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-ca-certs\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.717867 kubelet[2684]: I1213 13:28:30.717116 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/ed3350ed8117f2ccece5ad11435c1dbb-ca-certs\") pod \"kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"ed3350ed8117f2ccece5ad11435c1dbb\") " pod="kube-system/kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.718199 kubelet[2684]: I1213 13:28:30.717149 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed3350ed8117f2ccece5ad11435c1dbb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"ed3350ed8117f2ccece5ad11435c1dbb\") " pod="kube-system/kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.718199 kubelet[2684]: I1213 13:28:30.717179 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-flexvolume-dir\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.718199 kubelet[2684]: I1213 13:28:30.717208 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-k8s-certs\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.718199 kubelet[2684]: I1213 13:28:30.717240 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-kubeconfig\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:30.718479 kubelet[2684]: I1213 13:28:30.717283 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/08fb0482fb89b99f77cdcb8426dac7e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" (UID: \"08fb0482fb89b99f77cdcb8426dac7e2\") " pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:31.283824 sudo[2714]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:31.297238 kubelet[2684]: I1213 13:28:31.297197 2684 apiserver.go:52] "Watching apiserver" Dec 13 13:28:31.309527 kubelet[2684]: I1213 13:28:31.309491 2684 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 13:28:31.414469 kubelet[2684]: W1213 13:28:31.414431 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 13:28:31.414646 kubelet[2684]: E1213 13:28:31.414508 2684 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:31.414800 kubelet[2684]: W1213 13:28:31.414763 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 
63 characters must not contain dots] Dec 13 13:28:31.414891 kubelet[2684]: E1213 13:28:31.414845 2684 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" Dec 13 13:28:31.468297 kubelet[2684]: I1213 13:28:31.468043 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" podStartSLOduration=3.46792975 podStartE2EDuration="3.46792975s" podCreationTimestamp="2024-12-13 13:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:31.453061651 +0000 UTC m=+1.276540733" watchObservedRunningTime="2024-12-13 13:28:31.46792975 +0000 UTC m=+1.291408823" Dec 13 13:28:31.497542 kubelet[2684]: I1213 13:28:31.497104 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" podStartSLOduration=1.497082945 podStartE2EDuration="1.497082945s" podCreationTimestamp="2024-12-13 13:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:31.469001319 +0000 UTC m=+1.292480402" watchObservedRunningTime="2024-12-13 13:28:31.497082945 +0000 UTC m=+1.320562028" Dec 13 13:28:31.512576 kubelet[2684]: I1213 13:28:31.512457 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" podStartSLOduration=1.512363312 podStartE2EDuration="1.512363312s" podCreationTimestamp="2024-12-13 13:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:31.497399022 +0000 UTC m=+1.320878108" watchObservedRunningTime="2024-12-13 13:28:31.512363312 +0000 UTC m=+1.335842385" Dec 13 13:28:33.280459 sudo[1770]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:33.326519 sshd[1769]: Connection closed by 147.75.109.163 port 49484 Dec 13 13:28:33.330046 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:33.341563 systemd[1]: sshd@8-10.128.0.84:22-147.75.109.163:49484.service: Deactivated successfully. Dec 13 13:28:33.345408 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:28:33.345706 systemd[1]: session-9.scope: Consumed 7.456s CPU time, 149.9M memory peak, 0B memory swap peak. Dec 13 13:28:33.348090 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:28:33.353502 systemd-logind[1472]: Removed session 9. Dec 13 13:28:33.690501 kubelet[2684]: I1213 13:28:33.690351 2684 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 13:28:33.691790 containerd[1491]: time="2024-12-13T13:28:33.691726418Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:28:33.692570 kubelet[2684]: I1213 13:28:33.692034 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 13:28:34.150469 systemd[1]: Created slice kubepods-besteffort-podd757f221_b3db_4a3e_8546_a9c4ccabb3a9.slice - libcontainer container kubepods-besteffort-podd757f221_b3db_4a3e_8546_a9c4ccabb3a9.slice. Dec 13 13:28:34.171092 systemd[1]: Created slice kubepods-burstable-podec217df5_e1f4_4c1b_bcdb_592ea88c86bb.slice - libcontainer container kubepods-burstable-podec217df5_e1f4_4c1b_bcdb_592ea88c86bb.slice. 
Dec 13 13:28:34.240877 kubelet[2684]: I1213 13:28:34.240830 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d757f221-b3db-4a3e-8546-a9c4ccabb3a9-xtables-lock\") pod \"kube-proxy-cvzfr\" (UID: \"d757f221-b3db-4a3e-8546-a9c4ccabb3a9\") " pod="kube-system/kube-proxy-cvzfr" Dec 13 13:28:34.241106 kubelet[2684]: I1213 13:28:34.240888 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cni-path\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241106 kubelet[2684]: I1213 13:28:34.240920 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-etc-cni-netd\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241106 kubelet[2684]: I1213 13:28:34.240945 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-cgroup\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241106 kubelet[2684]: I1213 13:28:34.240988 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-host-proc-sys-net\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241106 kubelet[2684]: I1213 13:28:34.241034 2684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cd4z\" (UniqueName: \"kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-kube-api-access-7cd4z\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241106 kubelet[2684]: I1213 13:28:34.241065 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-run\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241469 kubelet[2684]: I1213 13:28:34.241088 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-hostproc\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241469 kubelet[2684]: I1213 13:28:34.241180 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-hubble-tls\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241469 kubelet[2684]: I1213 13:28:34.241209 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d757f221-b3db-4a3e-8546-a9c4ccabb3a9-lib-modules\") pod \"kube-proxy-cvzfr\" (UID: \"d757f221-b3db-4a3e-8546-a9c4ccabb3a9\") " pod="kube-system/kube-proxy-cvzfr" Dec 13 13:28:34.241469 kubelet[2684]: I1213 13:28:34.241239 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-clustermesh-secrets\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241469 kubelet[2684]: I1213 13:28:34.241298 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-xtables-lock\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241469 kubelet[2684]: I1213 13:28:34.241326 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-config-path\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241786 kubelet[2684]: I1213 13:28:34.241353 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-bpf-maps\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241786 kubelet[2684]: I1213 13:28:34.241382 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-lib-modules\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.241786 kubelet[2684]: I1213 13:28:34.241408 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d757f221-b3db-4a3e-8546-a9c4ccabb3a9-kube-proxy\") pod \"kube-proxy-cvzfr\" (UID: 
\"d757f221-b3db-4a3e-8546-a9c4ccabb3a9\") " pod="kube-system/kube-proxy-cvzfr" Dec 13 13:28:34.241786 kubelet[2684]: I1213 13:28:34.241435 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5whjf\" (UniqueName: \"kubernetes.io/projected/d757f221-b3db-4a3e-8546-a9c4ccabb3a9-kube-api-access-5whjf\") pod \"kube-proxy-cvzfr\" (UID: \"d757f221-b3db-4a3e-8546-a9c4ccabb3a9\") " pod="kube-system/kube-proxy-cvzfr" Dec 13 13:28:34.241786 kubelet[2684]: I1213 13:28:34.241463 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-host-proc-sys-kernel\") pod \"cilium-2b5qq\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") " pod="kube-system/cilium-2b5qq" Dec 13 13:28:34.377841 kubelet[2684]: E1213 13:28:34.371982 2684 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:28:34.377841 kubelet[2684]: E1213 13:28:34.372068 2684 projected.go:194] Error preparing data for projected volume kube-api-access-5whjf for pod kube-system/kube-proxy-cvzfr: configmap "kube-root-ca.crt" not found Dec 13 13:28:34.377841 kubelet[2684]: E1213 13:28:34.372240 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d757f221-b3db-4a3e-8546-a9c4ccabb3a9-kube-api-access-5whjf podName:d757f221-b3db-4a3e-8546-a9c4ccabb3a9 nodeName:}" failed. No retries permitted until 2024-12-13 13:28:34.872176958 +0000 UTC m=+4.695656030 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5whjf" (UniqueName: "kubernetes.io/projected/d757f221-b3db-4a3e-8546-a9c4ccabb3a9-kube-api-access-5whjf") pod "kube-proxy-cvzfr" (UID: "d757f221-b3db-4a3e-8546-a9c4ccabb3a9") : configmap "kube-root-ca.crt" not found Dec 13 13:28:34.410774 kubelet[2684]: E1213 13:28:34.410193 2684 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Dec 13 13:28:34.410774 kubelet[2684]: E1213 13:28:34.410262 2684 projected.go:194] Error preparing data for projected volume kube-api-access-7cd4z for pod kube-system/cilium-2b5qq: configmap "kube-root-ca.crt" not found Dec 13 13:28:34.410774 kubelet[2684]: E1213 13:28:34.410393 2684 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-kube-api-access-7cd4z podName:ec217df5-e1f4-4c1b-bcdb-592ea88c86bb nodeName:}" failed. No retries permitted until 2024-12-13 13:28:34.910360321 +0000 UTC m=+4.733839406 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7cd4z" (UniqueName: "kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-kube-api-access-7cd4z") pod "cilium-2b5qq" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb") : configmap "kube-root-ca.crt" not found Dec 13 13:28:34.839844 systemd[1]: Created slice kubepods-besteffort-podea20f771_60c4_426c_a541_764e4dcc998d.slice - libcontainer container kubepods-besteffort-podea20f771_60c4_426c_a541_764e4dcc998d.slice. 
Dec 13 13:28:34.868599 kubelet[2684]: I1213 13:28:34.868361 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea20f771-60c4-426c-a541-764e4dcc998d-cilium-config-path\") pod \"cilium-operator-5d85765b45-xbf92\" (UID: \"ea20f771-60c4-426c-a541-764e4dcc998d\") " pod="kube-system/cilium-operator-5d85765b45-xbf92" Dec 13 13:28:34.868599 kubelet[2684]: I1213 13:28:34.868436 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2d9v\" (UniqueName: \"kubernetes.io/projected/ea20f771-60c4-426c-a541-764e4dcc998d-kube-api-access-k2d9v\") pod \"cilium-operator-5d85765b45-xbf92\" (UID: \"ea20f771-60c4-426c-a541-764e4dcc998d\") " pod="kube-system/cilium-operator-5d85765b45-xbf92" Dec 13 13:28:35.065739 containerd[1491]: time="2024-12-13T13:28:35.065470695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvzfr,Uid:d757f221-b3db-4a3e-8546-a9c4ccabb3a9,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:35.076339 containerd[1491]: time="2024-12-13T13:28:35.075792595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2b5qq,Uid:ec217df5-e1f4-4c1b-bcdb-592ea88c86bb,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:35.109766 containerd[1491]: time="2024-12-13T13:28:35.108410622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:35.109766 containerd[1491]: time="2024-12-13T13:28:35.108485751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:35.109766 containerd[1491]: time="2024-12-13T13:28:35.108505361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.109766 containerd[1491]: time="2024-12-13T13:28:35.108628623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.135562 containerd[1491]: time="2024-12-13T13:28:35.135430280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:35.135562 containerd[1491]: time="2024-12-13T13:28:35.135516747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:35.136851 containerd[1491]: time="2024-12-13T13:28:35.135543890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.137865 containerd[1491]: time="2024-12-13T13:28:35.137704547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.146958 containerd[1491]: time="2024-12-13T13:28:35.146866105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xbf92,Uid:ea20f771-60c4-426c-a541-764e4dcc998d,Namespace:kube-system,Attempt:0,}" Dec 13 13:28:35.161907 systemd[1]: Started cri-containerd-5576cb25810a2957c185ee6189752927f165c9404feddb297b4be635fb04d782.scope - libcontainer container 5576cb25810a2957c185ee6189752927f165c9404feddb297b4be635fb04d782. Dec 13 13:28:35.192931 systemd[1]: Started cri-containerd-8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822.scope - libcontainer container 8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822. Dec 13 13:28:35.319705 containerd[1491]: time="2024-12-13T13:28:35.318042277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:35.319705 containerd[1491]: time="2024-12-13T13:28:35.318159165Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:35.319705 containerd[1491]: time="2024-12-13T13:28:35.318209063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.319705 containerd[1491]: time="2024-12-13T13:28:35.318402593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:35.336506 containerd[1491]: time="2024-12-13T13:28:35.336458588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2b5qq,Uid:ec217df5-e1f4-4c1b-bcdb-592ea88c86bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\"" Dec 13 13:28:35.341522 containerd[1491]: time="2024-12-13T13:28:35.339663259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvzfr,Uid:d757f221-b3db-4a3e-8546-a9c4ccabb3a9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5576cb25810a2957c185ee6189752927f165c9404feddb297b4be635fb04d782\"" Dec 13 13:28:35.342734 containerd[1491]: time="2024-12-13T13:28:35.342642224Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 13:28:35.346839 containerd[1491]: time="2024-12-13T13:28:35.346805043Z" level=info msg="CreateContainer within sandbox \"5576cb25810a2957c185ee6189752927f165c9404feddb297b4be635fb04d782\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:28:35.434825 containerd[1491]: time="2024-12-13T13:28:35.433275081Z" level=info msg="CreateContainer within sandbox \"5576cb25810a2957c185ee6189752927f165c9404feddb297b4be635fb04d782\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3a2b917bf3ae908ad5d1035931201e5cda9190d98158ed7d98dc437f75eba89f\"" Dec 13 13:28:35.454787 containerd[1491]: time="2024-12-13T13:28:35.445866144Z" level=info msg="StartContainer for \"3a2b917bf3ae908ad5d1035931201e5cda9190d98158ed7d98dc437f75eba89f\"" Dec 13 13:28:35.475905 systemd[1]: Started cri-containerd-c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1.scope - libcontainer container c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1. Dec 13 13:28:35.611821 systemd[1]: Started cri-containerd-3a2b917bf3ae908ad5d1035931201e5cda9190d98158ed7d98dc437f75eba89f.scope - libcontainer container 3a2b917bf3ae908ad5d1035931201e5cda9190d98158ed7d98dc437f75eba89f. Dec 13 13:28:35.732786 containerd[1491]: time="2024-12-13T13:28:35.731999918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-xbf92,Uid:ea20f771-60c4-426c-a541-764e4dcc998d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\"" Dec 13 13:28:35.740844 containerd[1491]: time="2024-12-13T13:28:35.740493496Z" level=info msg="StartContainer for \"3a2b917bf3ae908ad5d1035931201e5cda9190d98158ed7d98dc437f75eba89f\" returns successfully" Dec 13 13:28:36.690997 kubelet[2684]: I1213 13:28:36.689288 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvzfr" podStartSLOduration=2.689259508 podStartE2EDuration="2.689259508s" podCreationTimestamp="2024-12-13 13:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:28:36.532794962 +0000 UTC m=+6.356274070" watchObservedRunningTime="2024-12-13 13:28:36.689259508 +0000 UTC m=+6.512738594" Dec 13 13:28:43.256616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656063100.mount: Deactivated successfully. 
Dec 13 13:28:47.555730 containerd[1491]: time="2024-12-13T13:28:47.554234039Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:47.560362 containerd[1491]: time="2024-12-13T13:28:47.557818490Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735367" Dec 13 13:28:47.563733 containerd[1491]: time="2024-12-13T13:28:47.561577911Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:47.572372 containerd[1491]: time="2024-12-13T13:28:47.572254797Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 12.22921013s" Dec 13 13:28:47.573026 containerd[1491]: time="2024-12-13T13:28:47.572498646Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 13:28:47.581755 containerd[1491]: time="2024-12-13T13:28:47.579952783Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 13:28:47.588173 containerd[1491]: time="2024-12-13T13:28:47.588072533Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:28:47.650734 containerd[1491]: time="2024-12-13T13:28:47.648759325Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\"" Dec 13 13:28:47.662508 containerd[1491]: time="2024-12-13T13:28:47.662442482Z" level=info msg="StartContainer for \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\"" Dec 13 13:28:47.838330 systemd[1]: run-containerd-runc-k8s.io-10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618-runc.Dgtcrz.mount: Deactivated successfully. Dec 13 13:28:47.854237 systemd[1]: Started cri-containerd-10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618.scope - libcontainer container 10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618. Dec 13 13:28:47.994240 containerd[1491]: time="2024-12-13T13:28:47.994052790Z" level=info msg="StartContainer for \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\" returns successfully" Dec 13 13:28:48.030090 systemd[1]: cri-containerd-10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618.scope: Deactivated successfully. Dec 13 13:28:48.637235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618-rootfs.mount: Deactivated successfully. 
Dec 13 13:28:49.918921 containerd[1491]: time="2024-12-13T13:28:49.918610326Z" level=info msg="shim disconnected" id=10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618 namespace=k8s.io Dec 13 13:28:49.918921 containerd[1491]: time="2024-12-13T13:28:49.918965161Z" level=warning msg="cleaning up after shim disconnected" id=10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618 namespace=k8s.io Dec 13 13:28:49.918921 containerd[1491]: time="2024-12-13T13:28:49.919011650Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:50.666022 containerd[1491]: time="2024-12-13T13:28:50.665959592Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:28:50.700743 containerd[1491]: time="2024-12-13T13:28:50.699647459Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\"" Dec 13 13:28:50.700489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759286546.mount: Deactivated successfully. Dec 13 13:28:50.707423 containerd[1491]: time="2024-12-13T13:28:50.704926605Z" level=info msg="StartContainer for \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\"" Dec 13 13:28:50.794870 systemd[1]: Started cri-containerd-475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46.scope - libcontainer container 475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46. Dec 13 13:28:50.841069 containerd[1491]: time="2024-12-13T13:28:50.839819507Z" level=info msg="StartContainer for \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\" returns successfully" Dec 13 13:28:50.895752 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Dec 13 13:28:50.898470 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:28:50.900494 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:28:50.916541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:28:50.918927 systemd[1]: cri-containerd-475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46.scope: Deactivated successfully. Dec 13 13:28:50.991128 containerd[1491]: time="2024-12-13T13:28:50.990066713Z" level=info msg="shim disconnected" id=475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46 namespace=k8s.io Dec 13 13:28:50.991128 containerd[1491]: time="2024-12-13T13:28:50.990230313Z" level=warning msg="cleaning up after shim disconnected" id=475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46 namespace=k8s.io Dec 13 13:28:50.991128 containerd[1491]: time="2024-12-13T13:28:50.990257138Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:51.074789 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:28:51.682707 containerd[1491]: time="2024-12-13T13:28:51.681488041Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:28:51.697628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46-rootfs.mount: Deactivated successfully. 
Dec 13 13:28:51.786579 containerd[1491]: time="2024-12-13T13:28:51.786457799Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\"" Dec 13 13:28:51.795741 containerd[1491]: time="2024-12-13T13:28:51.794519631Z" level=info msg="StartContainer for \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\"" Dec 13 13:28:51.800377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609234523.mount: Deactivated successfully. Dec 13 13:28:51.985216 systemd[1]: Started cri-containerd-20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee.scope - libcontainer container 20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee. Dec 13 13:28:52.224162 containerd[1491]: time="2024-12-13T13:28:52.221030252Z" level=info msg="StartContainer for \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\" returns successfully" Dec 13 13:28:52.250546 systemd[1]: cri-containerd-20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee.scope: Deactivated successfully. 
Dec 13 13:28:52.412327 containerd[1491]: time="2024-12-13T13:28:52.412219119Z" level=info msg="shim disconnected" id=20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee namespace=k8s.io Dec 13 13:28:52.412327 containerd[1491]: time="2024-12-13T13:28:52.412329690Z" level=warning msg="cleaning up after shim disconnected" id=20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee namespace=k8s.io Dec 13 13:28:52.413776 containerd[1491]: time="2024-12-13T13:28:52.412357338Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:52.699359 containerd[1491]: time="2024-12-13T13:28:52.697196299Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:28:52.697880 systemd[1]: run-containerd-runc-k8s.io-20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee-runc.F8T8zk.mount: Deactivated successfully. Dec 13 13:28:52.698289 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee-rootfs.mount: Deactivated successfully. Dec 13 13:28:52.775870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350471637.mount: Deactivated successfully. Dec 13 13:28:52.828395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3705688094.mount: Deactivated successfully. 
Dec 13 13:28:52.841001 containerd[1491]: time="2024-12-13T13:28:52.839326030Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\"" Dec 13 13:28:52.844499 containerd[1491]: time="2024-12-13T13:28:52.843113021Z" level=info msg="StartContainer for \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\"" Dec 13 13:28:52.955405 systemd[1]: Started cri-containerd-d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99.scope - libcontainer container d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99. Dec 13 13:28:53.060832 systemd[1]: cri-containerd-d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99.scope: Deactivated successfully. Dec 13 13:28:53.070124 containerd[1491]: time="2024-12-13T13:28:53.068984818Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec217df5_e1f4_4c1b_bcdb_592ea88c86bb.slice/cri-containerd-d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99.scope/memory.events\": no such file or directory" Dec 13 13:28:53.076361 containerd[1491]: time="2024-12-13T13:28:53.076185046Z" level=info msg="StartContainer for \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\" returns successfully" Dec 13 13:28:53.231656 containerd[1491]: time="2024-12-13T13:28:53.230576817Z" level=info msg="shim disconnected" id=d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99 namespace=k8s.io Dec 13 13:28:53.236134 containerd[1491]: time="2024-12-13T13:28:53.232462102Z" level=warning msg="cleaning up after shim disconnected" id=d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99 namespace=k8s.io Dec 13 13:28:53.236134 containerd[1491]: 
time="2024-12-13T13:28:53.232501883Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:53.704037 containerd[1491]: time="2024-12-13T13:28:53.703981030Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:28:53.757440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3751597788.mount: Deactivated successfully. Dec 13 13:28:53.764508 containerd[1491]: time="2024-12-13T13:28:53.764456357Z" level=info msg="CreateContainer within sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\"" Dec 13 13:28:53.766194 containerd[1491]: time="2024-12-13T13:28:53.766158209Z" level=info msg="StartContainer for \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\"" Dec 13 13:28:53.846111 systemd[1]: Started cri-containerd-a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d.scope - libcontainer container a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d. 
Dec 13 13:28:53.880852 containerd[1491]: time="2024-12-13T13:28:53.880194821Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:53.883603 containerd[1491]: time="2024-12-13T13:28:53.883463743Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18906025" Dec 13 13:28:53.887748 containerd[1491]: time="2024-12-13T13:28:53.885794157Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:53.890368 containerd[1491]: time="2024-12-13T13:28:53.890318989Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 6.310194249s" Dec 13 13:28:53.890532 containerd[1491]: time="2024-12-13T13:28:53.890420527Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 13:28:53.900286 containerd[1491]: time="2024-12-13T13:28:53.900245000Z" level=info msg="CreateContainer within sandbox \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 13:28:53.939803 containerd[1491]: time="2024-12-13T13:28:53.939749745Z" level=info msg="StartContainer for 
\"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\" returns successfully" Dec 13 13:28:53.940159 containerd[1491]: time="2024-12-13T13:28:53.940104680Z" level=info msg="CreateContainer within sandbox \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\"" Dec 13 13:28:53.945083 containerd[1491]: time="2024-12-13T13:28:53.945040521Z" level=info msg="StartContainer for \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\"" Dec 13 13:28:54.092345 systemd[1]: Started cri-containerd-b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f.scope - libcontainer container b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f. Dec 13 13:28:54.191975 containerd[1491]: time="2024-12-13T13:28:54.191875143Z" level=info msg="StartContainer for \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\" returns successfully" Dec 13 13:28:54.290959 kubelet[2684]: I1213 13:28:54.290824 2684 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 13:28:54.501550 systemd[1]: Created slice kubepods-burstable-podf7e10d13_a8a9_4097_bb4c_f95030427b1a.slice - libcontainer container kubepods-burstable-podf7e10d13_a8a9_4097_bb4c_f95030427b1a.slice. Dec 13 13:28:54.534998 systemd[1]: Created slice kubepods-burstable-pod553d606b_2534_494e_9c36_28c85fe0c28b.slice - libcontainer container kubepods-burstable-pod553d606b_2534_494e_9c36_28c85fe0c28b.slice. 
Dec 13 13:28:54.595911 kubelet[2684]: I1213 13:28:54.593648 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/553d606b-2534-494e-9c36-28c85fe0c28b-config-volume\") pod \"coredns-6f6b679f8f-j2p89\" (UID: \"553d606b-2534-494e-9c36-28c85fe0c28b\") " pod="kube-system/coredns-6f6b679f8f-j2p89"
Dec 13 13:28:54.596321 kubelet[2684]: I1213 13:28:54.596000 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbbh8\" (UniqueName: \"kubernetes.io/projected/553d606b-2534-494e-9c36-28c85fe0c28b-kube-api-access-gbbh8\") pod \"coredns-6f6b679f8f-j2p89\" (UID: \"553d606b-2534-494e-9c36-28c85fe0c28b\") " pod="kube-system/coredns-6f6b679f8f-j2p89"
Dec 13 13:28:54.596321 kubelet[2684]: I1213 13:28:54.596129 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7e10d13-a8a9-4097-bb4c-f95030427b1a-config-volume\") pod \"coredns-6f6b679f8f-zmg6d\" (UID: \"f7e10d13-a8a9-4097-bb4c-f95030427b1a\") " pod="kube-system/coredns-6f6b679f8f-zmg6d"
Dec 13 13:28:54.596321 kubelet[2684]: I1213 13:28:54.596229 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqlt\" (UniqueName: \"kubernetes.io/projected/f7e10d13-a8a9-4097-bb4c-f95030427b1a-kube-api-access-whqlt\") pod \"coredns-6f6b679f8f-zmg6d\" (UID: \"f7e10d13-a8a9-4097-bb4c-f95030427b1a\") " pod="kube-system/coredns-6f6b679f8f-zmg6d"
Dec 13 13:28:54.854997 containerd[1491]: time="2024-12-13T13:28:54.854112954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zmg6d,Uid:f7e10d13-a8a9-4097-bb4c-f95030427b1a,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:54.861549 containerd[1491]: time="2024-12-13T13:28:54.861450498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j2p89,Uid:553d606b-2534-494e-9c36-28c85fe0c28b,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:54.880599 kubelet[2684]: I1213 13:28:54.878853 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-xbf92" podStartSLOduration=2.721345369 podStartE2EDuration="20.878262018s" podCreationTimestamp="2024-12-13 13:28:34 +0000 UTC" firstStartedPulling="2024-12-13 13:28:35.735957084 +0000 UTC m=+5.559436150" lastFinishedPulling="2024-12-13 13:28:53.892873729 +0000 UTC m=+23.716352799" observedRunningTime="2024-12-13 13:28:54.868877141 +0000 UTC m=+24.692356223" watchObservedRunningTime="2024-12-13 13:28:54.878262018 +0000 UTC m=+24.701741108"
Dec 13 13:28:55.225336 kubelet[2684]: I1213 13:28:55.223160 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2b5qq" podStartSLOduration=8.986125734 podStartE2EDuration="21.223114503s" podCreationTimestamp="2024-12-13 13:28:34 +0000 UTC" firstStartedPulling="2024-12-13 13:28:35.340758829 +0000 UTC m=+5.164237908" lastFinishedPulling="2024-12-13 13:28:47.577747618 +0000 UTC m=+17.401226677" observedRunningTime="2024-12-13 13:28:55.178102536 +0000 UTC m=+25.001581599" watchObservedRunningTime="2024-12-13 13:28:55.223114503 +0000 UTC m=+25.046593586"
Dec 13 13:28:59.469399 systemd-networkd[1406]: cilium_host: Link UP
Dec 13 13:28:59.472784 systemd-networkd[1406]: cilium_net: Link UP
Dec 13 13:28:59.473298 systemd-networkd[1406]: cilium_net: Gained carrier
Dec 13 13:28:59.485731 systemd-networkd[1406]: cilium_host: Gained carrier
Dec 13 13:28:59.663304 systemd-networkd[1406]: cilium_vxlan: Link UP
Dec 13 13:28:59.663323 systemd-networkd[1406]: cilium_vxlan: Gained carrier
Dec 13 13:28:59.789363 systemd-networkd[1406]: cilium_net: Gained IPv6LL
Dec 13 13:28:59.990228 systemd-networkd[1406]: cilium_host: Gained IPv6LL
Dec 13 13:29:00.508176 kernel: NET: Registered PF_ALG protocol family
Dec 13 13:29:01.468864 systemd-networkd[1406]: cilium_vxlan: Gained IPv6LL
Dec 13 13:29:02.097349 systemd-networkd[1406]: lxc_health: Link UP
Dec 13 13:29:02.126645 systemd-networkd[1406]: lxc_health: Gained carrier
Dec 13 13:29:02.612448 systemd-networkd[1406]: lxc24fced2d6274: Link UP
Dec 13 13:29:02.622791 kernel: eth0: renamed from tmpf046e
Dec 13 13:29:02.642263 systemd-networkd[1406]: lxc24fced2d6274: Gained carrier
Dec 13 13:29:02.669268 systemd-networkd[1406]: lxc930bd6076d00: Link UP
Dec 13 13:29:02.696717 kernel: eth0: renamed from tmpecad1
Dec 13 13:29:02.709860 systemd-networkd[1406]: lxc930bd6076d00: Gained carrier
Dec 13 13:29:03.187947 systemd-networkd[1406]: lxc_health: Gained IPv6LL
Dec 13 13:29:04.019985 systemd-networkd[1406]: lxc24fced2d6274: Gained IPv6LL
Dec 13 13:29:04.467949 systemd-networkd[1406]: lxc930bd6076d00: Gained IPv6LL
Dec 13 13:29:07.305618 ntpd[1459]: Listen normally on 7 cilium_host 192.168.0.80:123
Dec 13 13:29:07.306146 ntpd[1459]: Listen normally on 8 cilium_net [fe80::883f:6bff:fe84:2f9d%4]:123
Dec 13 13:29:07.308690 ntpd[1459]: Listen normally on 9 cilium_host [fe80::30e5:50ff:fe83:324e%5]:123
Dec 13 13:29:07.308831 ntpd[1459]: Listen normally on 10 cilium_vxlan [fe80::bc75:bcff:fe24:9350%6]:123
Dec 13 13:29:07.308995 ntpd[1459]: Listen normally on 11 lxc_health [fe80::447c:f7ff:fe64:a7a9%8]:123
Dec 13 13:29:07.309085 ntpd[1459]: Listen normally on 12 lxc24fced2d6274 [fe80::1098:99ff:fe0f:8132%10]:123
Dec 13 13:29:07.309181 ntpd[1459]: Listen normally on 13 lxc930bd6076d00 [fe80::dcb6:22ff:fe65:5e0a%12]:123
Dec 13 13:29:08.055970 containerd[1491]: time="2024-12-13T13:29:08.055566680Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:29:08.060815 containerd[1491]: time="2024-12-13T13:29:08.055939919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:29:08.060815 containerd[1491]: time="2024-12-13T13:29:08.055967441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:08.060815 containerd[1491]: time="2024-12-13T13:29:08.056258014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:08.063744 containerd[1491]: time="2024-12-13T13:29:08.061932515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:29:08.063744 containerd[1491]: time="2024-12-13T13:29:08.062020530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:29:08.063744 containerd[1491]: time="2024-12-13T13:29:08.062047100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:08.063744 containerd[1491]: time="2024-12-13T13:29:08.062307423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:29:08.198629 systemd[1]: Started cri-containerd-ecad1ba05fbacc18156639eadb7091c32071b4aa9f31fc30ef0559a4516b9e35.scope - libcontainer container ecad1ba05fbacc18156639eadb7091c32071b4aa9f31fc30ef0559a4516b9e35.
Dec 13 13:29:08.211045 systemd[1]: Started cri-containerd-f046e118028eb9ee7e9587e391a1ee542cea6e02dade77767114bf3946848dda.scope - libcontainer container f046e118028eb9ee7e9587e391a1ee542cea6e02dade77767114bf3946848dda.
Dec 13 13:29:08.341910 containerd[1491]: time="2024-12-13T13:29:08.340533729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-zmg6d,Uid:f7e10d13-a8a9-4097-bb4c-f95030427b1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecad1ba05fbacc18156639eadb7091c32071b4aa9f31fc30ef0559a4516b9e35\""
Dec 13 13:29:08.355565 containerd[1491]: time="2024-12-13T13:29:08.355320118Z" level=info msg="CreateContainer within sandbox \"ecad1ba05fbacc18156639eadb7091c32071b4aa9f31fc30ef0559a4516b9e35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:29:08.378942 containerd[1491]: time="2024-12-13T13:29:08.378800410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-j2p89,Uid:553d606b-2534-494e-9c36-28c85fe0c28b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f046e118028eb9ee7e9587e391a1ee542cea6e02dade77767114bf3946848dda\""
Dec 13 13:29:08.389404 containerd[1491]: time="2024-12-13T13:29:08.389151893Z" level=info msg="CreateContainer within sandbox \"f046e118028eb9ee7e9587e391a1ee542cea6e02dade77767114bf3946848dda\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 13:29:08.410326 containerd[1491]: time="2024-12-13T13:29:08.410219694Z" level=info msg="CreateContainer within sandbox \"ecad1ba05fbacc18156639eadb7091c32071b4aa9f31fc30ef0559a4516b9e35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79a3073ff60209158211be88333844eec457918f387fb91b39028a6c12c20d7f\""
Dec 13 13:29:08.412713 containerd[1491]: time="2024-12-13T13:29:08.411052565Z" level=info msg="StartContainer for \"79a3073ff60209158211be88333844eec457918f387fb91b39028a6c12c20d7f\""
Dec 13 13:29:08.424922 containerd[1491]: time="2024-12-13T13:29:08.424836211Z" level=info msg="CreateContainer within sandbox \"f046e118028eb9ee7e9587e391a1ee542cea6e02dade77767114bf3946848dda\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7bd0c6c50ed335f7ff18eb9a50b9408ae8121c404cddee2157cd963a3a476e59\""
Dec 13 13:29:08.426872 containerd[1491]: time="2024-12-13T13:29:08.426817237Z" level=info msg="StartContainer for \"7bd0c6c50ed335f7ff18eb9a50b9408ae8121c404cddee2157cd963a3a476e59\""
Dec 13 13:29:08.469235 systemd[1]: Started cri-containerd-79a3073ff60209158211be88333844eec457918f387fb91b39028a6c12c20d7f.scope - libcontainer container 79a3073ff60209158211be88333844eec457918f387fb91b39028a6c12c20d7f.
Dec 13 13:29:08.485899 systemd[1]: Started cri-containerd-7bd0c6c50ed335f7ff18eb9a50b9408ae8121c404cddee2157cd963a3a476e59.scope - libcontainer container 7bd0c6c50ed335f7ff18eb9a50b9408ae8121c404cddee2157cd963a3a476e59.
Dec 13 13:29:08.540572 containerd[1491]: time="2024-12-13T13:29:08.540518505Z" level=info msg="StartContainer for \"79a3073ff60209158211be88333844eec457918f387fb91b39028a6c12c20d7f\" returns successfully"
Dec 13 13:29:08.564625 containerd[1491]: time="2024-12-13T13:29:08.564563932Z" level=info msg="StartContainer for \"7bd0c6c50ed335f7ff18eb9a50b9408ae8121c404cddee2157cd963a3a476e59\" returns successfully"
Dec 13 13:29:08.866987 kubelet[2684]: I1213 13:29:08.865264 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-j2p89" podStartSLOduration=34.865202253 podStartE2EDuration="34.865202253s" podCreationTimestamp="2024-12-13 13:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:08.864174142 +0000 UTC m=+38.687653224" watchObservedRunningTime="2024-12-13 13:29:08.865202253 +0000 UTC m=+38.688681356"
Dec 13 13:29:08.919581 kubelet[2684]: I1213 13:29:08.919493 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-zmg6d" podStartSLOduration=34.919463859 podStartE2EDuration="34.919463859s" podCreationTimestamp="2024-12-13 13:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:29:08.893941962 +0000 UTC m=+38.717421049" watchObservedRunningTime="2024-12-13 13:29:08.919463859 +0000 UTC m=+38.742942943"
Dec 13 13:29:09.077505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143265157.mount: Deactivated successfully.
Dec 13 13:29:15.340567 systemd[1]: Started sshd@9-10.128.0.84:22-147.75.109.163:52010.service - OpenSSH per-connection server daemon (147.75.109.163:52010).
Dec 13 13:29:15.661479 sshd[4062]: Accepted publickey for core from 147.75.109.163 port 52010 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:15.664525 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:15.677287 systemd-logind[1472]: New session 10 of user core.
Dec 13 13:29:15.682916 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 13:29:16.143361 sshd[4064]: Connection closed by 147.75.109.163 port 52010
Dec 13 13:29:16.144541 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:16.151047 systemd[1]: sshd@9-10.128.0.84:22-147.75.109.163:52010.service: Deactivated successfully.
Dec 13 13:29:16.154066 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 13:29:16.155549 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit.
Dec 13 13:29:16.159401 systemd-logind[1472]: Removed session 10.
Dec 13 13:29:21.203157 systemd[1]: Started sshd@10-10.128.0.84:22-147.75.109.163:58384.service - OpenSSH per-connection server daemon (147.75.109.163:58384).
Dec 13 13:29:21.514103 sshd[4076]: Accepted publickey for core from 147.75.109.163 port 58384 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:21.516225 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:21.522618 systemd-logind[1472]: New session 11 of user core.
Dec 13 13:29:21.528996 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 13:29:21.823236 sshd[4079]: Connection closed by 147.75.109.163 port 58384
Dec 13 13:29:21.824258 sshd-session[4076]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:21.829835 systemd[1]: sshd@10-10.128.0.84:22-147.75.109.163:58384.service: Deactivated successfully.
Dec 13 13:29:21.832754 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 13:29:21.835386 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit.
Dec 13 13:29:21.837383 systemd-logind[1472]: Removed session 11.
Dec 13 13:29:25.862047 systemd[1]: Started sshd@11-10.128.0.84:22-113.140.95.250:55371.service - OpenSSH per-connection server daemon (113.140.95.250:55371).
Dec 13 13:29:26.879223 systemd[1]: Started sshd@12-10.128.0.84:22-147.75.109.163:34736.service - OpenSSH per-connection server daemon (147.75.109.163:34736).
Dec 13 13:29:27.189761 sshd[4094]: Accepted publickey for core from 147.75.109.163 port 34736 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:27.191769 sshd-session[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:27.198936 systemd-logind[1472]: New session 12 of user core.
Dec 13 13:29:27.206931 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 13:29:27.495766 sshd[4096]: Connection closed by 147.75.109.163 port 34736
Dec 13 13:29:27.497241 sshd-session[4094]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:27.503221 systemd[1]: sshd@12-10.128.0.84:22-147.75.109.163:34736.service: Deactivated successfully.
Dec 13 13:29:27.506507 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 13:29:27.508463 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit.
Dec 13 13:29:27.510625 systemd-logind[1472]: Removed session 12.
Dec 13 13:29:28.952579 sshd[4091]: Invalid user ubnt from 113.140.95.250 port 55371
Dec 13 13:29:29.507800 sshd[4091]: PAM: Permission denied for illegal user ubnt from 113.140.95.250
Dec 13 13:29:29.508348 sshd[4091]: Failed keyboard-interactive/pam for invalid user ubnt from 113.140.95.250 port 55371 ssh2
Dec 13 13:29:30.148832 sshd[4091]: Connection closed by invalid user ubnt 113.140.95.250 port 55371 [preauth]
Dec 13 13:29:30.153509 systemd[1]: sshd@11-10.128.0.84:22-113.140.95.250:55371.service: Deactivated successfully.
Dec 13 13:29:32.551066 systemd[1]: Started sshd@13-10.128.0.84:22-147.75.109.163:34750.service - OpenSSH per-connection server daemon (147.75.109.163:34750).
Dec 13 13:29:32.853542 sshd[4112]: Accepted publickey for core from 147.75.109.163 port 34750 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:32.855462 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:32.862927 systemd-logind[1472]: New session 13 of user core.
Dec 13 13:29:32.871886 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 13:29:33.167534 sshd[4114]: Connection closed by 147.75.109.163 port 34750
Dec 13 13:29:33.168588 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:33.174900 systemd[1]: sshd@13-10.128.0.84:22-147.75.109.163:34750.service: Deactivated successfully.
Dec 13 13:29:33.177880 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 13:29:33.179172 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit.
Dec 13 13:29:33.180648 systemd-logind[1472]: Removed session 13.
Dec 13 13:29:33.230523 systemd[1]: Started sshd@14-10.128.0.84:22-147.75.109.163:34762.service - OpenSSH per-connection server daemon (147.75.109.163:34762).
Dec 13 13:29:33.528419 sshd[4126]: Accepted publickey for core from 147.75.109.163 port 34762 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:33.530580 sshd-session[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:33.537406 systemd-logind[1472]: New session 14 of user core.
Dec 13 13:29:33.543878 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 13:29:33.897345 sshd[4128]: Connection closed by 147.75.109.163 port 34762
Dec 13 13:29:33.898436 sshd-session[4126]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:33.910726 systemd[1]: sshd@14-10.128.0.84:22-147.75.109.163:34762.service: Deactivated successfully.
Dec 13 13:29:33.917568 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 13:29:33.920877 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.
Dec 13 13:29:33.924930 systemd-logind[1472]: Removed session 14.
Dec 13 13:29:33.956082 systemd[1]: Started sshd@15-10.128.0.84:22-147.75.109.163:34764.service - OpenSSH per-connection server daemon (147.75.109.163:34764).
Dec 13 13:29:34.258827 sshd[4137]: Accepted publickey for core from 147.75.109.163 port 34764 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:34.261015 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:34.267917 systemd-logind[1472]: New session 15 of user core.
Dec 13 13:29:34.277881 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 13:29:34.569174 sshd[4139]: Connection closed by 147.75.109.163 port 34764
Dec 13 13:29:34.570401 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:34.576473 systemd[1]: sshd@15-10.128.0.84:22-147.75.109.163:34764.service: Deactivated successfully.
Dec 13 13:29:34.579807 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 13:29:34.581306 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.
Dec 13 13:29:34.583302 systemd-logind[1472]: Removed session 15.
Dec 13 13:29:39.626055 systemd[1]: Started sshd@16-10.128.0.84:22-147.75.109.163:59846.service - OpenSSH per-connection server daemon (147.75.109.163:59846).
Dec 13 13:29:39.931793 sshd[4154]: Accepted publickey for core from 147.75.109.163 port 59846 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:39.933998 sshd-session[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:39.940792 systemd-logind[1472]: New session 16 of user core.
Dec 13 13:29:39.945916 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 13:29:40.229955 sshd[4156]: Connection closed by 147.75.109.163 port 59846
Dec 13 13:29:40.231056 sshd-session[4154]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:40.235544 systemd[1]: sshd@16-10.128.0.84:22-147.75.109.163:59846.service: Deactivated successfully.
Dec 13 13:29:40.239126 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 13:29:40.242354 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit.
Dec 13 13:29:40.244772 systemd-logind[1472]: Removed session 16.
Dec 13 13:29:45.288063 systemd[1]: Started sshd@17-10.128.0.84:22-147.75.109.163:59858.service - OpenSSH per-connection server daemon (147.75.109.163:59858).
Dec 13 13:29:45.585014 sshd[4167]: Accepted publickey for core from 147.75.109.163 port 59858 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:45.586803 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:45.593563 systemd-logind[1472]: New session 17 of user core.
Dec 13 13:29:45.596882 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 13:29:45.888620 sshd[4169]: Connection closed by 147.75.109.163 port 59858
Dec 13 13:29:45.890235 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:45.895853 systemd[1]: sshd@17-10.128.0.84:22-147.75.109.163:59858.service: Deactivated successfully.
Dec 13 13:29:45.898756 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 13:29:45.900088 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit.
Dec 13 13:29:45.902122 systemd-logind[1472]: Removed session 17.
Dec 13 13:29:45.950464 systemd[1]: Started sshd@18-10.128.0.84:22-147.75.109.163:59860.service - OpenSSH per-connection server daemon (147.75.109.163:59860).
Dec 13 13:29:46.246301 sshd[4180]: Accepted publickey for core from 147.75.109.163 port 59860 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:46.248280 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:46.255476 systemd-logind[1472]: New session 18 of user core.
Dec 13 13:29:46.264887 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 13:29:46.694459 sshd[4182]: Connection closed by 147.75.109.163 port 59860
Dec 13 13:29:46.695698 sshd-session[4180]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:46.700896 systemd[1]: sshd@18-10.128.0.84:22-147.75.109.163:59860.service: Deactivated successfully.
Dec 13 13:29:46.704579 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 13:29:46.707391 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
Dec 13 13:29:46.709390 systemd-logind[1472]: Removed session 18.
Dec 13 13:29:46.750138 systemd[1]: Started sshd@19-10.128.0.84:22-147.75.109.163:39044.service - OpenSSH per-connection server daemon (147.75.109.163:39044).
Dec 13 13:29:47.046983 sshd[4191]: Accepted publickey for core from 147.75.109.163 port 39044 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:47.049021 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:47.055107 systemd-logind[1472]: New session 19 of user core.
Dec 13 13:29:47.060929 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 13:29:49.320858 sshd[4193]: Connection closed by 147.75.109.163 port 39044
Dec 13 13:29:49.322794 sshd-session[4191]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:49.333970 systemd[1]: sshd@19-10.128.0.84:22-147.75.109.163:39044.service: Deactivated successfully.
Dec 13 13:29:49.336549 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit.
Dec 13 13:29:49.340867 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 13:29:49.347023 systemd-logind[1472]: Removed session 19.
Dec 13 13:29:49.382056 systemd[1]: Started sshd@20-10.128.0.84:22-147.75.109.163:39054.service - OpenSSH per-connection server daemon (147.75.109.163:39054).
Dec 13 13:29:49.691844 sshd[4209]: Accepted publickey for core from 147.75.109.163 port 39054 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:49.693381 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:49.703451 systemd-logind[1472]: New session 20 of user core.
Dec 13 13:29:49.710914 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 13:29:50.134310 sshd[4211]: Connection closed by 147.75.109.163 port 39054
Dec 13 13:29:50.136993 sshd-session[4209]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:50.141540 systemd[1]: sshd@20-10.128.0.84:22-147.75.109.163:39054.service: Deactivated successfully.
Dec 13 13:29:50.145244 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 13:29:50.147521 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit.
Dec 13 13:29:50.149596 systemd-logind[1472]: Removed session 20.
Dec 13 13:29:50.193799 systemd[1]: Started sshd@21-10.128.0.84:22-147.75.109.163:39064.service - OpenSSH per-connection server daemon (147.75.109.163:39064).
Dec 13 13:29:50.506254 sshd[4220]: Accepted publickey for core from 147.75.109.163 port 39064 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:50.507868 sshd-session[4220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:50.515573 systemd-logind[1472]: New session 21 of user core.
Dec 13 13:29:50.518944 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 13:29:50.804189 sshd[4222]: Connection closed by 147.75.109.163 port 39064
Dec 13 13:29:50.804911 sshd-session[4220]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:50.810227 systemd[1]: sshd@21-10.128.0.84:22-147.75.109.163:39064.service: Deactivated successfully.
Dec 13 13:29:50.813544 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 13:29:50.816165 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit.
Dec 13 13:29:50.817981 systemd-logind[1472]: Removed session 21.
Dec 13 13:29:55.874449 systemd[1]: Started sshd@22-10.128.0.84:22-147.75.109.163:39072.service - OpenSSH per-connection server daemon (147.75.109.163:39072).
Dec 13 13:29:56.201745 sshd[4236]: Accepted publickey for core from 147.75.109.163 port 39072 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:29:56.206922 sshd-session[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:29:56.218429 systemd-logind[1472]: New session 22 of user core.
Dec 13 13:29:56.230103 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 13:29:56.530975 sshd[4238]: Connection closed by 147.75.109.163 port 39072
Dec 13 13:29:56.532072 sshd-session[4236]: pam_unix(sshd:session): session closed for user core
Dec 13 13:29:56.538440 systemd[1]: sshd@22-10.128.0.84:22-147.75.109.163:39072.service: Deactivated successfully.
Dec 13 13:29:56.542265 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 13:29:56.544194 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit.
Dec 13 13:29:56.546487 systemd-logind[1472]: Removed session 22.
Dec 13 13:30:01.600079 systemd[1]: Started sshd@23-10.128.0.84:22-147.75.109.163:41936.service - OpenSSH per-connection server daemon (147.75.109.163:41936).
Dec 13 13:30:01.911410 sshd[4249]: Accepted publickey for core from 147.75.109.163 port 41936 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:30:01.913512 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:01.921967 systemd-logind[1472]: New session 23 of user core.
Dec 13 13:30:01.925939 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 13:30:02.235795 sshd[4251]: Connection closed by 147.75.109.163 port 41936
Dec 13 13:30:02.237483 sshd-session[4249]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:02.243583 systemd[1]: sshd@23-10.128.0.84:22-147.75.109.163:41936.service: Deactivated successfully.
Dec 13 13:30:02.250363 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 13:30:02.253337 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit.
Dec 13 13:30:02.256433 systemd-logind[1472]: Removed session 23.
Dec 13 13:30:07.303239 systemd[1]: Started sshd@24-10.128.0.84:22-147.75.109.163:40464.service - OpenSSH per-connection server daemon (147.75.109.163:40464).
Dec 13 13:30:07.621273 sshd[4264]: Accepted publickey for core from 147.75.109.163 port 40464 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:30:07.623512 sshd-session[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:07.630159 systemd-logind[1472]: New session 24 of user core.
Dec 13 13:30:07.638964 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 13:30:07.927179 sshd[4266]: Connection closed by 147.75.109.163 port 40464
Dec 13 13:30:07.928206 sshd-session[4264]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:07.933586 systemd[1]: sshd@24-10.128.0.84:22-147.75.109.163:40464.service: Deactivated successfully.
Dec 13 13:30:07.937822 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 13:30:07.940022 systemd-logind[1472]: Session 24 logged out. Waiting for processes to exit.
Dec 13 13:30:07.942273 systemd-logind[1472]: Removed session 24.
Dec 13 13:30:13.011483 systemd[1]: Started sshd@25-10.128.0.84:22-147.75.109.163:40476.service - OpenSSH per-connection server daemon (147.75.109.163:40476).
Dec 13 13:30:13.340371 sshd[4276]: Accepted publickey for core from 147.75.109.163 port 40476 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:30:13.344110 sshd-session[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:13.355351 systemd-logind[1472]: New session 25 of user core.
Dec 13 13:30:13.365445 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 13:30:13.718553 sshd[4279]: Connection closed by 147.75.109.163 port 40476
Dec 13 13:30:13.720139 sshd-session[4276]: pam_unix(sshd:session): session closed for user core
Dec 13 13:30:13.726305 systemd[1]: sshd@25-10.128.0.84:22-147.75.109.163:40476.service: Deactivated successfully.
Dec 13 13:30:13.730085 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 13:30:13.733864 systemd-logind[1472]: Session 25 logged out. Waiting for processes to exit.
Dec 13 13:30:13.735567 systemd-logind[1472]: Removed session 25.
Dec 13 13:30:13.776089 systemd[1]: Started sshd@26-10.128.0.84:22-147.75.109.163:40484.service - OpenSSH per-connection server daemon (147.75.109.163:40484).
Dec 13 13:30:14.078656 sshd[4289]: Accepted publickey for core from 147.75.109.163 port 40484 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE
Dec 13 13:30:14.080003 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 13:30:14.092324 systemd-logind[1472]: New session 26 of user core.
Dec 13 13:30:14.096981 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 13:30:16.186916 containerd[1491]: time="2024-12-13T13:30:16.186634785Z" level=info msg="StopContainer for \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\" with timeout 30 (s)"
Dec 13 13:30:16.196746 containerd[1491]: time="2024-12-13T13:30:16.195563216Z" level=info msg="Stop container \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\" with signal terminated"
Dec 13 13:30:16.381329 systemd[1]: cri-containerd-b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f.scope: Deactivated successfully.
Dec 13 13:30:16.450096 containerd[1491]: time="2024-12-13T13:30:16.449211251Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 13:30:16.475098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f-rootfs.mount: Deactivated successfully.
Dec 13 13:30:16.480587 containerd[1491]: time="2024-12-13T13:30:16.480534924Z" level=info msg="StopContainer for \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\" with timeout 2 (s)" Dec 13 13:30:16.481544 containerd[1491]: time="2024-12-13T13:30:16.481440884Z" level=info msg="Stop container \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\" with signal terminated" Dec 13 13:30:16.533263 containerd[1491]: time="2024-12-13T13:30:16.526375993Z" level=info msg="shim disconnected" id=b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f namespace=k8s.io Dec 13 13:30:16.533263 containerd[1491]: time="2024-12-13T13:30:16.527235657Z" level=warning msg="cleaning up after shim disconnected" id=b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f namespace=k8s.io Dec 13 13:30:16.533263 containerd[1491]: time="2024-12-13T13:30:16.527283799Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:16.528192 systemd-networkd[1406]: lxc_health: Link DOWN Dec 13 13:30:16.528307 systemd-networkd[1406]: lxc_health: Lost carrier Dec 13 13:30:16.607426 systemd[1]: cri-containerd-a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d.scope: Deactivated successfully. Dec 13 13:30:16.609729 systemd[1]: cri-containerd-a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d.scope: Consumed 12.214s CPU time. 
Dec 13 13:30:16.657951 containerd[1491]: time="2024-12-13T13:30:16.657627765Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:30:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:30:16.669330 containerd[1491]: time="2024-12-13T13:30:16.668826337Z" level=info msg="StopContainer for \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\" returns successfully" Dec 13 13:30:16.672858 containerd[1491]: time="2024-12-13T13:30:16.672450611Z" level=info msg="StopPodSandbox for \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\"" Dec 13 13:30:16.676704 containerd[1491]: time="2024-12-13T13:30:16.672980876Z" level=info msg="Container to stop \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 13:30:16.686440 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1-shm.mount: Deactivated successfully. Dec 13 13:30:16.705628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d-rootfs.mount: Deactivated successfully. Dec 13 13:30:16.717202 systemd[1]: cri-containerd-c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1.scope: Deactivated successfully. 
Dec 13 13:30:16.734101 containerd[1491]: time="2024-12-13T13:30:16.733625396Z" level=info msg="shim disconnected" id=a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d namespace=k8s.io
Dec 13 13:30:16.734101 containerd[1491]: time="2024-12-13T13:30:16.733817886Z" level=warning msg="cleaning up after shim disconnected" id=a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d namespace=k8s.io
Dec 13 13:30:16.734101 containerd[1491]: time="2024-12-13T13:30:16.733834602Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:30:16.797828 containerd[1491]: time="2024-12-13T13:30:16.797089016Z" level=info msg="shim disconnected" id=c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1 namespace=k8s.io
Dec 13 13:30:16.797828 containerd[1491]: time="2024-12-13T13:30:16.797510495Z" level=warning msg="cleaning up after shim disconnected" id=c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1 namespace=k8s.io
Dec 13 13:30:16.797828 containerd[1491]: time="2024-12-13T13:30:16.797593352Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:30:16.804098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1-rootfs.mount: Deactivated successfully.
Dec 13 13:30:16.808701 containerd[1491]: time="2024-12-13T13:30:16.808504084Z" level=info msg="StopContainer for \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\" returns successfully"
Dec 13 13:30:16.812046 containerd[1491]: time="2024-12-13T13:30:16.811866089Z" level=info msg="StopPodSandbox for \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\""
Dec 13 13:30:16.812046 containerd[1491]: time="2024-12-13T13:30:16.811942664Z" level=info msg="Container to stop \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:30:16.812046 containerd[1491]: time="2024-12-13T13:30:16.812007158Z" level=info msg="Container to stop \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:30:16.812714 containerd[1491]: time="2024-12-13T13:30:16.812522106Z" level=info msg="Container to stop \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:30:16.812714 containerd[1491]: time="2024-12-13T13:30:16.812587919Z" level=info msg="Container to stop \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:30:16.812714 containerd[1491]: time="2024-12-13T13:30:16.812605965Z" level=info msg="Container to stop \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 13:30:16.846083 systemd[1]: cri-containerd-8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822.scope: Deactivated successfully.
Dec 13 13:30:16.873332 containerd[1491]: time="2024-12-13T13:30:16.873264264Z" level=info msg="TearDown network for sandbox \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\" successfully"
Dec 13 13:30:16.873635 containerd[1491]: time="2024-12-13T13:30:16.873600004Z" level=info msg="StopPodSandbox for \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\" returns successfully"
Dec 13 13:30:16.923959 containerd[1491]: time="2024-12-13T13:30:16.923451784Z" level=info msg="shim disconnected" id=8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822 namespace=k8s.io
Dec 13 13:30:16.927523 containerd[1491]: time="2024-12-13T13:30:16.926397768Z" level=warning msg="cleaning up after shim disconnected" id=8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822 namespace=k8s.io
Dec 13 13:30:16.927523 containerd[1491]: time="2024-12-13T13:30:16.926734751Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 13:30:16.933811 kubelet[2684]: I1213 13:30:16.933110 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea20f771-60c4-426c-a541-764e4dcc998d-cilium-config-path\") pod \"ea20f771-60c4-426c-a541-764e4dcc998d\" (UID: \"ea20f771-60c4-426c-a541-764e4dcc998d\") "
Dec 13 13:30:16.938165 kubelet[2684]: I1213 13:30:16.938076 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2d9v\" (UniqueName: \"kubernetes.io/projected/ea20f771-60c4-426c-a541-764e4dcc998d-kube-api-access-k2d9v\") pod \"ea20f771-60c4-426c-a541-764e4dcc998d\" (UID: \"ea20f771-60c4-426c-a541-764e4dcc998d\") "
Dec 13 13:30:16.982772 kubelet[2684]: I1213 13:30:16.981443 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea20f771-60c4-426c-a541-764e4dcc998d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea20f771-60c4-426c-a541-764e4dcc998d" (UID: "ea20f771-60c4-426c-a541-764e4dcc998d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 13:30:16.994352 kubelet[2684]: I1213 13:30:16.994144 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea20f771-60c4-426c-a541-764e4dcc998d-kube-api-access-k2d9v" (OuterVolumeSpecName: "kube-api-access-k2d9v") pod "ea20f771-60c4-426c-a541-764e4dcc998d" (UID: "ea20f771-60c4-426c-a541-764e4dcc998d"). InnerVolumeSpecName "kube-api-access-k2d9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 13:30:17.021490 containerd[1491]: time="2024-12-13T13:30:17.021232352Z" level=info msg="TearDown network for sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" successfully"
Dec 13 13:30:17.021490 containerd[1491]: time="2024-12-13T13:30:17.021286966Z" level=info msg="StopPodSandbox for \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" returns successfully"
Dec 13 13:30:17.042091 kubelet[2684]: I1213 13:30:17.041989 2684 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-k2d9v\" (UniqueName: \"kubernetes.io/projected/ea20f771-60c4-426c-a541-764e4dcc998d-kube-api-access-k2d9v\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.042482 kubelet[2684]: I1213 13:30:17.042446 2684 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea20f771-60c4-426c-a541-764e4dcc998d-cilium-config-path\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.061734 kubelet[2684]: I1213 13:30:17.061226 2684 scope.go:117] "RemoveContainer" containerID="a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d"
Dec 13 13:30:17.065217 containerd[1491]: time="2024-12-13T13:30:17.065143020Z" level=info msg="RemoveContainer for \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\""
Dec 13 13:30:17.084817 containerd[1491]: time="2024-12-13T13:30:17.084724291Z" level=info msg="RemoveContainer for \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\" returns successfully"
Dec 13 13:30:17.087453 kubelet[2684]: I1213 13:30:17.087365 2684 scope.go:117] "RemoveContainer" containerID="d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99"
Dec 13 13:30:17.093628 containerd[1491]: time="2024-12-13T13:30:17.090961845Z" level=info msg="RemoveContainer for \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\""
Dec 13 13:30:17.093248 systemd[1]: Removed slice kubepods-besteffort-podea20f771_60c4_426c_a541_764e4dcc998d.slice - libcontainer container kubepods-besteffort-podea20f771_60c4_426c_a541_764e4dcc998d.slice.
Dec 13 13:30:17.093433 systemd[1]: kubepods-besteffort-podea20f771_60c4_426c_a541_764e4dcc998d.slice: Consumed 1.018s CPU time.
Dec 13 13:30:17.103920 containerd[1491]: time="2024-12-13T13:30:17.103820776Z" level=info msg="RemoveContainer for \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\" returns successfully"
Dec 13 13:30:17.105369 kubelet[2684]: I1213 13:30:17.105210 2684 scope.go:117] "RemoveContainer" containerID="20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee"
Dec 13 13:30:17.108487 containerd[1491]: time="2024-12-13T13:30:17.108417508Z" level=info msg="RemoveContainer for \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\""
Dec 13 13:30:17.118979 containerd[1491]: time="2024-12-13T13:30:17.118926540Z" level=info msg="RemoveContainer for \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\" returns successfully"
Dec 13 13:30:17.119350 kubelet[2684]: I1213 13:30:17.119278 2684 scope.go:117] "RemoveContainer" containerID="475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46"
Dec 13 13:30:17.121774 containerd[1491]: time="2024-12-13T13:30:17.121727331Z" level=info msg="RemoveContainer for \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\""
Dec 13 13:30:17.128705 containerd[1491]: time="2024-12-13T13:30:17.128635570Z" level=info msg="RemoveContainer for \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\" returns successfully"
Dec 13 13:30:17.129119 kubelet[2684]: I1213 13:30:17.129071 2684 scope.go:117] "RemoveContainer" containerID="10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618"
Dec 13 13:30:17.131356 containerd[1491]: time="2024-12-13T13:30:17.131299857Z" level=info msg="RemoveContainer for \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\""
Dec 13 13:30:17.137097 containerd[1491]: time="2024-12-13T13:30:17.137058735Z" level=info msg="RemoveContainer for \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\" returns successfully"
Dec 13 13:30:17.137552 kubelet[2684]: I1213 13:30:17.137486 2684 scope.go:117] "RemoveContainer" containerID="a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d"
Dec 13 13:30:17.137933 containerd[1491]: time="2024-12-13T13:30:17.137852850Z" level=error msg="ContainerStatus for \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\": not found"
Dec 13 13:30:17.138361 kubelet[2684]: E1213 13:30:17.138303 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\": not found" containerID="a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d"
Dec 13 13:30:17.138904 kubelet[2684]: I1213 13:30:17.138446 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d"} err="failed to get container status \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"a13e2a041a76af0f784cc190938e81a57579005a750701a8769ead0df4ff2c6d\": not found"
Dec 13 13:30:17.139155 kubelet[2684]: I1213 13:30:17.138940 2684 scope.go:117] "RemoveContainer" containerID="d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99"
Dec 13 13:30:17.139327 containerd[1491]: time="2024-12-13T13:30:17.139296330Z" level=error msg="ContainerStatus for \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\": not found"
Dec 13 13:30:17.139486 kubelet[2684]: E1213 13:30:17.139465 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\": not found" containerID="d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99"
Dec 13 13:30:17.139559 kubelet[2684]: I1213 13:30:17.139500 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99"} err="failed to get container status \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\": rpc error: code = NotFound desc = an error occurred when try to find container \"d9d7047baf770ad0e3682cf7a659c2f5dbc4e1d81636e42388a39740236a5f99\": not found"
Dec 13 13:30:17.139652 kubelet[2684]: I1213 13:30:17.139565 2684 scope.go:117] "RemoveContainer" containerID="20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee"
Dec 13 13:30:17.140057 containerd[1491]: time="2024-12-13T13:30:17.139940714Z" level=error msg="ContainerStatus for \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\": not found"
Dec 13 13:30:17.140240 kubelet[2684]: E1213 13:30:17.140217 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\": not found" containerID="20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee"
Dec 13 13:30:17.140327 kubelet[2684]: I1213 13:30:17.140250 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee"} err="failed to get container status \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"20cb4edba90af8bbf554f95abe5c032714c0114f1eb6804fa57048bf4ede84ee\": not found"
Dec 13 13:30:17.140327 kubelet[2684]: I1213 13:30:17.140277 2684 scope.go:117] "RemoveContainer" containerID="475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46"
Dec 13 13:30:17.140642 containerd[1491]: time="2024-12-13T13:30:17.140569020Z" level=error msg="ContainerStatus for \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\": not found"
Dec 13 13:30:17.140875 kubelet[2684]: E1213 13:30:17.140825 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\": not found" containerID="475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46"
Dec 13 13:30:17.141034 kubelet[2684]: I1213 13:30:17.140877 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46"} err="failed to get container status \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\": rpc error: code = NotFound desc = an error occurred when try to find container \"475d98b824b404b7c371435d1cb9c8c07f746a0cd882263d4232137afbee3c46\": not found"
Dec 13 13:30:17.141034 kubelet[2684]: I1213 13:30:17.140931 2684 scope.go:117] "RemoveContainer" containerID="10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618"
Dec 13 13:30:17.141391 containerd[1491]: time="2024-12-13T13:30:17.141318666Z" level=error msg="ContainerStatus for \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\": not found"
Dec 13 13:30:17.141696 kubelet[2684]: E1213 13:30:17.141656 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\": not found" containerID="10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618"
Dec 13 13:30:17.141815 kubelet[2684]: I1213 13:30:17.141705 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618"} err="failed to get container status \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\": rpc error: code = NotFound desc = an error occurred when try to find container \"10232d31b24a19451111ed43b4e101d240d650c24ff7df165c9c2a5431efa618\": not found"
Dec 13 13:30:17.141815 kubelet[2684]: I1213 13:30:17.141732 2684 scope.go:117] "RemoveContainer" containerID="b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f"
Dec 13 13:30:17.143005 kubelet[2684]: I1213 13:30:17.142967 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-clustermesh-secrets\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143105 kubelet[2684]: I1213 13:30:17.143060 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-lib-modules\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143105 kubelet[2684]: I1213 13:30:17.143098 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-cgroup\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143209 kubelet[2684]: I1213 13:30:17.143173 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-bpf-maps\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143320 kubelet[2684]: I1213 13:30:17.143213 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cd4z\" (UniqueName: \"kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-kube-api-access-7cd4z\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143320 kubelet[2684]: I1213 13:30:17.143278 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-hostproc\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143320 kubelet[2684]: I1213 13:30:17.143309 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-config-path\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143625 kubelet[2684]: I1213 13:30:17.143358 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-etc-cni-netd\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143625 kubelet[2684]: I1213 13:30:17.143387 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-host-proc-sys-net\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143625 kubelet[2684]: I1213 13:30:17.143502 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-xtables-lock\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143625 kubelet[2684]: I1213 13:30:17.143536 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-host-proc-sys-kernel\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143934 kubelet[2684]: I1213 13:30:17.143648 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-hubble-tls\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143934 kubelet[2684]: I1213 13:30:17.143747 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-run\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.143934 kubelet[2684]: I1213 13:30:17.143851 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cni-path\") pod \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\" (UID: \"ec217df5-e1f4-4c1b-bcdb-592ea88c86bb\") "
Dec 13 13:30:17.144119 kubelet[2684]: I1213 13:30:17.144085 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cni-path" (OuterVolumeSpecName: "cni-path") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.145468 kubelet[2684]: I1213 13:30:17.144843 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.145468 kubelet[2684]: I1213 13:30:17.144996 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.145468 kubelet[2684]: I1213 13:30:17.145026 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.145468 kubelet[2684]: I1213 13:30:17.145181 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.145468 kubelet[2684]: I1213 13:30:17.145253 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-hostproc" (OuterVolumeSpecName: "hostproc") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.147893 kubelet[2684]: I1213 13:30:17.147855 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.148570 kubelet[2684]: I1213 13:30:17.148014 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.148570 kubelet[2684]: I1213 13:30:17.148068 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.149006 containerd[1491]: time="2024-12-13T13:30:17.148660559Z" level=info msg="RemoveContainer for \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\""
Dec 13 13:30:17.153537 kubelet[2684]: I1213 13:30:17.152883 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 13:30:17.158079 containerd[1491]: time="2024-12-13T13:30:17.157840578Z" level=info msg="RemoveContainer for \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\" returns successfully"
Dec 13 13:30:17.158936 kubelet[2684]: I1213 13:30:17.158895 2684 scope.go:117] "RemoveContainer" containerID="b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f"
Dec 13 13:30:17.159264 containerd[1491]: time="2024-12-13T13:30:17.159213604Z" level=error msg="ContainerStatus for \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\": not found"
Dec 13 13:30:17.159496 kubelet[2684]: E1213 13:30:17.159438 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\": not found" containerID="b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f"
Dec 13 13:30:17.159804 kubelet[2684]: I1213 13:30:17.159511 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f"} err="failed to get container status \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b185e4f7b51647eb41b2b025ae818b1f6e7ada5338013bc72c1df2335fd0a51f\": not found"
Dec 13 13:30:17.162268 kubelet[2684]: I1213 13:30:17.162167 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 13:30:17.162353 kubelet[2684]: I1213 13:30:17.162289 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-kube-api-access-7cd4z" (OuterVolumeSpecName: "kube-api-access-7cd4z") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "kube-api-access-7cd4z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 13:30:17.162840 kubelet[2684]: I1213 13:30:17.162767 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 13:30:17.166427 kubelet[2684]: I1213 13:30:17.165845 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" (UID: "ec217df5-e1f4-4c1b-bcdb-592ea88c86bb"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 13:30:17.245954 kubelet[2684]: I1213 13:30:17.245003 2684 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-run\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.245954 kubelet[2684]: I1213 13:30:17.245091 2684 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-hubble-tls\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.245954 kubelet[2684]: I1213 13:30:17.245147 2684 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cni-path\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.245954 kubelet[2684]: I1213 13:30:17.245181 2684 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-clustermesh-secrets\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.245954 kubelet[2684]: I1213 13:30:17.245217 2684 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-cgroup\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.245954 kubelet[2684]: I1213 13:30:17.245245 2684 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-bpf-maps\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.245954 kubelet[2684]: I1213 13:30:17.245273 2684 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-lib-modules\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.246687 kubelet[2684]: I1213 13:30:17.245306 2684 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-7cd4z\" (UniqueName: \"kubernetes.io/projected/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-kube-api-access-7cd4z\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.246687 kubelet[2684]: I1213 13:30:17.245338 2684 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-hostproc\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.246687 kubelet[2684]: I1213 13:30:17.245437 2684 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-etc-cni-netd\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.246687 kubelet[2684]: I1213 13:30:17.245492 2684 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-host-proc-sys-net\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.246687 kubelet[2684]: I1213 13:30:17.245551 2684 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-xtables-lock\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.246687 kubelet[2684]: I1213 13:30:17.245581 2684 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-cilium-config-path\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.246687 kubelet[2684]: I1213 13:30:17.245608 2684 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb-host-proc-sys-kernel\") on node \"ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 13:30:17.330028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822-rootfs.mount: Deactivated successfully.
Dec 13 13:30:17.330198 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822-shm.mount: Deactivated successfully.
Dec 13 13:30:17.330318 systemd[1]: var-lib-kubelet-pods-ea20f771\x2d60c4\x2d426c\x2da541\x2d764e4dcc998d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk2d9v.mount: Deactivated successfully.
Dec 13 13:30:17.330533 systemd[1]: var-lib-kubelet-pods-ec217df5\x2de1f4\x2d4c1b\x2dbcdb\x2d592ea88c86bb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7cd4z.mount: Deactivated successfully.
Dec 13 13:30:17.330800 systemd[1]: var-lib-kubelet-pods-ec217df5\x2de1f4\x2d4c1b\x2dbcdb\x2d592ea88c86bb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 13:30:17.330987 systemd[1]: var-lib-kubelet-pods-ec217df5\x2de1f4\x2d4c1b\x2dbcdb\x2d592ea88c86bb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 13:30:17.374923 systemd[1]: Removed slice kubepods-burstable-podec217df5_e1f4_4c1b_bcdb_592ea88c86bb.slice - libcontainer container kubepods-burstable-podec217df5_e1f4_4c1b_bcdb_592ea88c86bb.slice.
Dec 13 13:30:17.375154 systemd[1]: kubepods-burstable-podec217df5_e1f4_4c1b_bcdb_592ea88c86bb.slice: Consumed 12.522s CPU time. Dec 13 13:30:18.064604 sshd[4294]: Connection closed by 147.75.109.163 port 40484 Dec 13 13:30:18.067061 sshd-session[4289]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:18.081739 systemd[1]: sshd@26-10.128.0.84:22-147.75.109.163:40484.service: Deactivated successfully. Dec 13 13:30:18.085621 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 13:30:18.086347 systemd[1]: session-26.scope: Consumed 1.259s CPU time. Dec 13 13:30:18.087365 systemd-logind[1472]: Session 26 logged out. Waiting for processes to exit. Dec 13 13:30:18.089252 systemd-logind[1472]: Removed session 26. Dec 13 13:30:18.122639 systemd[1]: Started sshd@27-10.128.0.84:22-147.75.109.163:52056.service - OpenSSH per-connection server daemon (147.75.109.163:52056). Dec 13 13:30:18.344737 kubelet[2684]: I1213 13:30:18.344218 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea20f771-60c4-426c-a541-764e4dcc998d" path="/var/lib/kubelet/pods/ea20f771-60c4-426c-a541-764e4dcc998d/volumes" Dec 13 13:30:18.345642 kubelet[2684]: I1213 13:30:18.345307 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" path="/var/lib/kubelet/pods/ec217df5-e1f4-4c1b-bcdb-592ea88c86bb/volumes" Dec 13 13:30:18.425621 sshd[4457]: Accepted publickey for core from 147.75.109.163 port 52056 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE Dec 13 13:30:18.427532 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:18.437253 systemd-logind[1472]: New session 27 of user core. Dec 13 13:30:18.444894 systemd[1]: Started session-27.scope - Session 27 of User core. 
Dec 13 13:30:19.305286 ntpd[1459]: Deleting interface #11 lxc_health, fe80::447c:f7ff:fe64:a7a9%8#123, interface stats: received=0, sent=0, dropped=0, active_time=72 secs Dec 13 13:30:19.737110 sshd[4459]: Connection closed by 147.75.109.163 port 52056 Dec 13 13:30:19.738353 sshd-session[4457]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:19.747431 systemd-logind[1472]: Session 27 logged out. Waiting for processes to exit. Dec 13 13:30:19.750591 systemd[1]: sshd@27-10.128.0.84:22-147.75.109.163:52056.service: Deactivated successfully. Dec 13 13:30:19.757026 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 13:30:19.760233 systemd[1]: session-27.scope: Consumed 1.058s CPU time. Dec 13 13:30:19.770820 systemd-logind[1472]: Removed session 27. Dec 13 13:30:19.804836 systemd[1]: Started sshd@28-10.128.0.84:22-147.75.109.163:52058.service - OpenSSH per-connection server daemon (147.75.109.163:52058). 
Dec 13 13:30:19.858929 kubelet[2684]: E1213 13:30:19.858823 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" containerName="clean-cilium-state" Dec 13 13:30:19.858929 kubelet[2684]: E1213 13:30:19.858933 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea20f771-60c4-426c-a541-764e4dcc998d" containerName="cilium-operator" Dec 13 13:30:19.859545 kubelet[2684]: E1213 13:30:19.858950 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" containerName="mount-cgroup" Dec 13 13:30:19.859545 kubelet[2684]: E1213 13:30:19.858991 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" containerName="mount-bpf-fs" Dec 13 13:30:19.859545 kubelet[2684]: E1213 13:30:19.859002 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" containerName="apply-sysctl-overwrites" Dec 13 13:30:19.859545 kubelet[2684]: E1213 13:30:19.859023 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" containerName="cilium-agent" Dec 13 13:30:19.859545 kubelet[2684]: I1213 13:30:19.859287 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec217df5-e1f4-4c1b-bcdb-592ea88c86bb" containerName="cilium-agent" Dec 13 13:30:19.859545 kubelet[2684]: I1213 13:30:19.859342 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea20f771-60c4-426c-a541-764e4dcc998d" containerName="cilium-operator" Dec 13 13:30:19.917452 systemd[1]: Created slice kubepods-burstable-pod764ce7ce_dcbc_46f0_aecc_6187fd31ba74.slice - libcontainer container kubepods-burstable-pod764ce7ce_dcbc_46f0_aecc_6187fd31ba74.slice. 
Dec 13 13:30:19.974869 kubelet[2684]: I1213 13:30:19.974789 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-cilium-cgroup\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975063 kubelet[2684]: I1213 13:30:19.974903 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-host-proc-sys-kernel\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975063 kubelet[2684]: I1213 13:30:19.974959 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-hostproc\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975063 kubelet[2684]: I1213 13:30:19.975008 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-clustermesh-secrets\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975273 kubelet[2684]: I1213 13:30:19.975053 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-hubble-tls\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975273 kubelet[2684]: I1213 13:30:19.975118 2684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-lib-modules\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975273 kubelet[2684]: I1213 13:30:19.975163 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-cilium-run\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975273 kubelet[2684]: I1213 13:30:19.975214 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-bpf-maps\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975273 kubelet[2684]: I1213 13:30:19.975261 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-xtables-lock\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975590 kubelet[2684]: I1213 13:30:19.975321 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-cilium-ipsec-secrets\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975590 kubelet[2684]: I1213 13:30:19.975373 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-etc-cni-netd\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975590 kubelet[2684]: I1213 13:30:19.975460 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x6ng\" (UniqueName: \"kubernetes.io/projected/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-kube-api-access-6x6ng\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975590 kubelet[2684]: I1213 13:30:19.975495 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-cni-path\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975590 kubelet[2684]: I1213 13:30:19.975551 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-cilium-config-path\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:19.975987 kubelet[2684]: I1213 13:30:19.975600 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/764ce7ce-dcbc-46f0-aecc-6187fd31ba74-host-proc-sys-net\") pod \"cilium-lfhsv\" (UID: \"764ce7ce-dcbc-46f0-aecc-6187fd31ba74\") " pod="kube-system/cilium-lfhsv" Dec 13 13:30:20.155167 sshd[4469]: Accepted publickey for core from 147.75.109.163 port 52058 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE Dec 13 13:30:20.157126 sshd-session[4469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:20.163803 
systemd-logind[1472]: New session 28 of user core. Dec 13 13:30:20.168871 systemd[1]: Started session-28.scope - Session 28 of User core. Dec 13 13:30:20.242834 containerd[1491]: time="2024-12-13T13:30:20.242769345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lfhsv,Uid:764ce7ce-dcbc-46f0-aecc-6187fd31ba74,Namespace:kube-system,Attempt:0,}" Dec 13 13:30:20.294341 containerd[1491]: time="2024-12-13T13:30:20.294165537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:30:20.294541 containerd[1491]: time="2024-12-13T13:30:20.294332064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:30:20.294541 containerd[1491]: time="2024-12-13T13:30:20.294417216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:20.296607 containerd[1491]: time="2024-12-13T13:30:20.296343956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:30:20.329922 systemd[1]: Started cri-containerd-be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b.scope - libcontainer container be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b. Dec 13 13:30:20.375029 sshd[4475]: Connection closed by 147.75.109.163 port 52058 Dec 13 13:30:20.376106 sshd-session[4469]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:20.382373 systemd-logind[1472]: Session 28 logged out. Waiting for processes to exit. Dec 13 13:30:20.383473 systemd[1]: sshd@28-10.128.0.84:22-147.75.109.163:52058.service: Deactivated successfully. Dec 13 13:30:20.388387 systemd[1]: session-28.scope: Deactivated successfully. 
Dec 13 13:30:20.392181 containerd[1491]: time="2024-12-13T13:30:20.389648453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lfhsv,Uid:764ce7ce-dcbc-46f0-aecc-6187fd31ba74,Namespace:kube-system,Attempt:0,} returns sandbox id \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\"" Dec 13 13:30:20.393595 systemd-logind[1472]: Removed session 28. Dec 13 13:30:20.397955 containerd[1491]: time="2024-12-13T13:30:20.397656313Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 13:30:20.418223 containerd[1491]: time="2024-12-13T13:30:20.418072867Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"209a5c42c649b3142d48bb1140e8955191535eecdec47ceccd182b299d9da01f\"" Dec 13 13:30:20.420560 containerd[1491]: time="2024-12-13T13:30:20.419018630Z" level=info msg="StartContainer for \"209a5c42c649b3142d48bb1140e8955191535eecdec47ceccd182b299d9da01f\"" Dec 13 13:30:20.436148 systemd[1]: Started sshd@29-10.128.0.84:22-147.75.109.163:52064.service - OpenSSH per-connection server daemon (147.75.109.163:52064). Dec 13 13:30:20.471908 systemd[1]: Started cri-containerd-209a5c42c649b3142d48bb1140e8955191535eecdec47ceccd182b299d9da01f.scope - libcontainer container 209a5c42c649b3142d48bb1140e8955191535eecdec47ceccd182b299d9da01f. 
Dec 13 13:30:20.524527 kubelet[2684]: E1213 13:30:20.524348 2684 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 13:30:20.524925 containerd[1491]: time="2024-12-13T13:30:20.524872575Z" level=info msg="StartContainer for \"209a5c42c649b3142d48bb1140e8955191535eecdec47ceccd182b299d9da01f\" returns successfully" Dec 13 13:30:20.578952 systemd[1]: cri-containerd-209a5c42c649b3142d48bb1140e8955191535eecdec47ceccd182b299d9da01f.scope: Deactivated successfully. Dec 13 13:30:20.641816 containerd[1491]: time="2024-12-13T13:30:20.641419839Z" level=info msg="shim disconnected" id=209a5c42c649b3142d48bb1140e8955191535eecdec47ceccd182b299d9da01f namespace=k8s.io Dec 13 13:30:20.641816 containerd[1491]: time="2024-12-13T13:30:20.641505450Z" level=warning msg="cleaning up after shim disconnected" id=209a5c42c649b3142d48bb1140e8955191535eecdec47ceccd182b299d9da01f namespace=k8s.io Dec 13 13:30:20.641816 containerd[1491]: time="2024-12-13T13:30:20.641567603Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:20.800521 sshd[4526]: Accepted publickey for core from 147.75.109.163 port 52064 ssh2: RSA SHA256:cNKX4Hvnd9CzZlNWRKUhmWuECv5diNIlM1/aFPCcnqE Dec 13 13:30:20.802531 sshd-session[4526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:30:20.819043 systemd-logind[1472]: New session 29 of user core. Dec 13 13:30:20.828047 systemd[1]: Started session-29.scope - Session 29 of User core. 
Dec 13 13:30:21.112531 containerd[1491]: time="2024-12-13T13:30:21.110289827Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 13:30:21.134925 containerd[1491]: time="2024-12-13T13:30:21.134844960Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09\"" Dec 13 13:30:21.137531 containerd[1491]: time="2024-12-13T13:30:21.135948476Z" level=info msg="StartContainer for \"d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09\"" Dec 13 13:30:21.228235 systemd[1]: Started cri-containerd-d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09.scope - libcontainer container d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09. Dec 13 13:30:21.301157 containerd[1491]: time="2024-12-13T13:30:21.301027123Z" level=info msg="StartContainer for \"d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09\" returns successfully" Dec 13 13:30:21.325146 systemd[1]: cri-containerd-d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09.scope: Deactivated successfully. 
Dec 13 13:30:21.365084 containerd[1491]: time="2024-12-13T13:30:21.364897718Z" level=info msg="shim disconnected" id=d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09 namespace=k8s.io Dec 13 13:30:21.365084 containerd[1491]: time="2024-12-13T13:30:21.364994737Z" level=warning msg="cleaning up after shim disconnected" id=d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09 namespace=k8s.io Dec 13 13:30:21.365084 containerd[1491]: time="2024-12-13T13:30:21.365009240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:22.091523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d554266c3dd37824bb5888c89567d5eb63ae4de3a1cff9934214ca2927313a09-rootfs.mount: Deactivated successfully. Dec 13 13:30:22.112973 containerd[1491]: time="2024-12-13T13:30:22.112608879Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 13:30:22.144547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2346693592.mount: Deactivated successfully. Dec 13 13:30:22.148416 containerd[1491]: time="2024-12-13T13:30:22.148296023Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b\"" Dec 13 13:30:22.149781 containerd[1491]: time="2024-12-13T13:30:22.149741395Z" level=info msg="StartContainer for \"98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b\"" Dec 13 13:30:22.221300 systemd[1]: Started cri-containerd-98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b.scope - libcontainer container 98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b. 
Dec 13 13:30:22.331729 containerd[1491]: time="2024-12-13T13:30:22.331650713Z" level=info msg="StartContainer for \"98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b\" returns successfully" Dec 13 13:30:22.343467 systemd[1]: cri-containerd-98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b.scope: Deactivated successfully. Dec 13 13:30:22.381934 containerd[1491]: time="2024-12-13T13:30:22.381839554Z" level=info msg="shim disconnected" id=98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b namespace=k8s.io Dec 13 13:30:22.381934 containerd[1491]: time="2024-12-13T13:30:22.381913825Z" level=warning msg="cleaning up after shim disconnected" id=98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b namespace=k8s.io Dec 13 13:30:22.381934 containerd[1491]: time="2024-12-13T13:30:22.381927265Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:23.092903 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98a303e0c5e06de68c6462080ec6b7cb83730283c797edb08d2e45ce5edc5b5b-rootfs.mount: Deactivated successfully. 
Dec 13 13:30:23.126255 containerd[1491]: time="2024-12-13T13:30:23.126189227Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 13:30:23.187832 containerd[1491]: time="2024-12-13T13:30:23.186659332Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b\"" Dec 13 13:30:23.194507 containerd[1491]: time="2024-12-13T13:30:23.194396247Z" level=info msg="StartContainer for \"e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b\"" Dec 13 13:30:23.380800 systemd[1]: Started cri-containerd-e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b.scope - libcontainer container e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b. Dec 13 13:30:23.394214 kubelet[2684]: I1213 13:30:23.394091 2684 setters.go:600] "Node became not ready" node="ci-4186-0-0-2fe7670bebb044e8263e.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T13:30:23Z","lastTransitionTime":"2024-12-13T13:30:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 13:30:23.487861 containerd[1491]: time="2024-12-13T13:30:23.487804784Z" level=info msg="StartContainer for \"e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b\" returns successfully" Dec 13 13:30:23.488556 systemd[1]: cri-containerd-e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b.scope: Deactivated successfully. 
Dec 13 13:30:23.554158 containerd[1491]: time="2024-12-13T13:30:23.553948472Z" level=info msg="shim disconnected" id=e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b namespace=k8s.io Dec 13 13:30:23.554158 containerd[1491]: time="2024-12-13T13:30:23.554158857Z" level=warning msg="cleaning up after shim disconnected" id=e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b namespace=k8s.io Dec 13 13:30:23.554964 containerd[1491]: time="2024-12-13T13:30:23.554208483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:30:23.584487 containerd[1491]: time="2024-12-13T13:30:23.583955386Z" level=warning msg="cleanup warnings time=\"2024-12-13T13:30:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 13:30:24.091778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1e1bca68ec01cbd5f33a05ee1677e45ec5443b22ffc3fe291bd7427fa810d6b-rootfs.mount: Deactivated successfully. 
Dec 13 13:30:24.129079 containerd[1491]: time="2024-12-13T13:30:24.129020725Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 13:30:24.161759 containerd[1491]: time="2024-12-13T13:30:24.161652233Z" level=info msg="CreateContainer within sandbox \"be337fe0c55d05b270ea4eb83904e45750a06b2c2888b9cf369ca0a6d38dfc2b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f2ebea025d4cc961ad833a71f6314f0195fed0d5e5210e51ea74a0738791bcdc\"" Dec 13 13:30:24.167477 containerd[1491]: time="2024-12-13T13:30:24.164779070Z" level=info msg="StartContainer for \"f2ebea025d4cc961ad833a71f6314f0195fed0d5e5210e51ea74a0738791bcdc\"" Dec 13 13:30:24.255922 systemd[1]: Started cri-containerd-f2ebea025d4cc961ad833a71f6314f0195fed0d5e5210e51ea74a0738791bcdc.scope - libcontainer container f2ebea025d4cc961ad833a71f6314f0195fed0d5e5210e51ea74a0738791bcdc. Dec 13 13:30:24.311481 containerd[1491]: time="2024-12-13T13:30:24.311409665Z" level=info msg="StartContainer for \"f2ebea025d4cc961ad833a71f6314f0195fed0d5e5210e51ea74a0738791bcdc\" returns successfully" Dec 13 13:30:24.344384 kubelet[2684]: E1213 13:30:24.342950 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-j2p89" podUID="553d606b-2534-494e-9c36-28c85fe0c28b" Dec 13 13:30:25.095378 systemd[1]: run-containerd-runc-k8s.io-f2ebea025d4cc961ad833a71f6314f0195fed0d5e5210e51ea74a0738791bcdc-runc.yFt9qy.mount: Deactivated successfully. 
Dec 13 13:30:25.224413 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 13:30:27.771150 kubelet[2684]: E1213 13:30:27.770291 2684 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47788->127.0.0.1:46367: write tcp 127.0.0.1:47788->127.0.0.1:46367: write: broken pipe Dec 13 13:30:29.373456 systemd-networkd[1406]: lxc_health: Link UP Dec 13 13:30:29.377915 systemd-networkd[1406]: lxc_health: Gained carrier Dec 13 13:30:30.320213 containerd[1491]: time="2024-12-13T13:30:30.320032174Z" level=info msg="StopPodSandbox for \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\"" Dec 13 13:30:30.328526 containerd[1491]: time="2024-12-13T13:30:30.321829872Z" level=info msg="TearDown network for sandbox \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\" successfully" Dec 13 13:30:30.328526 containerd[1491]: time="2024-12-13T13:30:30.321905149Z" level=info msg="StopPodSandbox for \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\" returns successfully" Dec 13 13:30:30.328526 containerd[1491]: time="2024-12-13T13:30:30.326187669Z" level=info msg="RemovePodSandbox for \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\"" Dec 13 13:30:30.328526 containerd[1491]: time="2024-12-13T13:30:30.326326553Z" level=info msg="Forcibly stopping sandbox \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\"" Dec 13 13:30:30.330081 containerd[1491]: time="2024-12-13T13:30:30.326459561Z" level=info msg="TearDown network for sandbox \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\" successfully" Dec 13 13:30:30.343146 containerd[1491]: time="2024-12-13T13:30:30.342955607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:30:30.343731 containerd[1491]: time="2024-12-13T13:30:30.343214680Z" level=info msg="RemovePodSandbox \"c48b283a106b2885fbc6c3f916db2fe8878fb32b4555ca248be3c80ca490c3f1\" returns successfully" Dec 13 13:30:30.345330 containerd[1491]: time="2024-12-13T13:30:30.345221305Z" level=info msg="StopPodSandbox for \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\"" Dec 13 13:30:30.345732 containerd[1491]: time="2024-12-13T13:30:30.345484055Z" level=info msg="TearDown network for sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" successfully" Dec 13 13:30:30.346794 containerd[1491]: time="2024-12-13T13:30:30.345652873Z" level=info msg="StopPodSandbox for \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" returns successfully" Dec 13 13:30:30.348038 containerd[1491]: time="2024-12-13T13:30:30.347999335Z" level=info msg="RemovePodSandbox for \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\"" Dec 13 13:30:30.348268 containerd[1491]: time="2024-12-13T13:30:30.348083576Z" level=info msg="Forcibly stopping sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\"" Dec 13 13:30:30.348522 containerd[1491]: time="2024-12-13T13:30:30.348267839Z" level=info msg="TearDown network for sandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" successfully" Dec 13 13:30:30.365724 containerd[1491]: time="2024-12-13T13:30:30.365619867Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:30:30.365892 containerd[1491]: time="2024-12-13T13:30:30.365854584Z" level=info msg="RemovePodSandbox \"8ffc653a815571eeca6a4fc61c364f2df7574af753bc13da0741ee0d1e11b822\" returns successfully" Dec 13 13:30:30.409935 kubelet[2684]: I1213 13:30:30.409797 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lfhsv" podStartSLOduration=11.409768146 podStartE2EDuration="11.409768146s" podCreationTimestamp="2024-12-13 13:30:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 13:30:25.264326366 +0000 UTC m=+115.087805449" watchObservedRunningTime="2024-12-13 13:30:30.409768146 +0000 UTC m=+120.233247231" Dec 13 13:30:30.871246 systemd-networkd[1406]: lxc_health: Gained IPv6LL Dec 13 13:30:33.305482 ntpd[1459]: Listen normally on 14 lxc_health [fe80::7c36:9eff:fec7:92bb%14]:123 Dec 13 13:30:35.606708 sshd[4588]: Connection closed by 147.75.109.163 port 52064 Dec 13 13:30:35.610162 sshd-session[4526]: pam_unix(sshd:session): session closed for user core Dec 13 13:30:35.625143 systemd[1]: sshd@29-10.128.0.84:22-147.75.109.163:52064.service: Deactivated successfully. Dec 13 13:30:35.636664 systemd[1]: session-29.scope: Deactivated successfully. Dec 13 13:30:35.642456 systemd-logind[1472]: Session 29 logged out. Waiting for processes to exit. Dec 13 13:30:35.645900 systemd-logind[1472]: Removed session 29.