Jan 29 11:23:15.099127 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 11:23:15.099177 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:23:15.099196 kernel: BIOS-provided physical RAM map: Jan 29 11:23:15.099211 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 29 11:23:15.099224 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 29 11:23:15.099238 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 29 11:23:15.099256 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 29 11:23:15.099275 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 29 11:23:15.099290 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd324fff] usable Jan 29 11:23:15.099304 kernel: BIOS-e820: [mem 0x00000000bd325000-0x00000000bd32dfff] ACPI data Jan 29 11:23:15.099329 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable Jan 29 11:23:15.099344 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jan 29 11:23:15.099359 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 29 11:23:15.099375 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 29 11:23:15.099398 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 29 11:23:15.099415 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 29 11:23:15.099432 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 29 11:23:15.099448 kernel: NX (Execute Disable) protection: active Jan 29 11:23:15.099465 kernel: APIC: Static calls initialized Jan 29 11:23:15.099481 kernel: efi: EFI v2.7 by EDK II Jan 29 11:23:15.099498 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd325018 Jan 29 11:23:15.099515 kernel: random: crng init done Jan 29 11:23:15.099531 kernel: secureboot: Secure boot disabled Jan 29 11:23:15.099547 kernel: SMBIOS 2.4 present. 
Jan 29 11:23:15.099567 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 29 11:23:15.099583 kernel: Hypervisor detected: KVM Jan 29 11:23:15.099600 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:23:15.099616 kernel: kvm-clock: using sched offset of 13405899029 cycles Jan 29 11:23:15.099634 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:23:15.099650 kernel: tsc: Detected 2299.998 MHz processor Jan 29 11:23:15.099666 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:23:15.099682 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:23:15.099699 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 29 11:23:15.099720 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 29 11:23:15.099737 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:23:15.099753 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 29 11:23:15.099769 kernel: Using GB pages for direct mapping Jan 29 11:23:15.099785 kernel: ACPI: Early table checksum verification disabled Jan 29 11:23:15.099826 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 29 11:23:15.099844 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 29 11:23:15.099868 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 29 11:23:15.099890 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 29 11:23:15.099908 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 29 11:23:15.099926 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 29 11:23:15.099944 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 29 11:23:15.099963 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 29 11:23:15.099981 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 29 11:23:15.100002 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 29 11:23:15.100020 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 29 11:23:15.100038 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 29 11:23:15.100057 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 29 11:23:15.100075 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 29 11:23:15.100092 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 29 11:23:15.100110 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 29 11:23:15.100128 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 29 11:23:15.100146 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 29 11:23:15.100168 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 29 11:23:15.100186 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 29 11:23:15.100204 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:23:15.100223 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 11:23:15.100240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:23:15.100258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 29 11:23:15.100276 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 29 11:23:15.100295 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 29 11:23:15.100313 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 29 11:23:15.100345 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 29 11:23:15.100363 kernel: Zone ranges: Jan 29 11:23:15.100381 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:23:15.100399 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 11:23:15.100417 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 29 11:23:15.100434 kernel: Movable zone start for each node Jan 29 11:23:15.100452 kernel: Early memory node ranges Jan 29 11:23:15.100471 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 29 11:23:15.100489 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 29 11:23:15.100507 kernel: node 0: [mem 0x0000000000100000-0x00000000bd324fff] Jan 29 11:23:15.100529 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff] Jan 29 11:23:15.100547 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 29 11:23:15.100565 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 29 11:23:15.100582 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 29 11:23:15.100600 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:23:15.100619 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 29 11:23:15.100636 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 29 11:23:15.100655 kernel: On node 0, zone DMA32: 9 pages in unavailable ranges Jan 29 11:23:15.100673 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 29 11:23:15.100695 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 29 11:23:15.100713 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 11:23:15.100731 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:23:15.100749 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:23:15.100767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:23:15.100786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:23:15.100828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:23:15.100843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:23:15.100858 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:23:15.100879 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 11:23:15.100894 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 29 11:23:15.100909 kernel: Booting paravirtualized kernel on KVM Jan 29 11:23:15.100924 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:23:15.100939 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 11:23:15.100957 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 11:23:15.100973 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 11:23:15.100990 kernel: pcpu-alloc: [0] 0 1 Jan 29 11:23:15.101004 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:23:15.101025 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:23:15.101041 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:23:15.101056 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:23:15.101071 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 29 11:23:15.101088 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:23:15.101105 kernel: Fallback order for Node 0: 0 Jan 29 11:23:15.101123 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932271 Jan 29 11:23:15.101140 kernel: Policy zone: Normal Jan 29 11:23:15.101163 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:23:15.101179 kernel: software IO TLB: area num 2. Jan 29 11:23:15.101195 kernel: Memory: 7513352K/7860548K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 346940K reserved, 0K cma-reserved) Jan 29 11:23:15.101210 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:23:15.101228 kernel: Kernel/User page tables isolation: enabled Jan 29 11:23:15.101245 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 11:23:15.101262 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:23:15.101280 kernel: Dynamic Preempt: voluntary Jan 29 11:23:15.101324 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:23:15.101344 kernel: rcu: RCU event tracing is enabled. Jan 29 11:23:15.101363 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:23:15.101386 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:23:15.101404 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:23:15.101423 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:23:15.101440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:23:15.101459 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:23:15.101478 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 11:23:15.101500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 11:23:15.101518 kernel: Console: colour dummy device 80x25 Jan 29 11:23:15.101537 kernel: printk: console [ttyS0] enabled Jan 29 11:23:15.101555 kernel: ACPI: Core revision 20230628 Jan 29 11:23:15.101574 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:23:15.101592 kernel: x2apic enabled Jan 29 11:23:15.101611 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:23:15.101628 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 29 11:23:15.101647 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 11:23:15.101670 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 29 11:23:15.101689 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 29 11:23:15.101707 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 29 11:23:15.101726 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:23:15.101744 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 11:23:15.101763 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 11:23:15.101781 kernel: Spectre V2 : Mitigation: IBRS Jan 29 11:23:15.101840 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:23:15.101863 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:23:15.101881 kernel: RETBleed: Mitigation: IBRS Jan 29 11:23:15.101900 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:23:15.101918 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 29 11:23:15.101937 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:23:15.101956 kernel: MDS: Mitigation: Clear CPU buffers Jan 29 11:23:15.101974 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:23:15.101992 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:23:15.102010 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:23:15.102032 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:23:15.102050 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:23:15.102067 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 29 11:23:15.102086 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:23:15.102104 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:23:15.102122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:23:15.102141 kernel: landlock: Up and running. Jan 29 11:23:15.102158 kernel: SELinux: Initializing. Jan 29 11:23:15.102173 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:23:15.102192 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:23:15.102208 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 29 11:23:15.102224 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:23:15.102241 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:23:15.102258 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:23:15.102277 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 29 11:23:15.102295 kernel: signal: max sigframe size: 1776 Jan 29 11:23:15.102322 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:23:15.102341 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:23:15.102365 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:23:15.102382 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:23:15.102398 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:23:15.102415 kernel: .... node #0, CPUs: #1 Jan 29 11:23:15.102434 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 11:23:15.102452 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 11:23:15.102469 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:23:15.102485 kernel: smpboot: Max logical packages: 1 Jan 29 11:23:15.102506 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 29 11:23:15.102525 kernel: devtmpfs: initialized Jan 29 11:23:15.102543 kernel: x86/mm: Memory block size: 128MB Jan 29 11:23:15.102562 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 29 11:23:15.102580 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:23:15.102599 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:23:15.102617 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:23:15.102635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:23:15.102655 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:23:15.102678 kernel: audit: type=2000 audit(1738149793.687:1): state=initialized audit_enabled=0 res=1 Jan 29 11:23:15.102697 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:23:15.102716 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:23:15.102734 kernel: cpuidle: using governor menu Jan 29 11:23:15.102753 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:23:15.102772 kernel: dca service started, version 1.12.1 Jan 29 11:23:15.102805 kernel: PCI: Using configuration type 1 for base access Jan 29 11:23:15.102824 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:23:15.102843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:23:15.102889 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:23:15.102909 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:23:15.102927 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:23:15.102944 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:23:15.102962 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:23:15.102981 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:23:15.102999 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:23:15.103018 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 11:23:15.103036 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:23:15.103060 kernel: ACPI: Interpreter enabled Jan 29 11:23:15.103079 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 11:23:15.103099 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:23:15.103119 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:23:15.103138 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 29 11:23:15.103156 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 11:23:15.103175 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:23:15.103466 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:23:15.103695 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 11:23:15.103900 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 11:23:15.103926 kernel: PCI host bridge to bus 0000:00 Jan 29 11:23:15.104106 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:23:15.104275 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:23:15.104452 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:23:15.104617 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 29 11:23:15.104802 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:23:15.105021 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 11:23:15.105239 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 29 11:23:15.105459 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 11:23:15.105641 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 11:23:15.105853 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 29 11:23:15.106045 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 29 11:23:15.106222 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 29 11:23:15.106417 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:23:15.106612 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 29 11:23:15.106864 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 29 11:23:15.107073 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 11:23:15.107260 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 29 11:23:15.107463 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 29 11:23:15.107488 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:23:15.107508 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:23:15.107527 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:23:15.107547 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:23:15.107566 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 11:23:15.107586 kernel: iommu: Default domain type: Translated Jan 29 11:23:15.107606 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:23:15.107625 kernel: efivars: Registered efivars operations Jan 29 11:23:15.107650 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:23:15.107670 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:23:15.107689 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 29 11:23:15.107709 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 29 11:23:15.107729 kernel: e820: reserve RAM buffer [mem 0xbd325000-0xbfffffff] Jan 29 11:23:15.107747 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 29 11:23:15.107766 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 29 11:23:15.107786 kernel: vgaarb: loaded Jan 29 11:23:15.107850 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:23:15.107874 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:23:15.107891 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:23:15.107909 kernel: pnp: PnP ACPI init Jan 29 11:23:15.107925 kernel: pnp: PnP ACPI: found 7 devices Jan 29 11:23:15.107944 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:23:15.107962 kernel: NET: Registered PF_INET protocol family Jan 29 11:23:15.107979 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:23:15.107996 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 29 11:23:15.108018 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:23:15.108037 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:23:15.108056 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 11:23:15.108074 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 29 11:23:15.108091 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 11:23:15.108109 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 11:23:15.108127 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:23:15.108144 kernel: NET: Registered PF_XDP protocol family Jan 29 11:23:15.108341 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:23:15.108517 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:23:15.108688 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:23:15.108904 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 29 11:23:15.109097 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:23:15.109123 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:23:15.109141 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 11:23:15.109160 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 29 11:23:15.109186 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:23:15.109206 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 11:23:15.109224 kernel: clocksource: Switched to clocksource tsc Jan 
29 11:23:15.109244 kernel: Initialise system trusted keyrings Jan 29 11:23:15.109263 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 29 11:23:15.109283 kernel: Key type asymmetric registered Jan 29 11:23:15.109302 kernel: Asymmetric key parser 'x509' registered Jan 29 11:23:15.109334 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:23:15.109354 kernel: io scheduler mq-deadline registered Jan 29 11:23:15.109378 kernel: io scheduler kyber registered Jan 29 11:23:15.109398 kernel: io scheduler bfq registered Jan 29 11:23:15.109418 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:23:15.109438 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 11:23:15.109635 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 29 11:23:15.109659 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 29 11:23:15.109860 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 29 11:23:15.109885 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 11:23:15.110067 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 29 11:23:15.110096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:23:15.110114 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:23:15.110134 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 11:23:15.110153 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 29 11:23:15.110171 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 29 11:23:15.110376 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 29 11:23:15.110404 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:23:15.110422 kernel: i8042: Warning: Keylock active Jan 29 11:23:15.110446 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:23:15.110465 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:23:15.110674 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 29 11:23:15.110865 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 11:23:15.111048 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T11:23:14 UTC (1738149794) Jan 29 11:23:15.111218 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 11:23:15.111241 kernel: intel_pstate: CPU model not supported Jan 29 11:23:15.111261 kernel: pstore: Using crash dump compression: deflate Jan 29 11:23:15.111285 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 11:23:15.111304 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:23:15.111330 kernel: Segment Routing with IPv6 Jan 29 11:23:15.111348 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:23:15.111368 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:23:15.111387 kernel: Key type dns_resolver registered Jan 29 11:23:15.111403 kernel: IPI shorthand broadcast: enabled Jan 29 11:23:15.111419 kernel: sched_clock: Marking stable (880004056, 177498390)->(1095900485, -38398039) Jan 29 11:23:15.111437 kernel: registered taskstats version 1 Jan 29 11:23:15.111462 kernel: Loading compiled-in X.509 certificates Jan 29 11:23:15.111481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 11:23:15.111501 kernel: Key type .fscrypt registered Jan 29 11:23:15.111520 kernel: Key type fscrypt-provisioning registered Jan 29 11:23:15.111540 kernel: ima: Allocated hash algorithm: 
sha1 Jan 29 11:23:15.111559 kernel: ima: No architecture policies found Jan 29 11:23:15.111576 kernel: clk: Disabling unused clocks Jan 29 11:23:15.111597 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:23:15.111616 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:23:15.111640 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:23:15.111656 kernel: Run /init as init process Jan 29 11:23:15.111673 kernel: with arguments: Jan 29 11:23:15.111690 kernel: /init Jan 29 11:23:15.111706 kernel: with environment: Jan 29 11:23:15.111724 kernel: HOME=/ Jan 29 11:23:15.111740 kernel: TERM=linux Jan 29 11:23:15.111759 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:23:15.111781 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:23:15.111842 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:23:15.111865 systemd[1]: Detected virtualization google. Jan 29 11:23:15.111885 systemd[1]: Detected architecture x86-64. Jan 29 11:23:15.111904 systemd[1]: Running in initrd. Jan 29 11:23:15.111923 systemd[1]: No hostname configured, using default hostname. Jan 29 11:23:15.111942 systemd[1]: Hostname set to <localhost>. Jan 29 11:23:15.111966 systemd[1]: Initializing machine ID from random generator. Jan 29 11:23:15.111985 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:23:15.112005 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:23:15.112025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:23:15.112045 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:23:15.112064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:23:15.112084 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:23:15.112104 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:23:15.112134 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:23:15.112171 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:23:15.112195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:23:15.112215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:23:15.112235 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:23:15.112259 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:23:15.112279 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:23:15.112299 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:23:15.112329 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:23:15.112349 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:23:15.112370 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 29 11:23:15.112390 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:23:15.112410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:23:15.112431 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:23:15.112455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:23:15.112475 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:23:15.112495 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:23:15.112516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:23:15.112536 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:23:15.112556 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:23:15.112576 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:23:15.112597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:23:15.112621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:23:15.112684 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 11:23:15.112726 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:23:15.112747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:23:15.112771 systemd-journald[184]: Journal started Jan 29 11:23:15.112836 systemd-journald[184]: Runtime Journal (/run/log/journal/090433178c3d471c94aa12d6824efba4) is 8.0M, max 148.7M, 140.7M free. Jan 29 11:23:15.117862 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:23:15.121230 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 11:23:15.130963 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:23:15.137086 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:23:15.149166 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:23:15.162546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:15.174625 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:23:15.191110 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:23:15.191147 kernel: Bridge firewalling registered Jan 29 11:23:15.180225 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:23:15.182905 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 11:23:15.185450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:23:15.202374 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:23:15.212136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:23:15.217014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:23:15.239880 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:23:15.250032 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:23:15.254265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:23:15.262220 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:23:15.273002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:23:15.293824 dracut-cmdline[214]: dracut-dracut-053 Jan 29 11:23:15.298896 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:23:15.327866 systemd-resolved[217]: Positive Trust Anchors: Jan 29 11:23:15.328435 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:23:15.328505 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:23:15.336145 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 29 11:23:15.341010 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:23:15.351596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:23:15.404839 kernel: SCSI subsystem initialized Jan 29 11:23:15.415841 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:23:15.428828 kernel: iscsi: registered transport (tcp) Jan 29 11:23:15.452837 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:23:15.452934 kernel: QLogic iSCSI HBA Driver Jan 29 11:23:15.505262 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:23:15.512086 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:23:15.590915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:23:15.591006 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:23:15.591035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:23:15.648832 kernel: raid6: avx2x4 gen() 17857 MB/s Jan 29 11:23:15.669836 kernel: raid6: avx2x2 gen() 17780 MB/s Jan 29 11:23:15.695878 kernel: raid6: avx2x1 gen() 13949 MB/s Jan 29 11:23:15.695967 kernel: raid6: using algorithm avx2x4 gen() 17857 MB/s Jan 29 11:23:15.722839 kernel: raid6: .... xor() 6848 MB/s, rmw enabled Jan 29 11:23:15.722935 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:23:15.751833 kernel: xor: automatically using best checksumming function avx Jan 29 11:23:15.938833 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:23:15.952541 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:23:15.959154 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:23:16.014207 systemd-udevd[400]: Using default interface naming scheme 'v255'. 
Jan 29 11:23:16.021167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:23:16.052017 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:23:16.072991 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Jan 29 11:23:16.109928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:23:16.135043 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:23:16.240021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:23:16.258615 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:23:16.317041 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:23:16.338219 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:23:16.354058 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:23:16.353869 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:23:16.427521 kernel: scsi host0: Virtio SCSI HBA Jan 29 11:23:16.427836 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 29 11:23:16.383512 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:23:16.456967 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:23:16.457007 kernel: AES CTR mode by8 optimization enabled Jan 29 11:23:16.420036 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:23:16.491477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:23:16.519892 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 29 11:23:16.581301 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 29 11:23:16.581579 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 29 11:23:16.581818 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 29 11:23:16.582044 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 11:23:16.582285 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:23:16.582313 kernel: GPT:17805311 != 25165823 Jan 29 11:23:16.582337 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:23:16.582360 kernel: GPT:17805311 != 25165823 Jan 29 11:23:16.582382 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:23:16.582406 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:23:16.582430 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 29 11:23:16.491690 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:23:16.519743 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:23:16.576056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:23:16.657986 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455) Jan 29 11:23:16.658028 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (457) Jan 29 11:23:16.576283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:16.593965 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:23:16.612751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:23:16.687434 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:23:16.719245 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:16.751162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 29 11:23:16.757969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 29 11:23:16.777863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 29 11:23:16.797869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 29 11:23:16.835979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 29 11:23:16.843006 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:23:16.874077 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:23:16.899492 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:23:16.899696 disk-uuid[542]: Primary Header is updated. Jan 29 11:23:16.899696 disk-uuid[542]: Secondary Entries is updated. Jan 29 11:23:16.899696 disk-uuid[542]: Secondary Header is updated. Jan 29 11:23:16.928830 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:23:16.971558 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:23:17.942840 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:23:17.943858 disk-uuid[544]: The operation has completed successfully. Jan 29 11:23:18.015195 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:23:18.015345 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:23:18.056036 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:23:18.086949 sh[566]: Success Jan 29 11:23:18.110832 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:23:18.207594 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:23:18.214956 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:23:18.241422 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:23:18.293088 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:23:18.293197 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:23:18.293223 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:23:18.302532 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:23:18.309383 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:23:18.343825 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:23:18.350017 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:23:18.351027 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:23:18.357078 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:23:18.379176 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 29 11:23:18.445984 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:18.446029 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:23:18.446076 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:23:18.446101 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:23:18.446124 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:23:18.466831 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:18.481143 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:23:18.508226 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:23:18.527652 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:23:18.537045 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:23:18.621441 systemd-networkd[748]: lo: Link UP Jan 29 11:23:18.621455 systemd-networkd[748]: lo: Gained carrier Jan 29 11:23:18.623867 systemd-networkd[748]: Enumeration completed Jan 29 11:23:18.624035 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:23:18.624633 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:23:18.624641 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:23:18.626880 systemd-networkd[748]: eth0: Link UP Jan 29 11:23:18.626887 systemd-networkd[748]: eth0: Gained carrier Jan 29 11:23:18.626901 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:23:18.719030 ignition[729]: Ignition 2.20.0 Jan 29 11:23:18.646933 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.21/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 29 11:23:18.719051 ignition[729]: Stage: fetch-offline Jan 29 11:23:18.672084 systemd[1]: Reached target network.target - Network. Jan 29 11:23:18.719123 ignition[729]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:18.726333 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:23:18.719139 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:18.762055 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:23:18.719276 ignition[729]: parsed url from cmdline: "" Jan 29 11:23:18.811440 unknown[758]: fetched base config from "system" Jan 29 11:23:18.719283 ignition[729]: no config URL provided Jan 29 11:23:18.811462 unknown[758]: fetched base config from "system" Jan 29 11:23:18.719292 ignition[729]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:23:18.811474 unknown[758]: fetched user config from "gcp" Jan 29 11:23:18.719304 ignition[729]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:23:18.828472 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:23:18.719313 ignition[729]: failed to fetch config: resource requires networking Jan 29 11:23:18.856106 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:23:18.719755 ignition[729]: Ignition finished successfully Jan 29 11:23:18.881187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 29 11:23:18.798228 ignition[758]: Ignition 2.20.0 Jan 29 11:23:18.903052 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:23:18.798241 ignition[758]: Stage: fetch Jan 29 11:23:18.950202 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:23:18.798443 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:18.966323 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:23:18.798456 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:18.985103 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:23:18.798574 ignition[758]: parsed url from cmdline: "" Jan 29 11:23:19.000036 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:23:18.798581 ignition[758]: no config URL provided Jan 29 11:23:19.017112 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:23:18.798591 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:23:19.017259 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:23:18.798602 ignition[758]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:23:19.048091 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:23:18.798628 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 29 11:23:18.802852 ignition[758]: GET result: OK Jan 29 11:23:18.802945 ignition[758]: parsing config with SHA512: 42ec37db54c5b7643c549cd6705f10935cca3c16c8a391c77af8aaafc6282342ff66881319dca234bc8aebea4bd8d3ecfb382241b6e3bf82a7f9f92569d304f5 Jan 29 11:23:18.812469 ignition[758]: fetch: fetch complete Jan 29 11:23:18.812479 ignition[758]: fetch: fetch passed Jan 29 11:23:18.812559 ignition[758]: Ignition finished successfully Jan 29 11:23:18.878581 ignition[765]: Ignition 2.20.0 Jan 29 11:23:18.878593 ignition[765]: Stage: kargs Jan 29 11:23:18.878822 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:18.878837 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:18.879880 ignition[765]: kargs: kargs passed Jan 29 11:23:18.879939 ignition[765]: Ignition finished successfully Jan 29 11:23:18.947254 ignition[770]: Ignition 2.20.0 Jan 29 11:23:18.947264 ignition[770]: Stage: disks Jan 29 11:23:18.947469 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:18.947481 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:18.948658 ignition[770]: disks: disks passed Jan 29 11:23:18.948719 ignition[770]: Ignition finished successfully Jan 29 11:23:19.108565 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:23:19.250508 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:23:19.255991 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:23:19.404023 kernel: EXT4-fs (sda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:23:19.404921 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:23:19.405807 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:23:19.437970 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:23:19.453965 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 29 11:23:19.456608 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:23:19.456690 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:23:19.555976 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (787) Jan 29 11:23:19.556022 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:19.556038 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:23:19.556053 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:23:19.556068 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:23:19.556083 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:23:19.456726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:23:19.526227 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:23:19.576860 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:23:19.598066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:23:19.732825 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:23:19.743945 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:23:19.753973 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:23:19.763950 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:23:19.897485 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:23:19.903957 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:23:19.942888 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:19.952161 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:23:19.962558 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:23:19.987811 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:23:20.000304 ignition[903]: INFO : Ignition 2.20.0 Jan 29 11:23:20.000304 ignition[903]: INFO : Stage: mount Jan 29 11:23:20.026991 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:20.026991 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:20.026991 ignition[903]: INFO : mount: mount passed Jan 29 11:23:20.026991 ignition[903]: INFO : Ignition finished successfully Jan 29 11:23:20.003374 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:23:20.025953 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:23:20.418151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 11:23:20.452421 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (915) Jan 29 11:23:20.452467 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:20.452494 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:23:20.460171 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:23:20.493470 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:23:20.493566 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:23:20.497012 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:23:20.533009 ignition[932]: INFO : Ignition 2.20.0 Jan 29 11:23:20.533009 ignition[932]: INFO : Stage: files Jan 29 11:23:20.548982 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:20.548982 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:20.548982 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:23:20.548982 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:23:20.548982 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:23:20.548982 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:23:20.548982 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:23:20.548982 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:23:20.548982 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:23:20.548982 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:23:20.546504 unknown[932]: wrote ssh authorized keys file for user: core Jan 29 11:23:20.685001 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:23:20.557978 systemd-networkd[748]: eth0: Gained IPv6LL Jan 29 11:23:20.807704 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:23:20.825091 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:23:20.825091 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:23:22.124830 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:23:22.256239 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:23:22.508068 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:23:22.726541 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:23:22.726541 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:23:22.765006 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:23:22.765006 ignition[932]: INFO : files: files passed Jan 29 11:23:22.765006 ignition[932]: INFO : Ignition finished successfully Jan 29 11:23:22.731376 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jan 29 11:23:22.751071 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:23:22.771046 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:23:22.817528 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:23:22.981142 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:23:22.981142 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:23:22.817707 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:23:23.047014 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:23:22.840913 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:23:22.851474 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:23:22.903081 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:23:22.979714 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:23:22.979863 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:23:22.992231 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:23:23.016034 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:23:23.037137 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:23:23.044089 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:23:23.105868 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:23:23.131202 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:23:23.165471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:23:23.177196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:23:23.201314 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:23:23.222185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:23:23.222399 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:23:23.255269 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:23:23.275207 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:23:23.294263 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:23:23.312231 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:23:23.331170 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:23:23.353253 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:23:23.373193 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:23:23.394268 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:23:23.414177 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:23:23.434226 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:23:23.452121 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:23:23.452355 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 11:23:23.483246 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:23:23.503159 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:23:23.524207 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:23:23.524370 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:23:23.542113 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:23:23.542347 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:23:23.659985 ignition[985]: INFO : Ignition 2.20.0 Jan 29 11:23:23.659985 ignition[985]: INFO : Stage: umount Jan 29 11:23:23.659985 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:23.659985 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:23.659985 ignition[985]: INFO : umount: umount passed Jan 29 11:23:23.659985 ignition[985]: INFO : Ignition finished successfully Jan 29 11:23:23.570239 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:23:23.570479 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:23:23.591329 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:23:23.591530 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:23:23.619118 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:23:23.650098 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:23:23.650367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:23:23.678136 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:23:23.691989 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:23:23.692284 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:23:23.702410 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:23:23.702600 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:23:23.739292 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:23:23.739451 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:23:23.771646 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:23:23.772554 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:23:23.772669 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:23:23.793384 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:23:23.793512 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:23:23.813381 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:23:23.813448 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:23:23.829206 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:23:23.829289 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:23:23.839254 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:23:23.839329 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:23:23.856262 systemd[1]: Stopped target network.target - Network. Jan 29 11:23:23.874214 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 29 11:23:23.874310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:23:23.889285 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:23:23.907183 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:23:23.910913 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:23:23.924213 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:23:23.950115 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:23:23.960234 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:23:23.960296 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:23:23.975234 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:23:23.975297 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:23:24.011161 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:23:24.011250 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:23:24.034159 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:23:24.034244 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:23:24.042233 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:23:24.042306 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:23:24.061473 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:23:24.065903 systemd-networkd[748]: eth0: DHCPv6 lease lost Jan 29 11:23:24.088208 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:23:24.116566 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:23:24.116732 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:23:24.135451 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:23:24.135852 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:23:24.153608 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:23:24.153676 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:23:24.167235 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:23:24.623983 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 11:23:24.185925 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:23:24.186037 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:23:24.205197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:23:24.205278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:23:24.225195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:23:24.225278 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:23:24.243177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:23:24.243260 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:23:24.264314 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:23:24.277687 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:23:24.277899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 29 11:23:24.302282 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:23:24.302352 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:23:24.322087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:23:24.322167 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:23:24.332134 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:23:24.332215 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:23:24.357317 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:23:24.357430 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:23:24.386278 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:23:24.386373 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:23:24.446081 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:23:24.450158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:23:24.450242 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:23:24.479075 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:23:24.479181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:24.491652 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:23:24.491866 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:23:24.502494 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:23:24.502662 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:23:24.520812 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:23:24.545067 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:23:24.576779 systemd[1]: Switching root. 
Jan 29 11:23:24.954025 systemd-journald[184]: Journal stopped Jan 29 11:23:15.099127 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 11:23:15.099177 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:23:15.099196 kernel: BIOS-provided physical RAM map: Jan 29 11:23:15.099211 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 29 11:23:15.099224 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 29 11:23:15.099238 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 29 11:23:15.099256 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 29 11:23:15.099275 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 29 11:23:15.099290 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd324fff] usable Jan 29 11:23:15.099304 kernel: BIOS-e820: [mem 0x00000000bd325000-0x00000000bd32dfff] ACPI data Jan 29 11:23:15.099329 kernel: BIOS-e820: [mem 0x00000000bd32e000-0x00000000bf8ecfff] usable Jan 29 11:23:15.099344 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Jan 29 11:23:15.099359 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 29 11:23:15.099375 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 29 11:23:15.099398 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 29 11:23:15.099415 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 29 11:23:15.099432 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 29 11:23:15.099448 kernel: NX (Execute Disable) protection: active Jan 29 11:23:15.099465 kernel: APIC: Static calls initialized Jan 29 11:23:15.099481 kernel: efi: EFI v2.7 by EDK II Jan 29 11:23:15.099498 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd325018 Jan 29 11:23:15.099515 kernel: random: crng init done Jan 29 11:23:15.099531 kernel: secureboot: Secure boot disabled Jan 29 11:23:15.099547 kernel: SMBIOS 2.4 present. 
Jan 29 11:23:15.099567 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 29 11:23:15.099583 kernel: Hypervisor detected: KVM Jan 29 11:23:15.099600 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:23:15.099616 kernel: kvm-clock: using sched offset of 13405899029 cycles Jan 29 11:23:15.099634 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:23:15.099650 kernel: tsc: Detected 2299.998 MHz processor Jan 29 11:23:15.099666 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:23:15.099682 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:23:15.099699 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 29 11:23:15.099720 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 29 11:23:15.099737 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:23:15.099753 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 29 11:23:15.099769 kernel: Using GB pages for direct mapping Jan 29 11:23:15.099785 kernel: ACPI: Early table checksum verification disabled Jan 29 11:23:15.099826 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 29 11:23:15.099844 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 29 11:23:15.099868 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 29 11:23:15.099890 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 29 11:23:15.099908 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 29 11:23:15.099926 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 29 11:23:15.099944 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 29 11:23:15.099963 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 29 11:23:15.099981 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 29 11:23:15.100002 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 29 11:23:15.100020 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 29 11:23:15.100038 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 29 11:23:15.100057 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 29 11:23:15.100075 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 29 11:23:15.100092 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 29 11:23:15.100110 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 29 11:23:15.100128 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 29 11:23:15.100146 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 29 11:23:15.100168 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 29 11:23:15.100186 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 29 11:23:15.100204 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:23:15.100223 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 11:23:15.100240 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:23:15.100258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Jan 29 11:23:15.100276 kernel: ACPI: 
SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 29 11:23:15.100295 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 29 11:23:15.100313 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 29 11:23:15.100345 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Jan 29 11:23:15.100363 kernel: Zone ranges: Jan 29 11:23:15.100381 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:23:15.100399 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 11:23:15.100417 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 29 11:23:15.100434 kernel: Movable zone start for each node Jan 29 11:23:15.100452 kernel: Early memory node ranges Jan 29 11:23:15.100471 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 29 11:23:15.100489 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 29 11:23:15.100507 kernel: node 0: [mem 0x0000000000100000-0x00000000bd324fff] Jan 29 11:23:15.100529 kernel: node 0: [mem 0x00000000bd32e000-0x00000000bf8ecfff] Jan 29 11:23:15.100547 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 29 11:23:15.100565 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 29 11:23:15.100582 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 29 11:23:15.100600 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:23:15.100619 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 29 11:23:15.100636 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 29 11:23:15.100655 kernel: On node 0, zone DMA32: 9 pages in unavailable ranges Jan 29 11:23:15.100673 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 29 11:23:15.100695 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 29 11:23:15.100713 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 11:23:15.100731 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:23:15.100749 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:23:15.100767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:23:15.100786 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:23:15.100828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:23:15.100843 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:23:15.100858 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:23:15.100879 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 11:23:15.100894 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 29 11:23:15.100909 kernel: Booting paravirtualized kernel on KVM Jan 29 11:23:15.100924 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:23:15.100939 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 11:23:15.100957 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 11:23:15.100973 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 11:23:15.100990 kernel: pcpu-alloc: [0] 0 1 Jan 29 11:23:15.101004 kernel: kvm-guest: PV spinlocks enabled Jan 29 11:23:15.101025 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 11:23:15.101041 kernel: Kernel command line: rootflags=rw mount.usrflags=ro 
BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:23:15.101056 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:23:15.101071 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 29 11:23:15.101088 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 11:23:15.101105 kernel: Fallback order for Node 0: 0 Jan 29 11:23:15.101123 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932271 Jan 29 11:23:15.101140 kernel: Policy zone: Normal Jan 29 11:23:15.101163 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:23:15.101179 kernel: software IO TLB: area num 2. Jan 29 11:23:15.101195 kernel: Memory: 7513352K/7860548K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 346940K reserved, 0K cma-reserved) Jan 29 11:23:15.101210 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:23:15.101228 kernel: Kernel/User page tables isolation: enabled Jan 29 11:23:15.101245 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 11:23:15.101262 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:23:15.101280 kernel: Dynamic Preempt: voluntary Jan 29 11:23:15.101324 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:23:15.101344 kernel: rcu: RCU event tracing is enabled. Jan 29 11:23:15.101363 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:23:15.101386 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:23:15.101404 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:23:15.101423 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:23:15.101440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:23:15.101459 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:23:15.101478 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 11:23:15.101500 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 11:23:15.101518 kernel: Console: colour dummy device 80x25 Jan 29 11:23:15.101537 kernel: printk: console [ttyS0] enabled Jan 29 11:23:15.101555 kernel: ACPI: Core revision 20230628 Jan 29 11:23:15.101574 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:23:15.101592 kernel: x2apic enabled Jan 29 11:23:15.101611 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:23:15.101628 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 29 11:23:15.101647 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 11:23:15.101670 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 29 11:23:15.101689 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 29 11:23:15.101707 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 29 11:23:15.101726 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:23:15.101744 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 11:23:15.101763 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 11:23:15.101781 kernel: Spectre V2 : Mitigation: IBRS Jan 29 11:23:15.101840 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:23:15.101863 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:23:15.101881 kernel: RETBleed: Mitigation: IBRS Jan 29 11:23:15.101900 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:23:15.101918 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 29 11:23:15.101937 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:23:15.101956 kernel: MDS: Mitigation: Clear CPU buffers Jan 29 11:23:15.101974 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:23:15.101992 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:23:15.102010 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:23:15.102032 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:23:15.102050 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:23:15.102067 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 29 11:23:15.102086 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:23:15.102104 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:23:15.102122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:23:15.102141 kernel: landlock: Up and running. Jan 29 11:23:15.102158 kernel: SELinux: Initializing. Jan 29 11:23:15.102173 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:23:15.102192 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:23:15.102208 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 29 11:23:15.102224 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:23:15.102241 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:23:15.102258 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:23:15.102277 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 29 11:23:15.102295 kernel: signal: max sigframe size: 1776 Jan 29 11:23:15.102322 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:23:15.102341 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:23:15.102365 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:23:15.102382 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:23:15.102398 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:23:15.102415 kernel: .... node #0, CPUs: #1 Jan 29 11:23:15.102434 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 11:23:15.102452 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 11:23:15.102469 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:23:15.102485 kernel: smpboot: Max logical packages: 1 Jan 29 11:23:15.102506 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 29 11:23:15.102525 kernel: devtmpfs: initialized Jan 29 11:23:15.102543 kernel: x86/mm: Memory block size: 128MB Jan 29 11:23:15.102562 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 29 11:23:15.102580 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:23:15.102599 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:23:15.102617 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:23:15.102635 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:23:15.102655 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:23:15.102678 kernel: audit: type=2000 audit(1738149793.687:1): state=initialized audit_enabled=0 res=1 Jan 29 11:23:15.102697 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:23:15.102716 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:23:15.102734 kernel: cpuidle: using governor menu Jan 29 11:23:15.102753 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:23:15.102772 kernel: dca service started, version 1.12.1 Jan 29 11:23:15.102805 kernel: PCI: Using configuration type 1 for base access Jan 29 11:23:15.102824 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:23:15.102843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:23:15.102889 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:23:15.102909 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:23:15.102927 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:23:15.102944 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:23:15.102962 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:23:15.102981 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:23:15.102999 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:23:15.103018 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 11:23:15.103036 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:23:15.103060 kernel: ACPI: Interpreter enabled Jan 29 11:23:15.103079 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 11:23:15.103099 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:23:15.103119 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:23:15.103138 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 29 11:23:15.103156 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 11:23:15.103175 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:23:15.103466 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:23:15.103695 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 11:23:15.103900 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 11:23:15.103926 kernel: PCI host bridge to bus 0000:00 Jan 29 11:23:15.104106 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:23:15.104275 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 11:23:15.104452 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:23:15.104617 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 29 11:23:15.104802 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:23:15.105021 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 11:23:15.105239 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 29 11:23:15.105459 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 11:23:15.105641 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 11:23:15.105853 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 29 11:23:15.106045 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 29 11:23:15.106222 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 29 11:23:15.106417 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:23:15.106612 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 29 11:23:15.106864 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 29 11:23:15.107073 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 11:23:15.107260 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 29 11:23:15.107463 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 29 11:23:15.107488 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:23:15.107508 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:23:15.107527 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:23:15.107547 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:23:15.107566 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 11:23:15.107586 kernel: iommu: Default domain type: Translated Jan 29 11:23:15.107606 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:23:15.107625 kernel: efivars: Registered efivars operations Jan 29 11:23:15.107650 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:23:15.107670 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:23:15.107689 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 29 11:23:15.107709 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 29 11:23:15.107729 kernel: e820: reserve RAM buffer [mem 0xbd325000-0xbfffffff] Jan 29 11:23:15.107747 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 29 11:23:15.107766 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 29 11:23:15.107786 kernel: vgaarb: loaded Jan 29 11:23:15.107850 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:23:15.107874 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:23:15.107891 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:23:15.107909 kernel: pnp: PnP ACPI init Jan 29 11:23:15.107925 kernel: pnp: PnP ACPI: found 7 devices Jan 29 11:23:15.107944 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:23:15.107962 kernel: NET: Registered PF_INET protocol family Jan 29 11:23:15.107979 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:23:15.107996 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 29 11:23:15.108018 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:23:15.108037 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 11:23:15.108056 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 11:23:15.108074 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 29 11:23:15.108091 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 11:23:15.108109 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 11:23:15.108127 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:23:15.108144 kernel: NET: Registered PF_XDP protocol family Jan 29 11:23:15.108341 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:23:15.108517 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 11:23:15.108688 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:23:15.108904 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 29 11:23:15.109097 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:23:15.109123 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:23:15.109141 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 11:23:15.109160 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 29 11:23:15.109186 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:23:15.109206 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 11:23:15.109224 kernel: clocksource: Switched to clocksource tsc Jan 
29 11:23:15.109244 kernel: Initialise system trusted keyrings Jan 29 11:23:15.109263 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 29 11:23:15.109283 kernel: Key type asymmetric registered Jan 29 11:23:15.109302 kernel: Asymmetric key parser 'x509' registered Jan 29 11:23:15.109334 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:23:15.109354 kernel: io scheduler mq-deadline registered Jan 29 11:23:15.109378 kernel: io scheduler kyber registered Jan 29 11:23:15.109398 kernel: io scheduler bfq registered Jan 29 11:23:15.109418 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:23:15.109438 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 11:23:15.109635 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 29 11:23:15.109659 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 29 11:23:15.109860 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 29 11:23:15.109885 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 11:23:15.110067 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 29 11:23:15.110096 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:23:15.110114 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:23:15.110134 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 11:23:15.110153 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 29 11:23:15.110171 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 29 11:23:15.110376 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 29 11:23:15.110404 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:23:15.110422 kernel: i8042: Warning: Keylock active Jan 29 11:23:15.110446 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:23:15.110465 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:23:15.110674 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 29 11:23:15.110865 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 11:23:15.111048 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T11:23:14 UTC (1738149794) Jan 29 11:23:15.111218 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 11:23:15.111241 kernel: intel_pstate: CPU model not supported Jan 29 11:23:15.111261 kernel: pstore: Using crash dump compression: deflate Jan 29 11:23:15.111285 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 11:23:15.111304 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:23:15.111330 kernel: Segment Routing with IPv6 Jan 29 11:23:15.111348 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:23:15.111368 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:23:15.111387 kernel: Key type dns_resolver registered Jan 29 11:23:15.111403 kernel: IPI shorthand broadcast: enabled Jan 29 11:23:15.111419 kernel: sched_clock: Marking stable (880004056, 177498390)->(1095900485, -38398039) Jan 29 11:23:15.111437 kernel: registered taskstats version 1 Jan 29 11:23:15.111462 kernel: Loading compiled-in X.509 certificates Jan 29 11:23:15.111481 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 11:23:15.111501 kernel: Key type .fscrypt registered Jan 29 11:23:15.111520 kernel: Key type fscrypt-provisioning registered Jan 29 11:23:15.111540 kernel: ima: Allocated hash algorithm: 
sha1 Jan 29 11:23:15.111559 kernel: ima: No architecture policies found Jan 29 11:23:15.111576 kernel: clk: Disabling unused clocks Jan 29 11:23:15.111597 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:23:15.111616 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:23:15.111640 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:23:15.111656 kernel: Run /init as init process Jan 29 11:23:15.111673 kernel: with arguments: Jan 29 11:23:15.111690 kernel: /init Jan 29 11:23:15.111706 kernel: with environment: Jan 29 11:23:15.111724 kernel: HOME=/ Jan 29 11:23:15.111740 kernel: TERM=linux Jan 29 11:23:15.111759 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:23:15.111781 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:23:15.111842 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:23:15.111865 systemd[1]: Detected virtualization google. Jan 29 11:23:15.111885 systemd[1]: Detected architecture x86-64. Jan 29 11:23:15.111904 systemd[1]: Running in initrd. Jan 29 11:23:15.111923 systemd[1]: No hostname configured, using default hostname. Jan 29 11:23:15.111942 systemd[1]: Hostname set to <localhost>. Jan 29 11:23:15.111966 systemd[1]: Initializing machine ID from random generator. Jan 29 11:23:15.111985 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:23:15.112005 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:23:15.112025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:23:15.112045 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:23:15.112064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:23:15.112084 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:23:15.112104 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:23:15.112134 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:23:15.112171 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:23:15.112195 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:23:15.112215 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:23:15.112235 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:23:15.112259 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:23:15.112279 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:23:15.112299 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:23:15.112329 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:23:15.112349 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:23:15.112370 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 29 11:23:15.112390 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:23:15.112410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:23:15.112431 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:23:15.112455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:23:15.112475 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:23:15.112495 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:23:15.112516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:23:15.112536 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:23:15.112556 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:23:15.112576 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:23:15.112597 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:23:15.112621 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:23:15.112684 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 11:23:15.112726 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:23:15.112747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:23:15.112771 systemd-journald[184]: Journal started Jan 29 11:23:15.112836 systemd-journald[184]: Runtime Journal (/run/log/journal/090433178c3d471c94aa12d6824efba4) is 8.0M, max 148.7M, 140.7M free. Jan 29 11:23:15.117862 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:23:15.121230 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 11:23:15.130963 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:23:15.137086 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:23:15.149166 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:23:15.162546 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:15.174625 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:23:15.191110 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:23:15.191147 kernel: Bridge firewalling registered Jan 29 11:23:15.180225 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:23:15.182905 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 11:23:15.185450 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:23:15.202374 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:23:15.212136 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:23:15.217014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:23:15.239880 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:23:15.250032 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:23:15.254265 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:23:15.262220 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:23:15.273002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:23:15.293824 dracut-cmdline[214]: dracut-dracut-053 Jan 29 11:23:15.298896 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:23:15.327866 systemd-resolved[217]: Positive Trust Anchors: Jan 29 11:23:15.328435 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:23:15.328505 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:23:15.336145 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 29 11:23:15.341010 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:23:15.351596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:23:15.404839 kernel: SCSI subsystem initialized Jan 29 11:23:15.415841 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:23:15.428828 kernel: iscsi: registered transport (tcp) Jan 29 11:23:15.452837 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:23:15.452934 kernel: QLogic iSCSI HBA Driver Jan 29 11:23:15.505262 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:23:15.512086 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:23:15.590915 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:23:15.591006 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:23:15.591035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:23:15.648832 kernel: raid6: avx2x4 gen() 17857 MB/s Jan 29 11:23:15.669836 kernel: raid6: avx2x2 gen() 17780 MB/s Jan 29 11:23:15.695878 kernel: raid6: avx2x1 gen() 13949 MB/s Jan 29 11:23:15.695967 kernel: raid6: using algorithm avx2x4 gen() 17857 MB/s Jan 29 11:23:15.722839 kernel: raid6: .... xor() 6848 MB/s, rmw enabled Jan 29 11:23:15.722935 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:23:15.751833 kernel: xor: automatically using best checksumming function avx Jan 29 11:23:15.938833 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:23:15.952541 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:23:15.959154 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:23:16.014207 systemd-udevd[400]: Using default interface naming scheme 'v255'. 
Jan 29 11:23:16.021167 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:23:16.052017 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:23:16.072991 dracut-pre-trigger[414]: rd.md=0: removing MD RAID activation Jan 29 11:23:16.109928 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:23:16.135043 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:23:16.240021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:23:16.258615 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:23:16.317041 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:23:16.338219 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:23:16.354058 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:23:16.353869 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:23:16.427521 kernel: scsi host0: Virtio SCSI HBA Jan 29 11:23:16.427836 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 29 11:23:16.383512 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:23:16.456967 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:23:16.457007 kernel: AES CTR mode by8 optimization enabled Jan 29 11:23:16.420036 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:23:16.491477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:23:16.519892 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 29 11:23:16.581301 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 29 11:23:16.581579 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 29 11:23:16.581818 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 29 11:23:16.582044 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 11:23:16.582285 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:23:16.582313 kernel: GPT:17805311 != 25165823 Jan 29 11:23:16.582337 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:23:16.582360 kernel: GPT:17805311 != 25165823 Jan 29 11:23:16.582382 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:23:16.582406 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:23:16.582430 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 29 11:23:16.491690 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:23:16.519743 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:23:16.576056 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:23:16.657986 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (455) Jan 29 11:23:16.658028 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (457) Jan 29 11:23:16.576283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:16.593965 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:23:16.612751 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:23:16.687434 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:23:16.719245 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:16.751162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 29 11:23:16.757969 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 29 11:23:16.777863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 29 11:23:16.797869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 29 11:23:16.835979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 29 11:23:16.843006 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:23:16.874077 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:23:16.899492 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:23:16.899696 disk-uuid[542]: Primary Header is updated. Jan 29 11:23:16.899696 disk-uuid[542]: Secondary Entries is updated. Jan 29 11:23:16.899696 disk-uuid[542]: Secondary Header is updated. Jan 29 11:23:16.928830 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:23:16.971558 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:23:17.942840 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:23:17.943858 disk-uuid[544]: The operation has completed successfully. Jan 29 11:23:18.015195 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:23:18.015345 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:23:18.056036 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:23:18.086949 sh[566]: Success Jan 29 11:23:18.110832 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:23:18.207594 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:23:18.214956 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 11:23:18.241422 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:23:18.293088 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:23:18.293197 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:23:18.293223 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:23:18.302532 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:23:18.309383 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:23:18.343825 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:23:18.350017 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:23:18.351027 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:23:18.357078 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:23:18.379176 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 29 11:23:18.445984 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:18.446029 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:23:18.446076 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:23:18.446101 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:23:18.446124 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:23:18.466831 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:18.481143 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:23:18.508226 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:23:18.527652 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:23:18.537045 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:23:18.621441 systemd-networkd[748]: lo: Link UP Jan 29 11:23:18.621455 systemd-networkd[748]: lo: Gained carrier Jan 29 11:23:18.623867 systemd-networkd[748]: Enumeration completed Jan 29 11:23:18.624035 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:23:18.624633 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:23:18.624641 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:23:18.626880 systemd-networkd[748]: eth0: Link UP Jan 29 11:23:18.626887 systemd-networkd[748]: eth0: Gained carrier Jan 29 11:23:18.626901 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:23:18.719030 ignition[729]: Ignition 2.20.0 Jan 29 11:23:18.646933 systemd-networkd[748]: eth0: DHCPv4 address 10.128.0.21/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 29 11:23:18.719051 ignition[729]: Stage: fetch-offline Jan 29 11:23:18.672084 systemd[1]: Reached target network.target - Network. Jan 29 11:23:18.719123 ignition[729]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:18.726333 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:23:18.719139 ignition[729]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:18.762055 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:23:18.719276 ignition[729]: parsed url from cmdline: "" Jan 29 11:23:18.811440 unknown[758]: fetched base config from "system" Jan 29 11:23:18.719283 ignition[729]: no config URL provided Jan 29 11:23:18.811462 unknown[758]: fetched base config from "system" Jan 29 11:23:18.719292 ignition[729]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:23:18.811474 unknown[758]: fetched user config from "gcp" Jan 29 11:23:18.719304 ignition[729]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:23:18.828472 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:23:18.719313 ignition[729]: failed to fetch config: resource requires networking Jan 29 11:23:18.856106 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:23:18.719755 ignition[729]: Ignition finished successfully Jan 29 11:23:18.881187 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 29 11:23:18.798228 ignition[758]: Ignition 2.20.0 Jan 29 11:23:18.903052 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:23:18.798241 ignition[758]: Stage: fetch Jan 29 11:23:18.950202 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:23:18.798443 ignition[758]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:18.966323 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:23:18.798456 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:18.985103 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:23:18.798574 ignition[758]: parsed url from cmdline: "" Jan 29 11:23:19.000036 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:23:18.798581 ignition[758]: no config URL provided Jan 29 11:23:19.017112 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:23:18.798591 ignition[758]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:23:19.017259 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:23:18.798602 ignition[758]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:23:19.048091 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:23:18.798628 ignition[758]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 29 11:23:18.802852 ignition[758]: GET result: OK Jan 29 11:23:18.802945 ignition[758]: parsing config with SHA512: 42ec37db54c5b7643c549cd6705f10935cca3c16c8a391c77af8aaafc6282342ff66881319dca234bc8aebea4bd8d3ecfb382241b6e3bf82a7f9f92569d304f5 Jan 29 11:23:18.812469 ignition[758]: fetch: fetch complete Jan 29 11:23:18.812479 ignition[758]: fetch: fetch passed Jan 29 11:23:18.812559 ignition[758]: Ignition finished successfully Jan 29 11:23:18.878581 ignition[765]: Ignition 2.20.0 Jan 29 11:23:18.878593 ignition[765]: Stage: kargs Jan 29 11:23:18.878822 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:18.878837 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:18.879880 ignition[765]: kargs: kargs passed Jan 29 11:23:18.879939 ignition[765]: Ignition finished successfully Jan 29 11:23:18.947254 ignition[770]: Ignition 2.20.0 Jan 29 11:23:18.947264 ignition[770]: Stage: disks Jan 29 11:23:18.947469 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:18.947481 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:18.948658 ignition[770]: disks: disks passed Jan 29 11:23:18.948719 ignition[770]: Ignition finished successfully Jan 29 11:23:19.108565 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:23:19.250508 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:23:19.255991 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:23:19.404023 kernel: EXT4-fs (sda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:23:19.404921 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:23:19.405807 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:23:19.437970 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:23:19.453965 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
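In the fetch stage above, Ignition pulls instance user-data from the GCE metadata server and logs a SHA512 of the parsed config. A rough sketch of that request, assuming the `Metadata-Flavor: Google` header that the GCE metadata service requires (the header itself is not shown in the log):

```python
# Hedged sketch: the metadata request Ignition's fetch stage logs above.
# URL taken from the log; the Metadata-Flavor header is a GCE requirement.
import hashlib
import urllib.request

URL = "http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data"

req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
with urllib.request.urlopen(req, timeout=10) as resp:
    user_data = resp.read()

# Ignition reports "parsing config with SHA512: ..."; this prints the same digest.
print(f"fetched {len(user_data)} bytes, sha512 {hashlib.sha512(user_data).hexdigest()}")
```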
Jan 29 11:23:19.456608 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:23:19.456690 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:23:19.555976 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (787) Jan 29 11:23:19.556022 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:19.556038 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:23:19.556053 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:23:19.556068 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:23:19.556083 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:23:19.456726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:23:19.526227 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:23:19.576860 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:23:19.598066 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:23:19.732825 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:23:19.743945 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:23:19.753973 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:23:19.763950 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:23:19.897485 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:23:19.903957 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:23:19.942888 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:19.952161 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:23:19.962558 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:23:19.987811 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:23:20.000304 ignition[903]: INFO : Ignition 2.20.0 Jan 29 11:23:20.000304 ignition[903]: INFO : Stage: mount Jan 29 11:23:20.026991 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:20.026991 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:20.026991 ignition[903]: INFO : mount: mount passed Jan 29 11:23:20.026991 ignition[903]: INFO : Ignition finished successfully Jan 29 11:23:20.003374 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:23:20.025953 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:23:20.418151 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 11:23:20.452421 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (915) Jan 29 11:23:20.452467 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:23:20.452494 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:23:20.460171 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:23:20.493470 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:23:20.493566 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 11:23:20.497012 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:23:20.533009 ignition[932]: INFO : Ignition 2.20.0 Jan 29 11:23:20.533009 ignition[932]: INFO : Stage: files Jan 29 11:23:20.548982 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:20.548982 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:20.548982 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:23:20.548982 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:23:20.548982 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:23:20.548982 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:23:20.548982 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:23:20.548982 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:23:20.548982 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:23:20.548982 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 11:23:20.546504 unknown[932]: wrote ssh authorized keys file for user: core Jan 29 11:23:20.685001 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:23:20.557978 systemd-networkd[748]: eth0: Gained IPv6LL Jan 29 11:23:20.807704 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 11:23:20.825091 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:23:20.825091 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 11:23:22.124830 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 11:23:22.256239 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:23:22.272009 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 11:23:22.508068 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 11:23:22.726541 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 11:23:22.726541 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:23:22.765006 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:23:22.765006 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:23:22.765006 ignition[932]: INFO : files: files passed Jan 29 11:23:22.765006 ignition[932]: INFO : Ignition finished successfully Jan 29 11:23:22.731376 systemd[1]: Finished ignition-files.service - Ignition (files). 
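The final file-stage operations above download the kubernetes sysext image and link it under /etc/extensions so systemd-sysext will merge it on the booted system (the /sysroot prefix exists only inside the initramfs). A hedged sketch of the same layout, using the URL and paths from the log and assuming it runs on the live system rather than in the initrd:

```python
# Hedged sketch: recreate the sysext image + symlink layout written by the files stage.
# URL and paths come from the log; running outside the initramfs (no /sysroot) is assumed.
import os
import urllib.request

RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
           "latest/kubernetes-v1.30.1-x86-64.raw")
image = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
link = "/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(image), exist_ok=True)
urllib.request.urlretrieve(RAW_URL, image)

os.makedirs(os.path.dirname(link), exist_ok=True)
if os.path.lexists(link):
    os.remove(link)
os.symlink(image, link)  # systemd-sysext merges images linked under /etc/extensions
```

Later in the boot, the "(sd-merge)" entries show systemd-sysext picking this image up alongside the containerd-flatcar, docker-flatcar, and oem-gce extensions and merging them into /usr.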
Jan 29 11:23:22.751071 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:23:22.771046 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:23:22.817528 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:23:22.981142 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:23:22.981142 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:23:22.817707 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:23:23.047014 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:23:22.840913 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:23:22.851474 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:23:22.903081 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:23:22.979714 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:23:22.979863 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:23:22.992231 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:23:23.016034 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:23:23.037137 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:23:23.044089 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:23:23.105868 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:23:23.131202 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:23:23.165471 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:23:23.177196 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:23:23.201314 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:23:23.222185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:23:23.222399 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:23:23.255269 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:23:23.275207 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:23:23.294263 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:23:23.312231 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:23:23.331170 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:23:23.353253 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:23:23.373193 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:23:23.394268 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:23:23.414177 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:23:23.434226 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:23:23.452121 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:23:23.452355 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 11:23:23.483246 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:23:23.503159 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:23:23.524207 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:23:23.524370 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:23:23.542113 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:23:23.542347 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:23:23.659985 ignition[985]: INFO : Ignition 2.20.0 Jan 29 11:23:23.659985 ignition[985]: INFO : Stage: umount Jan 29 11:23:23.659985 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:23:23.659985 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 11:23:23.659985 ignition[985]: INFO : umount: umount passed Jan 29 11:23:23.659985 ignition[985]: INFO : Ignition finished successfully Jan 29 11:23:23.570239 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:23:23.570479 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:23:23.591329 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:23:23.591530 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:23:23.619118 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:23:23.650098 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:23:23.650367 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:23:23.678136 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:23:23.691989 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:23:23.692284 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:23:23.702410 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:23:23.702600 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:23:23.739292 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:23:23.739451 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:23:23.771646 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:23:23.772554 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:23:23.772669 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:23:23.793384 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:23:23.793512 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:23:23.813381 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:23:23.813448 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:23:23.829206 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:23:23.829289 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:23:23.839254 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:23:23.839329 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:23:23.856262 systemd[1]: Stopped target network.target - Network. Jan 29 11:23:23.874214 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 29 11:23:23.874310 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:23:23.889285 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:23:23.907183 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:23:23.910913 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:23:23.924213 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:23:23.950115 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:23:23.960234 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:23:23.960296 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:23:23.975234 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:23:23.975297 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:23:24.011161 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:23:24.011250 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:23:24.034159 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:23:24.034244 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:23:24.042233 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:23:24.042306 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:23:24.061473 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:23:24.065903 systemd-networkd[748]: eth0: DHCPv6 lease lost Jan 29 11:23:24.088208 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:23:24.116566 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:23:24.116732 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:23:24.135451 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:23:24.135852 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:23:24.153608 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:23:24.153676 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:23:24.167235 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:23:24.623983 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 11:23:24.185925 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:23:24.186037 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:23:24.205197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:23:24.205278 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:23:24.225195 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:23:24.225278 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:23:24.243177 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:23:24.243260 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:23:24.264314 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:23:24.277687 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:23:24.277899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 29 11:23:24.302282 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:23:24.302352 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:23:24.322087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:23:24.322167 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:23:24.332134 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:23:24.332215 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:23:24.357317 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:23:24.357430 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:23:24.386278 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:23:24.386373 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:23:24.446081 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:23:24.450158 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:23:24.450242 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:23:24.479075 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:23:24.479181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:24.491652 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:23:24.491866 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:23:24.502494 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:23:24.502662 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:23:24.520812 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:23:24.545067 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:23:24.576779 systemd[1]: Switching root. Jan 29 11:23:24.954025 systemd-journald[184]: Journal stopped Jan 29 11:23:27.463324 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:23:27.463385 kernel: SELinux: policy capability open_perms=1 Jan 29 11:23:27.463408 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:23:27.463425 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:23:27.463442 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:23:27.463460 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:23:27.463480 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:23:27.463503 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:23:27.463522 kernel: audit: type=1403 audit(1738149805.326:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:23:27.463544 systemd[1]: Successfully loaded SELinux policy in 91.113ms. Jan 29 11:23:27.463567 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.838ms. Jan 29 11:23:27.463591 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:23:27.463612 systemd[1]: Detected virtualization google. Jan 29 11:23:27.463633 systemd[1]: Detected architecture x86-64. 
Jan 29 11:23:27.463660 systemd[1]: Detected first boot. Jan 29 11:23:27.463682 systemd[1]: Initializing machine ID from random generator. Jan 29 11:23:27.463703 zram_generator::config[1026]: No configuration found. Jan 29 11:23:27.463724 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:23:27.463744 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:23:27.463768 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:23:27.463806 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:23:27.463840 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:23:27.463861 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:23:27.463881 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:23:27.463903 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:23:27.463924 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:23:27.463951 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:23:27.463972 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:23:27.463993 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:23:27.464014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:23:27.464035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:23:27.464056 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:23:27.464077 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:23:27.464100 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:23:27.464134 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:23:27.464155 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:23:27.464176 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:23:27.464198 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:23:27.464219 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:23:27.464241 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:23:27.464268 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:23:27.464292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:23:27.464314 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:23:27.464340 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:23:27.464362 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:23:27.464384 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:23:27.464406 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:23:27.464428 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:23:27.464450 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jan 29 11:23:27.464473 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:23:27.464500 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:23:27.464523 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:23:27.464546 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:23:27.464568 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:23:27.464592 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:23:27.464619 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:23:27.464642 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:23:27.464664 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:23:27.464688 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:23:27.464711 systemd[1]: Reached target machines.target - Containers. Jan 29 11:23:27.464734 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:23:27.464757 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:23:27.464782 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:23:27.464847 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:23:27.464871 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:23:27.464894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:23:27.464913 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:23:27.464936 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:23:27.464959 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:23:27.464983 kernel: ACPI: bus type drm_connector registered Jan 29 11:23:27.465004 kernel: loop: module loaded Jan 29 11:23:27.465031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:23:27.465055 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:23:27.465078 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:23:27.465102 kernel: fuse: init (API version 7.39) Jan 29 11:23:27.465132 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:23:27.465156 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:23:27.465180 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:23:27.465203 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:23:27.465227 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:23:27.465297 systemd-journald[1113]: Collecting audit messages is disabled. Jan 29 11:23:27.465342 systemd-journald[1113]: Journal started Jan 29 11:23:27.465391 systemd-journald[1113]: Runtime Journal (/run/log/journal/5ee8f24c8a134258a461ccfe13c96170) is 8.0M, max 148.7M, 140.7M free. 
Jan 29 11:23:27.467782 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:23:26.233235 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:23:26.258586 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 11:23:26.259198 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:23:27.509944 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:23:27.510055 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:23:27.510838 systemd[1]: Stopped verity-setup.service. Jan 29 11:23:27.550816 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:23:27.559840 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:23:27.571394 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:23:27.582247 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:23:27.592233 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:23:27.602244 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:23:27.612240 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:23:27.622262 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:23:27.633536 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:23:27.645401 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:23:27.657428 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:23:27.657663 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:23:27.669404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:23:27.669632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:23:27.681343 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:23:27.681582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:23:27.691439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:23:27.691704 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:23:27.703449 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:23:27.703712 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:23:27.714284 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:23:27.714507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:23:27.724312 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:23:27.734312 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:23:27.746331 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:23:27.758360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:23:27.783472 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:23:27.801990 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:23:27.817955 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 29 11:23:27.829042 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:23:27.829129 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:23:27.840212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:23:27.860084 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:23:27.880088 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:23:27.890076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:23:27.898215 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:23:27.913360 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:23:27.922569 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:23:27.925398 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:23:27.934998 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:23:27.944198 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:23:27.964056 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:23:27.971536 systemd-journald[1113]: Time spent on flushing to /var/log/journal/5ee8f24c8a134258a461ccfe13c96170 is 92.148ms for 934 entries. Jan 29 11:23:27.971536 systemd-journald[1113]: System Journal (/var/log/journal/5ee8f24c8a134258a461ccfe13c96170) is 8.0M, max 584.8M, 576.8M free. Jan 29 11:23:28.106115 systemd-journald[1113]: Received client request to flush runtime journal. Jan 29 11:23:28.106198 kernel: loop0: detected capacity change from 0 to 138184 Jan 29 11:23:27.991056 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:23:28.015042 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:23:28.029006 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:23:28.045839 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:23:28.057674 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:23:28.069404 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:23:28.081483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:23:28.114250 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:23:28.135633 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:23:28.152822 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:23:28.153398 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:23:28.168508 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:23:28.184216 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 29 11:23:28.185918 kernel: loop1: detected capacity change from 0 to 52056 Jan 29 11:23:28.204558 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:23:28.217258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:23:28.225997 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:23:28.252377 kernel: loop2: detected capacity change from 0 to 140992 Jan 29 11:23:28.268979 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Jan 29 11:23:28.269020 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Jan 29 11:23:28.284098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:23:28.356948 kernel: loop3: detected capacity change from 0 to 210664 Jan 29 11:23:28.466976 kernel: loop4: detected capacity change from 0 to 138184 Jan 29 11:23:28.525911 kernel: loop5: detected capacity change from 0 to 52056 Jan 29 11:23:28.554284 kernel: loop6: detected capacity change from 0 to 140992 Jan 29 11:23:28.632011 kernel: loop7: detected capacity change from 0 to 210664 Jan 29 11:23:28.664489 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 29 11:23:28.665834 (sd-merge)[1169]: Merged extensions into '/usr'. Jan 29 11:23:28.677555 systemd[1]: Reloading requested from client PID 1144 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:23:28.677577 systemd[1]: Reloading... Jan 29 11:23:28.856174 zram_generator::config[1198]: No configuration found. Jan 29 11:23:29.111279 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:23:29.162410 ldconfig[1139]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:23:29.228720 systemd[1]: Reloading finished in 549 ms. Jan 29 11:23:29.254235 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:23:29.264571 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:23:29.291068 systemd[1]: Starting ensure-sysext.service... Jan 29 11:23:29.307311 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:23:29.328378 systemd[1]: Reloading requested from client PID 1235 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:23:29.328400 systemd[1]: Reloading... Jan 29 11:23:29.375130 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:23:29.375839 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:23:29.378320 systemd-tmpfiles[1236]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:23:29.379502 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jan 29 11:23:29.380137 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jan 29 11:23:29.397717 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:23:29.401115 systemd-tmpfiles[1236]: Skipping /boot Jan 29 11:23:29.423899 zram_generator::config[1260]: No configuration found. 
Jan 29 11:23:29.457595 systemd-tmpfiles[1236]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:23:29.461852 systemd-tmpfiles[1236]: Skipping /boot Jan 29 11:23:29.630111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:23:29.696007 systemd[1]: Reloading finished in 366 ms. Jan 29 11:23:29.720737 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:23:29.736453 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:23:29.762142 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:23:29.778292 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:23:29.796047 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:23:29.812668 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:23:29.832428 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:23:29.853075 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:23:29.881956 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:23:29.897535 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:23:29.898068 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:23:29.908304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:23:29.914931 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Jan 29 11:23:29.921556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:23:29.939933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:23:29.942060 augenrules[1333]: No rules Jan 29 11:23:29.950069 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:23:29.950333 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:23:29.953092 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:23:29.953621 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:23:29.964005 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:23:29.976696 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:23:29.976954 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:23:29.988878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:23:29.989687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:23:30.002243 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:23:30.015350 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:23:30.016478 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:23:30.026424 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 29 11:23:30.036868 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:23:30.092477 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:23:30.125656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:23:30.135182 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:23:30.144196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:23:30.151829 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:23:30.169938 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:23:30.187959 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:23:30.205254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:23:30.221268 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 11:23:30.230110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:23:30.243725 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:23:30.254152 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:23:30.276954 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:23:30.278242 systemd-resolved[1316]: Positive Trust Anchors: Jan 29 11:23:30.278728 systemd-resolved[1316]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:23:30.278841 systemd-resolved[1316]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:23:30.281419 augenrules[1372]: /sbin/augenrules: No change Jan 29 11:23:30.286978 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:23:30.287232 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:23:30.296115 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:23:30.296770 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:23:30.309756 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:23:30.310893 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:23:30.313695 systemd-resolved[1316]: Defaulting to hostname 'linux'. Jan 29 11:23:30.321892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:23:30.323196 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 29 11:23:30.333576 augenrules[1399]: No rules Jan 29 11:23:30.335463 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:23:30.347173 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:23:30.353527 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:23:30.363757 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:23:30.364911 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:23:30.392735 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:23:30.410819 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 29 11:23:30.424842 systemd[1]: Finished ensure-sysext.service. Jan 29 11:23:30.435026 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 11:23:30.443835 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:23:30.455848 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 29 11:23:30.491827 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1347) Jan 29 11:23:30.491945 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:23:30.491338 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:23:30.498821 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 29 11:23:30.511195 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:23:30.520851 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 11:23:30.538181 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 29 11:23:30.547974 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:23:30.548103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:23:30.626177 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:23:30.651915 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 29 11:23:30.661194 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:23:30.675299 systemd-networkd[1386]: lo: Link UP Jan 29 11:23:30.675318 systemd-networkd[1386]: lo: Gained carrier Jan 29 11:23:30.685088 systemd-networkd[1386]: Enumeration completed Jan 29 11:23:30.686312 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:23:30.687521 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:23:30.687541 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:23:30.688364 systemd-networkd[1386]: eth0: Link UP Jan 29 11:23:30.688381 systemd-networkd[1386]: eth0: Gained carrier Jan 29 11:23:30.688409 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:23:30.700422 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 29 11:23:30.701897 systemd-networkd[1386]: eth0: DHCPv4 address 10.128.0.21/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 29 11:23:30.713412 systemd[1]: Reached target network.target - Network. 
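The networkd entries above show eth0 being matched against the shipped zz-default.network and then taking a /32 DHCPv4 lease from the GCE metadata server at 169.254.169.254. A minimal .network file doing the equivalent would look like the sketch below (file name and path are illustrative; the stock zz-default.network already covers this case):

    sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    Name=eth0

    [Network]
    DHCP=ipv4
    EOF
    sudo systemctl restart systemd-networkd
    networkctl status eth0   # shows carrier state and the acquired address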
Jan 29 11:23:30.734536 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:23:30.753442 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:23:30.773362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:23:30.789258 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:23:30.814407 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:23:30.822249 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:23:30.851579 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:23:30.880422 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:23:30.880971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:23:30.889283 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:23:30.899819 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:23:30.915545 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:23:30.927339 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:23:30.937136 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:23:30.948072 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:23:30.959258 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:23:30.969200 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:23:30.981030 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:23:30.991988 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:23:30.992060 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:23:31.001023 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:23:31.011628 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:23:31.023843 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:23:31.045784 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:23:31.057088 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:23:31.069303 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:23:31.079964 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:23:31.090034 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:23:31.099090 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:23:31.099149 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:23:31.109010 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:23:31.131135 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:23:31.169053 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
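At this point the socket units (dbus.socket, docker.socket, sshd.socket) are listening while the services behind them have not started yet; that socket-activation state can be inspected directly:

    systemctl list-sockets --all              # listen address, socket unit, and the service it activates
    systemctl status docker.socket sshd.socket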
Jan 29 11:23:31.186988 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:23:31.199208 coreos-metadata[1453]: Jan 29 11:23:31.199 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 29 11:23:31.205572 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:23:31.206830 coreos-metadata[1453]: Jan 29 11:23:31.206 INFO Fetch successful Jan 29 11:23:31.206830 coreos-metadata[1453]: Jan 29 11:23:31.206 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 29 11:23:31.208229 coreos-metadata[1453]: Jan 29 11:23:31.207 INFO Fetch successful Jan 29 11:23:31.208229 coreos-metadata[1453]: Jan 29 11:23:31.208 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 29 11:23:31.210104 coreos-metadata[1453]: Jan 29 11:23:31.209 INFO Fetch successful Jan 29 11:23:31.210104 coreos-metadata[1453]: Jan 29 11:23:31.209 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 29 11:23:31.210451 coreos-metadata[1453]: Jan 29 11:23:31.210 INFO Fetch successful Jan 29 11:23:31.214326 jq[1457]: false Jan 29 11:23:31.215990 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:23:31.224360 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:23:31.242715 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 11:23:31.261373 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:23:31.266825 extend-filesystems[1458]: Found loop4 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found loop5 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found loop6 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found loop7 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found sda Jan 29 11:23:31.266825 extend-filesystems[1458]: Found sda1 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found sda2 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found sda3 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found usr Jan 29 11:23:31.266825 extend-filesystems[1458]: Found sda4 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found sda6 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found sda7 Jan 29 11:23:31.266825 extend-filesystems[1458]: Found sda9 Jan 29 11:23:31.266825 extend-filesystems[1458]: Checking size of /dev/sda9 Jan 29 11:23:31.464008 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 29 11:23:31.464068 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 29 11:23:31.464105 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1347) Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:56 UTC 2025 (1): Starting Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: ---------------------------------------------------- Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: ntp-4 is maintained by Network Time Foundation, Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: corporation. 
Support and training for ntp-4 are Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: available at https://www.nwtime.org/support Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: ---------------------------------------------------- Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: proto: precision = 0.095 usec (-23) Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: basedate set to 2025-01-17 Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: gps base set to 2025-01-19 (week 2350) Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: Listen normally on 3 eth0 10.128.0.21:123 Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: Listen normally on 4 lo [::1]:123 Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: bind(21) AF_INET6 fe80::4001:aff:fe80:15%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:15%2#123 Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: failed to init interface for address fe80::4001:aff:fe80:15%2 Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: Listening on routing socket on fd #21 for interface updates Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:23:31.464227 ntpd[1461]: 29 Jan 11:23:31 ntpd[1461]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:23:31.324199 dbus-daemon[1454]: [system] SELinux support is enabled Jan 29 11:23:31.275441 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:23:31.469754 extend-filesystems[1458]: Resized partition /dev/sda9 Jan 29 11:23:31.330654 dbus-daemon[1454]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1386 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 11:23:31.296525 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:23:31.487569 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:23:31.487569 extend-filesystems[1479]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 11:23:31.487569 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 29 11:23:31.487569 extend-filesystems[1479]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 29 11:23:31.332810 ntpd[1461]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:04:56 UTC 2025 (1): Starting Jan 29 11:23:31.323069 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:23:31.534658 extend-filesystems[1458]: Resized filesystem in /dev/sda9 Jan 29 11:23:31.332847 ntpd[1461]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 11:23:31.388942 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). 
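The extend-filesystems output here is an on-line grow of the root ext4 filesystem: resize2fs notes that /dev/sda9 is mounted on / and enlarges it in place from 1617920 to 2538491 4k blocks. Done by hand, the equivalent step (assuming the underlying partition has already been enlarged) is simply:

    # grow a mounted ext4 filesystem to fill its (already enlarged) partition
    sudo resize2fs /dev/sda9
    df -h /   # verify the new size is visible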
Jan 29 11:23:31.332863 ntpd[1461]: ---------------------------------------------------- Jan 29 11:23:31.389739 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:23:31.545716 update_engine[1482]: I20250129 11:23:31.526163 1482 main.cc:92] Flatcar Update Engine starting Jan 29 11:23:31.545716 update_engine[1482]: I20250129 11:23:31.535056 1482 update_check_scheduler.cc:74] Next update check in 6m57s Jan 29 11:23:31.332877 ntpd[1461]: ntp-4 is maintained by Network Time Foundation, Jan 29 11:23:31.395531 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:23:31.547129 jq[1484]: true Jan 29 11:23:31.332890 ntpd[1461]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 11:23:31.410933 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:23:31.332905 ntpd[1461]: corporation. Support and training for ntp-4 are Jan 29 11:23:31.435221 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:23:31.332920 ntpd[1461]: available at https://www.nwtime.org/support Jan 29 11:23:31.500554 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:23:31.332933 ntpd[1461]: ---------------------------------------------------- Jan 29 11:23:31.500893 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:23:31.334980 ntpd[1461]: proto: precision = 0.095 usec (-23) Jan 29 11:23:31.501390 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:23:31.335394 ntpd[1461]: basedate set to 2025-01-17 Jan 29 11:23:31.501656 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:23:31.335415 ntpd[1461]: gps base set to 2025-01-19 (week 2350) Jan 29 11:23:31.340078 ntpd[1461]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 11:23:31.340145 ntpd[1461]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 11:23:31.340403 ntpd[1461]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 11:23:31.340462 ntpd[1461]: Listen normally on 3 eth0 10.128.0.21:123 Jan 29 11:23:31.340526 ntpd[1461]: Listen normally on 4 lo [::1]:123 Jan 29 11:23:31.340593 ntpd[1461]: bind(21) AF_INET6 fe80::4001:aff:fe80:15%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:23:31.340622 ntpd[1461]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:15%2#123 Jan 29 11:23:31.340642 ntpd[1461]: failed to init interface for address fe80::4001:aff:fe80:15%2 Jan 29 11:23:31.340687 ntpd[1461]: Listening on routing socket on fd #21 for interface updates Jan 29 11:23:31.342849 ntpd[1461]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:23:31.342890 ntpd[1461]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 11:23:31.556004 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:23:31.557122 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:23:31.586525 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:23:31.586903 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
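update_engine reports that its next update check is scheduled in 6m57s. On Flatcar the client tool can query or poke that state machine; the flags below are from memory of the CoreOS/Flatcar tooling and may differ by release, so treat this as a sketch:

    update_engine_client -status             # current operation, e.g. UPDATE_STATUS_IDLE
    update_engine_client -check_for_update   # request an immediate check instead of waiting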
Jan 29 11:23:31.640410 systemd-logind[1477]: Watching system buttons on /dev/input/event2 (Power Button) Jan 29 11:23:31.641683 jq[1491]: true Jan 29 11:23:31.640441 systemd-logind[1477]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 29 11:23:31.640472 systemd-logind[1477]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:23:31.641210 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:23:31.642866 systemd-logind[1477]: New seat seat0. Jan 29 11:23:31.651479 (ntainerd)[1493]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:23:31.653246 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:23:31.692284 dbus-daemon[1454]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 11:23:31.738646 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:23:31.756532 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:23:31.757693 tar[1490]: linux-amd64/helm Jan 29 11:23:31.756886 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:23:31.757134 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:23:31.778662 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 11:23:31.788992 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:23:31.789278 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:23:31.812568 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:23:31.941102 bash[1523]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:23:31.943458 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:23:31.966199 systemd[1]: Starting sshkeys.service... Jan 29 11:23:32.061901 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:23:32.081429 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:23:32.118924 dbus-daemon[1454]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 11:23:32.119146 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 29 11:23:32.121099 dbus-daemon[1454]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1510 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 11:23:32.144301 systemd[1]: Starting polkit.service - Authorization Manager... 
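locksmithd, started above as the cluster reboot manager, takes its reboot strategy from the update configuration file, which on Flatcar is /etc/flatcar/update.conf. A sketch of pinning the strategy (reboot, etcd-lock and off are the documented values):

    sudo tee -a /etc/flatcar/update.conf <<'EOF'
    REBOOT_STRATEGY=reboot
    EOF
    sudo systemctl restart locksmithd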
Jan 29 11:23:32.175653 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:23:32.180401 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:23:32.246028 coreos-metadata[1532]: Jan 29 11:23:32.243 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 29 11:23:32.252234 coreos-metadata[1532]: Jan 29 11:23:32.251 INFO Fetch failed with 404: resource not found Jan 29 11:23:32.252234 coreos-metadata[1532]: Jan 29 11:23:32.252 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 29 11:23:32.253496 coreos-metadata[1532]: Jan 29 11:23:32.253 INFO Fetch successful Jan 29 11:23:32.253496 coreos-metadata[1532]: Jan 29 11:23:32.253 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 29 11:23:32.254596 coreos-metadata[1532]: Jan 29 11:23:32.254 INFO Fetch failed with 404: resource not found Jan 29 11:23:32.254596 coreos-metadata[1532]: Jan 29 11:23:32.254 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 29 11:23:32.258682 coreos-metadata[1532]: Jan 29 11:23:32.255 INFO Fetch failed with 404: resource not found Jan 29 11:23:32.258682 coreos-metadata[1532]: Jan 29 11:23:32.257 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 29 11:23:32.258682 coreos-metadata[1532]: Jan 29 11:23:32.258 INFO Fetch successful Jan 29 11:23:32.267909 unknown[1532]: wrote ssh authorized keys file for user: core Jan 29 11:23:32.272184 polkitd[1539]: Started polkitd version 121 Jan 29 11:23:32.299057 polkitd[1539]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 11:23:32.299426 polkitd[1539]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 11:23:32.315859 polkitd[1539]: Finished loading, compiling and executing 2 rules Jan 29 11:23:32.318276 dbus-daemon[1454]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 11:23:32.318523 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 11:23:32.321903 polkitd[1539]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 11:23:32.333431 ntpd[1461]: bind(24) AF_INET6 fe80::4001:aff:fe80:15%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:23:32.334248 ntpd[1461]: 29 Jan 11:23:32 ntpd[1461]: bind(24) AF_INET6 fe80::4001:aff:fe80:15%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 11:23:32.334248 ntpd[1461]: 29 Jan 11:23:32 ntpd[1461]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:15%2#123 Jan 29 11:23:32.334248 ntpd[1461]: 29 Jan 11:23:32 ntpd[1461]: failed to init interface for address fe80::4001:aff:fe80:15%2 Jan 29 11:23:32.333480 ntpd[1461]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:15%2#123 Jan 29 11:23:32.333502 ntpd[1461]: failed to init interface for address fe80::4001:aff:fe80:15%2 Jan 29 11:23:32.344105 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:23:32.352023 update-ssh-keys[1550]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:23:32.357934 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:23:32.379110 systemd[1]: Finished sshkeys.service. Jan 29 11:23:32.399391 systemd[1]: Starting issuegen.service - Generate /run/issue... 
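The coreos-metadata fetches above walk the GCE metadata endpoints for SSH keys, tolerating 404s on the legacy sshKeys attributes and succeeding on ssh-keys. The same endpoints can be queried by hand from inside the instance; the Metadata-Flavor header is mandatory:

    curl -s -H 'Metadata-Flavor: Google' \
      'http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys'
    curl -s -H 'Metadata-Flavor: Google' \
      'http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys'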
Jan 29 11:23:32.400301 systemd-hostnamed[1510]: Hostname set to <ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal> (transient) Jan 29 11:23:32.401697 systemd-resolved[1316]: System hostname changed to 'ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal'. Jan 29 11:23:32.433292 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:23:32.435115 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:23:32.461432 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:23:32.516936 containerd[1493]: time="2025-01-29T11:23:32.516804116Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:23:32.521701 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:23:32.544037 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:23:32.559432 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:23:32.569315 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:23:32.593917 containerd[1493]: time="2025-01-29T11:23:32.593218027Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:23:32.597201 containerd[1493]: time="2025-01-29T11:23:32.597119403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:23:32.597201 containerd[1493]: time="2025-01-29T11:23:32.597200390Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:23:32.597428 containerd[1493]: time="2025-01-29T11:23:32.597236248Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:23:32.597864 containerd[1493]: time="2025-01-29T11:23:32.597507253Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:23:32.597864 containerd[1493]: time="2025-01-29T11:23:32.597570481Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:23:32.597864 containerd[1493]: time="2025-01-29T11:23:32.597692526Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:23:32.597864 containerd[1493]: time="2025-01-29T11:23:32.597720020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:23:32.598360 containerd[1493]: time="2025-01-29T11:23:32.598076629Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:23:32.598360 containerd[1493]: time="2025-01-29T11:23:32.598112050Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:23:32.598360 containerd[1493]: time="2025-01-29T11:23:32.598143649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:23:32.598360 containerd[1493]: time="2025-01-29T11:23:32.598163196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:23:32.598360 containerd[1493]: time="2025-01-29T11:23:32.598297208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:23:32.598971 containerd[1493]: time="2025-01-29T11:23:32.598662164Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:23:32.598971 containerd[1493]: time="2025-01-29T11:23:32.598917306Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:23:32.598971 containerd[1493]: time="2025-01-29T11:23:32.598946327Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:23:32.599147 containerd[1493]: time="2025-01-29T11:23:32.599085823Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:23:32.599193 containerd[1493]: time="2025-01-29T11:23:32.599167649Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:23:32.608607 containerd[1493]: time="2025-01-29T11:23:32.607931367Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:23:32.608607 containerd[1493]: time="2025-01-29T11:23:32.608228192Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:23:32.608607 containerd[1493]: time="2025-01-29T11:23:32.608262144Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:23:32.608607 containerd[1493]: time="2025-01-29T11:23:32.608291455Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:23:32.608607 containerd[1493]: time="2025-01-29T11:23:32.608319258Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:23:32.608607 containerd[1493]: time="2025-01-29T11:23:32.608580165Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:23:32.609441 containerd[1493]: time="2025-01-29T11:23:32.609402948Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:23:32.609645 containerd[1493]: time="2025-01-29T11:23:32.609615650Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609721888Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609758187Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609784034Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609824981Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609847930Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609884989Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609909985Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609939446Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609961023Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.609984877Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.610019212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.610045750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.610067257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.610410 containerd[1493]: time="2025-01-29T11:23:32.610089504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610110787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610133538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610152754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610174706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610197928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610223018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610243572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610268662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610290605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610316480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610352175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610375129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610394745Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:23:32.611070 containerd[1493]: time="2025-01-29T11:23:32.610587180Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:23:32.611637 containerd[1493]: time="2025-01-29T11:23:32.610718832Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:23:32.611637 containerd[1493]: time="2025-01-29T11:23:32.610742801Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:23:32.611637 containerd[1493]: time="2025-01-29T11:23:32.610766647Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:23:32.611637 containerd[1493]: time="2025-01-29T11:23:32.610784087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:23:32.611637 containerd[1493]: time="2025-01-29T11:23:32.610825526Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:23:32.611637 containerd[1493]: time="2025-01-29T11:23:32.610846584Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:23:32.611637 containerd[1493]: time="2025-01-29T11:23:32.610864692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 11:23:32.611969 containerd[1493]: time="2025-01-29T11:23:32.611387905Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:23:32.611969 containerd[1493]: time="2025-01-29T11:23:32.611470158Z" level=info msg="Connect containerd service" Jan 29 11:23:32.611969 containerd[1493]: time="2025-01-29T11:23:32.611519736Z" level=info msg="using legacy CRI server" Jan 29 11:23:32.611969 containerd[1493]: time="2025-01-29T11:23:32.611532036Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:23:32.611969 containerd[1493]: time="2025-01-29T11:23:32.611733187Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:23:32.614813 containerd[1493]: time="2025-01-29T11:23:32.613023150Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:23:32.614813 
containerd[1493]: time="2025-01-29T11:23:32.613210740Z" level=info msg="Start subscribing containerd event" Jan 29 11:23:32.614813 containerd[1493]: time="2025-01-29T11:23:32.613286143Z" level=info msg="Start recovering state" Jan 29 11:23:32.614813 containerd[1493]: time="2025-01-29T11:23:32.613385025Z" level=info msg="Start event monitor" Jan 29 11:23:32.614813 containerd[1493]: time="2025-01-29T11:23:32.613403907Z" level=info msg="Start snapshots syncer" Jan 29 11:23:32.614813 containerd[1493]: time="2025-01-29T11:23:32.613420362Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:23:32.614813 containerd[1493]: time="2025-01-29T11:23:32.613432249Z" level=info msg="Start streaming server" Jan 29 11:23:32.614813 containerd[1493]: time="2025-01-29T11:23:32.614407461Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:23:32.614813 containerd[1493]: time="2025-01-29T11:23:32.614622743Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:23:32.615807 containerd[1493]: time="2025-01-29T11:23:32.615762291Z" level=info msg="containerd successfully booted in 0.100572s" Jan 29 11:23:32.615916 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:23:32.718036 systemd-networkd[1386]: eth0: Gained IPv6LL Jan 29 11:23:32.725128 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:23:32.737913 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:23:32.761988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:23:32.765065 tar[1490]: linux-amd64/LICENSE Jan 29 11:23:32.765065 tar[1490]: linux-amd64/README.md Jan 29 11:23:32.778393 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:23:32.798888 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 29 11:23:32.831583 init.sh[1573]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 29 11:23:32.831583 init.sh[1573]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 29 11:23:32.833289 init.sh[1573]: + /usr/bin/google_instance_setup Jan 29 11:23:32.843737 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:23:32.857084 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:23:33.329939 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:23:33.349936 systemd[1]: Started sshd@0-10.128.0.21:22-139.178.68.195:44494.service - OpenSSH per-connection server daemon (139.178.68.195:44494). Jan 29 11:23:33.393532 instance-setup[1580]: INFO Running google_set_multiqueue. Jan 29 11:23:33.426852 instance-setup[1580]: INFO Set channels for eth0 to 2. Jan 29 11:23:33.433236 instance-setup[1580]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Jan 29 11:23:33.435516 instance-setup[1580]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Jan 29 11:23:33.436074 instance-setup[1580]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Jan 29 11:23:33.438312 instance-setup[1580]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Jan 29 11:23:33.438622 instance-setup[1580]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Jan 29 11:23:33.440620 instance-setup[1580]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Jan 29 11:23:33.440980 instance-setup[1580]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
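The long CRI config dump a few entries back shows the settings containerd is actually running with, notably runc driven through io.containerd.runc.v2 with SystemdCgroup:true and registry.k8s.io/pause:3.8 as the sandbox image. Expressed as a /etc/containerd/config.toml fragment (version 2 schema), the equivalent looks roughly like this; the snippet below only prints the fragment and is illustrative, not the file this image actually ships:

    cat <<'EOF'
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = true
    EOF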
Jan 29 11:23:33.442853 instance-setup[1580]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Jan 29 11:23:33.453004 instance-setup[1580]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 29 11:23:33.457915 instance-setup[1580]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 29 11:23:33.459988 instance-setup[1580]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 29 11:23:33.460043 instance-setup[1580]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 29 11:23:33.480993 init.sh[1573]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 29 11:23:33.662520 startup-script[1619]: INFO Starting startup scripts. Jan 29 11:23:33.669914 startup-script[1619]: INFO No startup scripts found in metadata. Jan 29 11:23:33.670002 startup-script[1619]: INFO Finished running startup scripts. Jan 29 11:23:33.698978 init.sh[1573]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 29 11:23:33.698978 init.sh[1573]: + daemon_pids=() Jan 29 11:23:33.699225 init.sh[1573]: + for d in accounts clock_skew network Jan 29 11:23:33.699690 init.sh[1573]: + daemon_pids+=($!) Jan 29 11:23:33.699690 init.sh[1573]: + for d in accounts clock_skew network Jan 29 11:23:33.699831 init.sh[1622]: + /usr/bin/google_accounts_daemon Jan 29 11:23:33.700168 init.sh[1573]: + daemon_pids+=($!) Jan 29 11:23:33.700168 init.sh[1573]: + for d in accounts clock_skew network Jan 29 11:23:33.700666 init.sh[1623]: + /usr/bin/google_clock_skew_daemon Jan 29 11:23:33.701686 init.sh[1573]: + daemon_pids+=($!) Jan 29 11:23:33.701686 init.sh[1573]: + NOTIFY_SOCKET=/run/systemd/notify Jan 29 11:23:33.701686 init.sh[1573]: + /usr/bin/systemd-notify --ready Jan 29 11:23:33.701884 init.sh[1624]: + /usr/bin/google_network_daemon Jan 29 11:23:33.736719 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 29 11:23:33.744250 sshd[1589]: Accepted publickey for core from 139.178.68.195 port 44494 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:23:33.750211 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:33.765269 init.sh[1573]: + wait -n 1622 1623 1624 Jan 29 11:23:33.780593 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:23:33.803293 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:23:33.821019 systemd-logind[1477]: New session 1 of user core. Jan 29 11:23:33.847148 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:23:33.877385 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:23:33.924546 (systemd)[1628]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:23:34.189890 systemd[1628]: Queued start job for default target default.target. Jan 29 11:23:34.196488 systemd[1628]: Created slice app.slice - User Application Slice. Jan 29 11:23:34.196546 systemd[1628]: Reached target paths.target - Paths. Jan 29 11:23:34.196572 systemd[1628]: Reached target timers.target - Timers. Jan 29 11:23:34.203032 systemd[1628]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:23:34.236516 systemd[1628]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:23:34.236749 systemd[1628]: Reached target sockets.target - Sockets. 
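The startup-script runner above checks instance metadata and finds no startup scripts, which is normal for this CI image. A script would ordinarily be attached from outside the VM, for example with gcloud (instance name and zone below are placeholders):

    gcloud compute instances add-metadata ci-example-instance \
      --zone us-central1-a \
      --metadata startup-script='#! /bin/bash
    echo "hello from startup" > /var/log/startup-hello.log'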
Jan 29 11:23:34.236779 systemd[1628]: Reached target basic.target - Basic System. Jan 29 11:23:34.236888 systemd[1628]: Reached target default.target - Main User Target. Jan 29 11:23:34.236945 systemd[1628]: Startup finished in 287ms. Jan 29 11:23:34.237093 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:23:34.253292 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:23:34.267897 google-clock-skew[1623]: INFO Starting Google Clock Skew daemon. Jan 29 11:23:34.285595 google-clock-skew[1623]: INFO Clock drift token has changed: 0. Jan 29 11:23:34.332417 google-networking[1624]: INFO Starting Google Networking daemon. Jan 29 11:23:34.367756 groupadd[1644]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 29 11:23:34.372023 groupadd[1644]: group added to /etc/gshadow: name=google-sudoers Jan 29 11:23:34.449949 groupadd[1644]: new group: name=google-sudoers, GID=1000 Jan 29 11:23:34.510285 systemd[1]: Started sshd@1-10.128.0.21:22-139.178.68.195:44500.service - OpenSSH per-connection server daemon (139.178.68.195:44500). Jan 29 11:23:34.543435 google-accounts[1622]: INFO Starting Google Accounts daemon. Jan 29 11:23:34.571098 google-accounts[1622]: WARNING OS Login not installed. Jan 29 11:23:34.573457 google-accounts[1622]: INFO Creating a new user account for 0. Jan 29 11:23:34.580823 init.sh[1656]: useradd: invalid user name '0': use --badname to ignore Jan 29 11:23:34.581705 google-accounts[1622]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 29 11:23:34.730850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:34.742899 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:23:34.746596 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:23:34.754347 systemd[1]: Startup finished in 1.053s (kernel) + 10.552s (initrd) + 9.507s (userspace) = 21.113s. Jan 29 11:23:34.851408 sshd[1654]: Accepted publickey for core from 139.178.68.195 port 44500 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:23:34.853441 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:34.862192 systemd-logind[1477]: New session 2 of user core. Jan 29 11:23:34.867077 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:23:35.001146 google-clock-skew[1623]: INFO Synced system time with hardware clock. Jan 29 11:23:35.004166 systemd-resolved[1316]: Clock change detected. Flushing caches. Jan 29 11:23:35.049215 sshd[1669]: Connection closed by 139.178.68.195 port 44500 Jan 29 11:23:35.050027 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:35.055954 systemd[1]: sshd@1-10.128.0.21:22-139.178.68.195:44500.service: Deactivated successfully. Jan 29 11:23:35.058759 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:23:35.061575 systemd-logind[1477]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:23:35.064079 systemd-logind[1477]: Removed session 2. Jan 29 11:23:35.106911 systemd[1]: Started sshd@2-10.128.0.21:22-139.178.68.195:58438.service - OpenSSH per-connection server daemon (139.178.68.195:58438). 
Jan 29 11:23:35.309422 ntpd[1461]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:15%2]:123 Jan 29 11:23:35.311782 ntpd[1461]: 29 Jan 11:23:35 ntpd[1461]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:15%2]:123 Jan 29 11:23:35.406272 sshd[1678]: Accepted publickey for core from 139.178.68.195 port 58438 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:23:35.408751 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:35.416801 systemd-logind[1477]: New session 3 of user core. Jan 29 11:23:35.422638 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:23:35.617571 sshd[1681]: Connection closed by 139.178.68.195 port 58438 Jan 29 11:23:35.617741 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:35.625237 systemd[1]: sshd@2-10.128.0.21:22-139.178.68.195:58438.service: Deactivated successfully. Jan 29 11:23:35.628444 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:23:35.629893 systemd-logind[1477]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:23:35.631908 systemd-logind[1477]: Removed session 3. Jan 29 11:23:35.672285 systemd[1]: Started sshd@3-10.128.0.21:22-139.178.68.195:58440.service - OpenSSH per-connection server daemon (139.178.68.195:58440). Jan 29 11:23:35.736453 kubelet[1664]: E0129 11:23:35.736406 1664 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:23:35.738868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:23:35.739088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:23:35.739685 systemd[1]: kubelet.service: Consumed 1.262s CPU time. Jan 29 11:23:35.993219 sshd[1687]: Accepted publickey for core from 139.178.68.195 port 58440 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:23:35.995148 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:36.002178 systemd-logind[1477]: New session 4 of user core. Jan 29 11:23:36.007616 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:23:36.213450 sshd[1690]: Connection closed by 139.178.68.195 port 58440 Jan 29 11:23:36.214279 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:36.219407 systemd[1]: sshd@3-10.128.0.21:22-139.178.68.195:58440.service: Deactivated successfully. Jan 29 11:23:36.221618 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:23:36.222681 systemd-logind[1477]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:23:36.224052 systemd-logind[1477]: Removed session 4. Jan 29 11:23:36.265601 systemd[1]: Started sshd@4-10.128.0.21:22-139.178.68.195:58454.service - OpenSSH per-connection server daemon (139.178.68.195:58454). Jan 29 11:23:36.570079 sshd[1695]: Accepted publickey for core from 139.178.68.195 port 58454 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:23:36.571800 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:36.577416 systemd-logind[1477]: New session 5 of user core. Jan 29 11:23:36.584632 systemd[1]: Started session-5.scope - Session 5 of User core. 
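The kubelet exit above is the expected first-boot behaviour on a node that has not been joined yet: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so the unit keeps failing until that happens. Purely for illustration, a minimal KubeletConfiguration of the kind kubeadm generates could be dropped in by hand (the values are placeholders, not what this CI node will eventually use):

    sudo mkdir -p /var/lib/kubelet
    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    EOF
    sudo systemctl restart kubelet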
Jan 29 11:23:36.766160 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:23:36.766704 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:23:36.779252 sudo[1698]: pam_unix(sudo:session): session closed for user root Jan 29 11:23:36.822276 sshd[1697]: Connection closed by 139.178.68.195 port 58454 Jan 29 11:23:36.823913 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:36.828321 systemd[1]: sshd@4-10.128.0.21:22-139.178.68.195:58454.service: Deactivated successfully. Jan 29 11:23:36.830657 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:23:36.832683 systemd-logind[1477]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:23:36.834543 systemd-logind[1477]: Removed session 5. Jan 29 11:23:36.879073 systemd[1]: Started sshd@5-10.128.0.21:22-139.178.68.195:58456.service - OpenSSH per-connection server daemon (139.178.68.195:58456). Jan 29 11:23:37.177945 sshd[1703]: Accepted publickey for core from 139.178.68.195 port 58456 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:23:37.179870 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:37.186772 systemd-logind[1477]: New session 6 of user core. Jan 29 11:23:37.196753 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:23:37.361441 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:23:37.362223 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:23:37.367871 sudo[1707]: pam_unix(sudo:session): session closed for user root Jan 29 11:23:37.382098 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:23:37.382614 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:23:37.402911 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:23:37.440999 augenrules[1729]: No rules Jan 29 11:23:37.442292 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:23:37.442575 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:23:37.443980 sudo[1706]: pam_unix(sudo:session): session closed for user root Jan 29 11:23:37.487376 sshd[1705]: Connection closed by 139.178.68.195 port 58456 Jan 29 11:23:37.488195 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Jan 29 11:23:37.492491 systemd[1]: sshd@5-10.128.0.21:22-139.178.68.195:58456.service: Deactivated successfully. Jan 29 11:23:37.494903 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:23:37.496808 systemd-logind[1477]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:23:37.498206 systemd-logind[1477]: Removed session 6. Jan 29 11:23:37.546974 systemd[1]: Started sshd@6-10.128.0.21:22-139.178.68.195:58468.service - OpenSSH per-connection server daemon (139.178.68.195:58468). Jan 29 11:23:37.850994 sshd[1737]: Accepted publickey for core from 139.178.68.195 port 58468 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:23:37.853072 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:23:37.860095 systemd-logind[1477]: New session 7 of user core. Jan 29 11:23:37.866606 systemd[1]: Started session-7.scope - Session 7 of User core. 
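Both augenrules runs in this stretch report "No rules", so audit-rules.service loads an empty set. Rules live under /etc/audit/rules.d/ and are compiled by augenrules; a small example (watching /etc/passwd, the key name is arbitrary):

    echo '-w /etc/passwd -p wa -k passwd-watch' | sudo tee /etc/audit/rules.d/10-passwd.rules
    sudo augenrules --load   # merge rules.d/ into audit.rules and load it
    sudo auditctl -l         # list the rules now active in the kernel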
Jan 29 11:23:38.031140 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:23:38.031657 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:23:38.483738 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:23:38.487055 (dockerd)[1757]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:23:38.918509 dockerd[1757]: time="2025-01-29T11:23:38.918130622Z" level=info msg="Starting up" Jan 29 11:23:39.038129 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport559787354-merged.mount: Deactivated successfully. Jan 29 11:23:39.131574 dockerd[1757]: time="2025-01-29T11:23:39.131514636Z" level=info msg="Loading containers: start." Jan 29 11:23:39.360365 kernel: Initializing XFRM netlink socket Jan 29 11:23:39.483329 systemd-networkd[1386]: docker0: Link UP Jan 29 11:23:39.523948 dockerd[1757]: time="2025-01-29T11:23:39.523810727Z" level=info msg="Loading containers: done." Jan 29 11:23:39.549072 dockerd[1757]: time="2025-01-29T11:23:39.548993650Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:23:39.549328 dockerd[1757]: time="2025-01-29T11:23:39.549133599Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:23:39.549328 dockerd[1757]: time="2025-01-29T11:23:39.549299383Z" level=info msg="Daemon has completed initialization" Jan 29 11:23:39.593916 dockerd[1757]: time="2025-01-29T11:23:39.593837166Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:23:39.594882 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:23:40.652859 containerd[1493]: time="2025-01-29T11:23:40.652806371Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 11:23:41.132021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13840496.mount: Deactivated successfully. 
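docker.service logs a notice because its unit file references optional environment variables (DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU) that no environment file defines; the daemon then starts normally with the overlay2 storage driver. If the notice is unwanted, a hedged sketch of a drop-in that defines them as empty — the drop-in path and contents are illustrative, not taken from the log:
    # Hypothetical drop-in; only needed to silence the "referenced but unset" notice.
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/10-env.conf <<'EOF'
    [Service]
    Environment=DOCKER_CGROUPS= DOCKER_OPTS= DOCKER_OPT_BIP= DOCKER_OPT_IPMASQ= DOCKER_OPT_MTU=
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker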
Jan 29 11:23:42.969916 containerd[1493]: time="2025-01-29T11:23:42.969837428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:42.971576 containerd[1493]: time="2025-01-29T11:23:42.971519734Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32683640" Jan 29 11:23:42.973077 containerd[1493]: time="2025-01-29T11:23:42.973007175Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:42.977204 containerd[1493]: time="2025-01-29T11:23:42.977120701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:42.979038 containerd[1493]: time="2025-01-29T11:23:42.978752510Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.32589199s" Jan 29 11:23:42.979038 containerd[1493]: time="2025-01-29T11:23:42.978809580Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 11:23:43.010383 containerd[1493]: time="2025-01-29T11:23:43.010297510Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 11:23:44.802473 containerd[1493]: time="2025-01-29T11:23:44.802376421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:44.804061 containerd[1493]: time="2025-01-29T11:23:44.803980139Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29607679" Jan 29 11:23:44.805708 containerd[1493]: time="2025-01-29T11:23:44.805626200Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:44.809917 containerd[1493]: time="2025-01-29T11:23:44.809832723Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:44.811833 containerd[1493]: time="2025-01-29T11:23:44.811398379Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.801018646s" Jan 29 11:23:44.811833 containerd[1493]: time="2025-01-29T11:23:44.811445631Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 11:23:44.842698 
containerd[1493]: time="2025-01-29T11:23:44.842645492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 11:23:45.901250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:23:45.912650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:23:46.214870 containerd[1493]: time="2025-01-29T11:23:46.213161280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:46.214870 containerd[1493]: time="2025-01-29T11:23:46.214635309Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17784980" Jan 29 11:23:46.216996 containerd[1493]: time="2025-01-29T11:23:46.216508019Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:46.222486 containerd[1493]: time="2025-01-29T11:23:46.222442936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:46.226469 containerd[1493]: time="2025-01-29T11:23:46.226009276Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.3833028s" Jan 29 11:23:46.226469 containerd[1493]: time="2025-01-29T11:23:46.226051233Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 11:23:46.245473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:46.256949 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:23:46.264443 containerd[1493]: time="2025-01-29T11:23:46.263143248Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:23:46.327544 kubelet[2033]: E0129 11:23:46.327397 2033 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:23:46.334018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:23:46.334276 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:23:47.343423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409476812.mount: Deactivated successfully. 
Jan 29 11:23:47.961964 containerd[1493]: time="2025-01-29T11:23:47.961896230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:47.963622 containerd[1493]: time="2025-01-29T11:23:47.963542120Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29060232" Jan 29 11:23:47.965139 containerd[1493]: time="2025-01-29T11:23:47.965056000Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:47.969125 containerd[1493]: time="2025-01-29T11:23:47.968817892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:47.969872 containerd[1493]: time="2025-01-29T11:23:47.969825101Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.706626884s" Jan 29 11:23:47.969985 containerd[1493]: time="2025-01-29T11:23:47.969878222Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 11:23:48.001930 containerd[1493]: time="2025-01-29T11:23:48.001869925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 11:23:48.414907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705898680.mount: Deactivated successfully. 
Jan 29 11:23:49.515327 containerd[1493]: time="2025-01-29T11:23:49.515252064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:49.517056 containerd[1493]: time="2025-01-29T11:23:49.516986098Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 29 11:23:49.518405 containerd[1493]: time="2025-01-29T11:23:49.518359966Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:49.522176 containerd[1493]: time="2025-01-29T11:23:49.522101912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:49.523780 containerd[1493]: time="2025-01-29T11:23:49.523594629Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.521668588s" Jan 29 11:23:49.523780 containerd[1493]: time="2025-01-29T11:23:49.523644303Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 11:23:49.554019 containerd[1493]: time="2025-01-29T11:23:49.553926358Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 11:23:49.893562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383606079.mount: Deactivated successfully. 
Jan 29 11:23:49.902929 containerd[1493]: time="2025-01-29T11:23:49.902849481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:49.904163 containerd[1493]: time="2025-01-29T11:23:49.904099979Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Jan 29 11:23:49.905711 containerd[1493]: time="2025-01-29T11:23:49.905626536Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:49.909167 containerd[1493]: time="2025-01-29T11:23:49.909089867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:49.910274 containerd[1493]: time="2025-01-29T11:23:49.910227435Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 356.237519ms" Jan 29 11:23:49.910421 containerd[1493]: time="2025-01-29T11:23:49.910279800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 11:23:49.940478 containerd[1493]: time="2025-01-29T11:23:49.940423473Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 11:23:50.356220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066801271.mount: Deactivated successfully. Jan 29 11:23:52.582995 containerd[1493]: time="2025-01-29T11:23:52.582920930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:52.584775 containerd[1493]: time="2025-01-29T11:23:52.584704188Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Jan 29 11:23:52.586081 containerd[1493]: time="2025-01-29T11:23:52.585996826Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:52.590854 containerd[1493]: time="2025-01-29T11:23:52.590773380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:23:52.593035 containerd[1493]: time="2025-01-29T11:23:52.592406151Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.651934848s" Jan 29 11:23:52.593035 containerd[1493]: time="2025-01-29T11:23:52.592502624Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 11:23:56.400769 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
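By this point containerd has pulled the full set of control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd) that the static pods further below run from. A hedged sketch of how the same images could be inspected or pre-pulled by hand through the CRI socket; crictl usage is illustrative and assumes the default containerd endpoint:
    # Illustrative only; the log shows containerd pulling these on demand.
    crictl pull registry.k8s.io/kube-apiserver:v1.30.9
    crictl pull registry.k8s.io/etcd:3.5.12-0
    crictl images | grep registry.k8s.io     # lists the pulled images and their sizes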
Jan 29 11:23:56.409479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:23:56.669631 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:56.678871 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:23:56.759485 kubelet[2222]: E0129 11:23:56.759292 2222 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:23:56.763816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:23:56.764233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:23:57.677082 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:57.687766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:23:57.715919 systemd[1]: Reloading requested from client PID 2236 ('systemctl') (unit session-7.scope)... Jan 29 11:23:57.716107 systemd[1]: Reloading... Jan 29 11:23:57.898405 zram_generator::config[2278]: No configuration found. Jan 29 11:23:58.038783 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:23:58.144912 systemd[1]: Reloading finished in 427 ms. Jan 29 11:23:58.215427 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:23:58.215570 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:23:58.215949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:58.222822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:23:58.529295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:23:58.543061 (kubelet)[2329]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:23:58.598203 kubelet[2329]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:23:58.598203 kubelet[2329]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:23:58.598203 kubelet[2329]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:23:58.600008 kubelet[2329]: I0129 11:23:58.599937 2329 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:23:59.042951 kubelet[2329]: I0129 11:23:59.042892 2329 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:23:59.042951 kubelet[2329]: I0129 11:23:59.042930 2329 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:23:59.043275 kubelet[2329]: I0129 11:23:59.043238 2329 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:23:59.069040 kubelet[2329]: I0129 11:23:59.068999 2329 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:23:59.072516 kubelet[2329]: E0129 11:23:59.071124 2329 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.090200 kubelet[2329]: I0129 11:23:59.090162 2329 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:23:59.094406 kubelet[2329]: I0129 11:23:59.094300 2329 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:23:59.094666 kubelet[2329]: I0129 11:23:59.094385 2329 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:23:59.094867 kubelet[2329]: I0129 11:23:59.094678 2329 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:23:59.094867 kubelet[2329]: I0129 11:23:59.094695 2329 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:23:59.094988 kubelet[2329]: I0129 11:23:59.094874 2329 
state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:23:59.096377 kubelet[2329]: I0129 11:23:59.096208 2329 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:23:59.096377 kubelet[2329]: I0129 11:23:59.096243 2329 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:23:59.096377 kubelet[2329]: I0129 11:23:59.096276 2329 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:23:59.096377 kubelet[2329]: I0129 11:23:59.096310 2329 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:23:59.098369 kubelet[2329]: W0129 11:23:59.098003 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.098369 kubelet[2329]: E0129 11:23:59.098094 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.102776 kubelet[2329]: W0129 11:23:59.102713 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.102901 kubelet[2329]: E0129 11:23:59.102788 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.103629 kubelet[2329]: I0129 11:23:59.103255 2329 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:23:59.108152 kubelet[2329]: I0129 11:23:59.106633 2329 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:23:59.108152 kubelet[2329]: W0129 11:23:59.106740 2329 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 11:23:59.108152 kubelet[2329]: I0129 11:23:59.107605 2329 server.go:1264] "Started kubelet" Jan 29 11:23:59.113224 kubelet[2329]: I0129 11:23:59.112105 2329 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:23:59.113605 kubelet[2329]: I0129 11:23:59.113564 2329 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:23:59.115194 kubelet[2329]: I0129 11:23:59.114884 2329 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:23:59.115306 kubelet[2329]: I0129 11:23:59.115267 2329 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:23:59.115981 kubelet[2329]: E0129 11:23:59.115505 2329 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal.181f2611ce688425 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,UID:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,},FirstTimestamp:2025-01-29 11:23:59.107556389 +0000 UTC m=+0.559462406,LastTimestamp:2025-01-29 11:23:59.107556389 +0000 UTC m=+0.559462406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,}" Jan 29 11:23:59.117656 kubelet[2329]: I0129 11:23:59.117362 2329 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:23:59.125634 kubelet[2329]: E0129 11:23:59.125601 2329 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:23:59.126469 kubelet[2329]: E0129 11:23:59.125900 2329 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" not found" Jan 29 11:23:59.126469 kubelet[2329]: I0129 11:23:59.125951 2329 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:23:59.126469 kubelet[2329]: I0129 11:23:59.126071 2329 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:23:59.126469 kubelet[2329]: I0129 11:23:59.126137 2329 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:23:59.127479 kubelet[2329]: W0129 11:23:59.127420 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.127643 kubelet[2329]: E0129 11:23:59.127625 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.127862 kubelet[2329]: E0129 11:23:59.127827 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.21:6443: connect: connection refused" interval="200ms" Jan 29 11:23:59.128217 kubelet[2329]: I0129 11:23:59.128184 2329 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:23:59.128454 kubelet[2329]: I0129 11:23:59.128430 2329 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:23:59.130759 kubelet[2329]: I0129 11:23:59.130479 2329 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:23:59.167386 kubelet[2329]: I0129 11:23:59.166705 2329 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:23:59.167386 kubelet[2329]: I0129 11:23:59.166733 2329 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:23:59.167386 kubelet[2329]: I0129 11:23:59.166774 2329 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:23:59.169626 kubelet[2329]: I0129 11:23:59.169593 2329 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:23:59.172398 kubelet[2329]: I0129 11:23:59.172374 2329 policy_none.go:49] "None policy: Start" Jan 29 11:23:59.173117 kubelet[2329]: I0129 11:23:59.173089 2329 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:23:59.173228 kubelet[2329]: I0129 11:23:59.173127 2329 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:23:59.173228 kubelet[2329]: I0129 11:23:59.173153 2329 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:23:59.173332 kubelet[2329]: E0129 11:23:59.173224 2329 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:23:59.174555 kubelet[2329]: W0129 11:23:59.174486 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.174689 kubelet[2329]: E0129 11:23:59.174572 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:23:59.175049 kubelet[2329]: I0129 11:23:59.175026 2329 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:23:59.175176 kubelet[2329]: I0129 11:23:59.175162 2329 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:23:59.185641 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:23:59.196438 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:23:59.200796 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:23:59.212654 kubelet[2329]: I0129 11:23:59.212614 2329 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:23:59.213367 kubelet[2329]: I0129 11:23:59.212903 2329 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:23:59.213367 kubelet[2329]: I0129 11:23:59.213090 2329 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:23:59.215123 kubelet[2329]: E0129 11:23:59.215092 2329 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" not found" Jan 29 11:23:59.234731 kubelet[2329]: I0129 11:23:59.234697 2329 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.235222 kubelet[2329]: E0129 11:23:59.235168 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.21:6443/api/v1/nodes\": dial tcp 10.128.0.21:6443: connect: connection refused" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.273539 kubelet[2329]: I0129 11:23:59.273457 2329 topology_manager.go:215] "Topology Admit Handler" podUID="5f70c97af6090a2815accaac8368d373" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.280746 kubelet[2329]: I0129 11:23:59.280671 2329 topology_manager.go:215] "Topology Admit Handler" podUID="6fe04758a50b923c5f83f63e5f36233a" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.292432 
kubelet[2329]: I0129 11:23:59.292103 2329 topology_manager.go:215] "Topology Admit Handler" podUID="c8195ae6b241e202191741cac01cffab" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.300263 systemd[1]: Created slice kubepods-burstable-pod5f70c97af6090a2815accaac8368d373.slice - libcontainer container kubepods-burstable-pod5f70c97af6090a2815accaac8368d373.slice. Jan 29 11:23:59.315503 systemd[1]: Created slice kubepods-burstable-pod6fe04758a50b923c5f83f63e5f36233a.slice - libcontainer container kubepods-burstable-pod6fe04758a50b923c5f83f63e5f36233a.slice. Jan 29 11:23:59.327265 kubelet[2329]: I0129 11:23:59.326897 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.327265 kubelet[2329]: I0129 11:23:59.326948 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8195ae6b241e202191741cac01cffab-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"c8195ae6b241e202191741cac01cffab\") " pod="kube-system/kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.327265 kubelet[2329]: I0129 11:23:59.326986 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8195ae6b241e202191741cac01cffab-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"c8195ae6b241e202191741cac01cffab\") " pod="kube-system/kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.327265 kubelet[2329]: I0129 11:23:59.327018 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.327609 kubelet[2329]: I0129 11:23:59.327048 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.327609 kubelet[2329]: I0129 11:23:59.327077 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8195ae6b241e202191741cac01cffab-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"c8195ae6b241e202191741cac01cffab\") " 
pod="kube-system/kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.327609 kubelet[2329]: I0129 11:23:59.327105 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.327609 kubelet[2329]: I0129 11:23:59.327134 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.327810 kubelet[2329]: I0129 11:23:59.327167 2329 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fe04758a50b923c5f83f63e5f36233a-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"6fe04758a50b923c5f83f63e5f36233a\") " pod="kube-system/kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.328867 kubelet[2329]: E0129 11:23:59.328804 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.21:6443: connect: connection refused" interval="400ms" Jan 29 11:23:59.330634 systemd[1]: Created slice kubepods-burstable-podc8195ae6b241e202191741cac01cffab.slice - libcontainer container kubepods-burstable-podc8195ae6b241e202191741cac01cffab.slice. 
Jan 29 11:23:59.440986 kubelet[2329]: I0129 11:23:59.440936 2329 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.441385 kubelet[2329]: E0129 11:23:59.441322 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.21:6443/api/v1/nodes\": dial tcp 10.128.0.21:6443: connect: connection refused" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.613476 containerd[1493]: time="2025-01-29T11:23:59.613323297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,Uid:5f70c97af6090a2815accaac8368d373,Namespace:kube-system,Attempt:0,}" Jan 29 11:23:59.627229 containerd[1493]: time="2025-01-29T11:23:59.627165521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,Uid:6fe04758a50b923c5f83f63e5f36233a,Namespace:kube-system,Attempt:0,}" Jan 29 11:23:59.634886 containerd[1493]: time="2025-01-29T11:23:59.634836220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,Uid:c8195ae6b241e202191741cac01cffab,Namespace:kube-system,Attempt:0,}" Jan 29 11:23:59.730280 kubelet[2329]: E0129 11:23:59.730204 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.21:6443: connect: connection refused" interval="800ms" Jan 29 11:23:59.846578 kubelet[2329]: I0129 11:23:59.846514 2329 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.846970 kubelet[2329]: E0129 11:23:59.846922 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.21:6443/api/v1/nodes\": dial tcp 10.128.0.21:6443: connect: connection refused" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:23:59.966089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2001778171.mount: Deactivated successfully. 
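The three RunPodSandbox calls above correspond to the static pod manifests the kubelet was told to watch earlier ("Adding static pod path" path="/etc/kubernetes/manifests"); the sandboxes use the pause image handled just below. A hedged way to inspect the same objects from the node, assuming the usual kubeadm manifest layout:
    # Illustrative inspection of the static pods and their sandboxes.
    ls /etc/kubernetes/manifests/            # kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml (etcd may also be present)
    crictl pods                              # shows the sandboxes created above once they are ready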
Jan 29 11:23:59.975518 containerd[1493]: time="2025-01-29T11:23:59.975461086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:59.978013 containerd[1493]: time="2025-01-29T11:23:59.977950810Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:59.980444 containerd[1493]: time="2025-01-29T11:23:59.980370108Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 29 11:23:59.981522 containerd[1493]: time="2025-01-29T11:23:59.981463027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:23:59.983795 containerd[1493]: time="2025-01-29T11:23:59.983736268Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:59.985924 containerd[1493]: time="2025-01-29T11:23:59.985779467Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:59.986808 containerd[1493]: time="2025-01-29T11:23:59.986696824Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:23:59.992370 containerd[1493]: time="2025-01-29T11:23:59.990542783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:23:59.993638 containerd[1493]: time="2025-01-29T11:23:59.993589376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 366.297099ms" Jan 29 11:23:59.997535 containerd[1493]: time="2025-01-29T11:23:59.997483049Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 384.01462ms" Jan 29 11:24:00.000183 containerd[1493]: time="2025-01-29T11:24:00.000106539Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 365.137818ms" Jan 29 11:24:00.073883 kubelet[2329]: W0129 11:24:00.073777 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:24:00.073883 
kubelet[2329]: E0129 11:24:00.073862 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:24:00.194565 containerd[1493]: time="2025-01-29T11:24:00.194277519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:00.198611 containerd[1493]: time="2025-01-29T11:24:00.195534963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:00.198611 containerd[1493]: time="2025-01-29T11:24:00.198239435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:00.199148 containerd[1493]: time="2025-01-29T11:24:00.195709816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:00.199148 containerd[1493]: time="2025-01-29T11:24:00.198601530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:00.199148 containerd[1493]: time="2025-01-29T11:24:00.198327912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:00.199148 containerd[1493]: time="2025-01-29T11:24:00.198378776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:00.199148 containerd[1493]: time="2025-01-29T11:24:00.198542014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:00.203311 containerd[1493]: time="2025-01-29T11:24:00.203118737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:00.203311 containerd[1493]: time="2025-01-29T11:24:00.203201294Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:00.203311 containerd[1493]: time="2025-01-29T11:24:00.203221722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:00.203780 containerd[1493]: time="2025-01-29T11:24:00.203416632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:00.246093 kubelet[2329]: W0129 11:24:00.245808 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:24:00.246093 kubelet[2329]: E0129 11:24:00.245906 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:24:00.251163 systemd[1]: Started cri-containerd-7d1927d456415a37f498428adc6b027d916deea66120bab6be9d9c1758952dca.scope - libcontainer container 7d1927d456415a37f498428adc6b027d916deea66120bab6be9d9c1758952dca. Jan 29 11:24:00.265736 systemd[1]: Started cri-containerd-7e56f6d92f0062ec3af744592bf381e7835ec541da1e7137fc9e8b13d8c35567.scope - libcontainer container 7e56f6d92f0062ec3af744592bf381e7835ec541da1e7137fc9e8b13d8c35567. Jan 29 11:24:00.277591 systemd[1]: Started cri-containerd-680b89ca04262aaee1d387c8af17e3b48fa8f5bc9bd03ba1fe2fb29bd3e4793b.scope - libcontainer container 680b89ca04262aaee1d387c8af17e3b48fa8f5bc9bd03ba1fe2fb29bd3e4793b. Jan 29 11:24:00.342003 kubelet[2329]: W0129 11:24:00.341922 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:24:00.342003 kubelet[2329]: E0129 11:24:00.342009 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.21:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:24:00.358271 containerd[1493]: time="2025-01-29T11:24:00.358062874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,Uid:6fe04758a50b923c5f83f63e5f36233a,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d1927d456415a37f498428adc6b027d916deea66120bab6be9d9c1758952dca\"" Jan 29 11:24:00.363007 kubelet[2329]: E0129 11:24:00.362526 2329 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-21291" Jan 29 11:24:00.366647 containerd[1493]: time="2025-01-29T11:24:00.366606250Z" level=info msg="CreateContainer within sandbox \"7d1927d456415a37f498428adc6b027d916deea66120bab6be9d9c1758952dca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:24:00.394108 containerd[1493]: time="2025-01-29T11:24:00.393896031Z" level=info msg="CreateContainer within sandbox \"7d1927d456415a37f498428adc6b027d916deea66120bab6be9d9c1758952dca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1c0236248e93b05f4003c35cdd94c2d48c55ba108fe7c779b084600d43735f2c\"" Jan 29 11:24:00.396414 containerd[1493]: time="2025-01-29T11:24:00.395297647Z" level=info msg="StartContainer for \"1c0236248e93b05f4003c35cdd94c2d48c55ba108fe7c779b084600d43735f2c\"" 
Jan 29 11:24:00.401215 containerd[1493]: time="2025-01-29T11:24:00.401172156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,Uid:c8195ae6b241e202191741cac01cffab,Namespace:kube-system,Attempt:0,} returns sandbox id \"680b89ca04262aaee1d387c8af17e3b48fa8f5bc9bd03ba1fe2fb29bd3e4793b\"" Jan 29 11:24:00.403641 kubelet[2329]: E0129 11:24:00.403591 2329 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-21291" Jan 29 11:24:00.407271 containerd[1493]: time="2025-01-29T11:24:00.407000742Z" level=info msg="CreateContainer within sandbox \"680b89ca04262aaee1d387c8af17e3b48fa8f5bc9bd03ba1fe2fb29bd3e4793b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:24:00.410158 containerd[1493]: time="2025-01-29T11:24:00.410119541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,Uid:5f70c97af6090a2815accaac8368d373,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e56f6d92f0062ec3af744592bf381e7835ec541da1e7137fc9e8b13d8c35567\"" Jan 29 11:24:00.413671 kubelet[2329]: E0129 11:24:00.413635 2329 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flat" Jan 29 11:24:00.418006 containerd[1493]: time="2025-01-29T11:24:00.417968010Z" level=info msg="CreateContainer within sandbox \"7e56f6d92f0062ec3af744592bf381e7835ec541da1e7137fc9e8b13d8c35567\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:24:00.430472 containerd[1493]: time="2025-01-29T11:24:00.430408034Z" level=info msg="CreateContainer within sandbox \"680b89ca04262aaee1d387c8af17e3b48fa8f5bc9bd03ba1fe2fb29bd3e4793b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3aa42245c2954846c9b224d36376828a5739218542bc8d54dc3ec0c2c5e52c98\"" Jan 29 11:24:00.433571 containerd[1493]: time="2025-01-29T11:24:00.433517201Z" level=info msg="StartContainer for \"3aa42245c2954846c9b224d36376828a5739218542bc8d54dc3ec0c2c5e52c98\"" Jan 29 11:24:00.440737 containerd[1493]: time="2025-01-29T11:24:00.440688687Z" level=info msg="CreateContainer within sandbox \"7e56f6d92f0062ec3af744592bf381e7835ec541da1e7137fc9e8b13d8c35567\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b017525d19928b0c9575e068205843cb7439b375bc8c9289842b5242ee85f61\"" Jan 29 11:24:00.443007 containerd[1493]: time="2025-01-29T11:24:00.442965559Z" level=info msg="StartContainer for \"5b017525d19928b0c9575e068205843cb7439b375bc8c9289842b5242ee85f61\"" Jan 29 11:24:00.466621 systemd[1]: Started cri-containerd-1c0236248e93b05f4003c35cdd94c2d48c55ba108fe7c779b084600d43735f2c.scope - libcontainer container 1c0236248e93b05f4003c35cdd94c2d48c55ba108fe7c779b084600d43735f2c. Jan 29 11:24:00.502603 systemd[1]: Started cri-containerd-5b017525d19928b0c9575e068205843cb7439b375bc8c9289842b5242ee85f61.scope - libcontainer container 5b017525d19928b0c9575e068205843cb7439b375bc8c9289842b5242ee85f61. 
Jan 29 11:24:00.522893 systemd[1]: Started cri-containerd-3aa42245c2954846c9b224d36376828a5739218542bc8d54dc3ec0c2c5e52c98.scope - libcontainer container 3aa42245c2954846c9b224d36376828a5739218542bc8d54dc3ec0c2c5e52c98. Jan 29 11:24:00.531183 kubelet[2329]: E0129 11:24:00.531120 2329 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.21:6443: connect: connection refused" interval="1.6s" Jan 29 11:24:00.625439 containerd[1493]: time="2025-01-29T11:24:00.622439060Z" level=info msg="StartContainer for \"1c0236248e93b05f4003c35cdd94c2d48c55ba108fe7c779b084600d43735f2c\" returns successfully" Jan 29 11:24:00.625439 containerd[1493]: time="2025-01-29T11:24:00.622532258Z" level=info msg="StartContainer for \"5b017525d19928b0c9575e068205843cb7439b375bc8c9289842b5242ee85f61\" returns successfully" Jan 29 11:24:00.642694 containerd[1493]: time="2025-01-29T11:24:00.642643582Z" level=info msg="StartContainer for \"3aa42245c2954846c9b224d36376828a5739218542bc8d54dc3ec0c2c5e52c98\" returns successfully" Jan 29 11:24:00.654963 kubelet[2329]: I0129 11:24:00.654921 2329 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:00.655873 kubelet[2329]: E0129 11:24:00.655829 2329 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.21:6443/api/v1/nodes\": dial tcp 10.128.0.21:6443: connect: connection refused" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:00.695982 kubelet[2329]: W0129 11:24:00.695878 2329 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:24:00.695982 kubelet[2329]: E0129 11:24:00.695944 2329 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.21:6443: connect: connection refused Jan 29 11:24:02.261916 kubelet[2329]: I0129 11:24:02.261854 2329 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:02.413231 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
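Every call to the API server so far (informer lists, lease creation, node registration) fails with "connection refused" on 10.128.0.21:6443 because the kube-apiserver container started above is not yet serving; registration succeeds a few entries later once it is. A hedged sketch of how that transition could be watched from the node; both checks are illustrative and not taken from the log:
    # Illustrative checks; these fail with "connection refused" until the apiserver answers on 6443.
    curl -sk https://10.128.0.21:6443/healthz; echo    # "ok" once the static kube-apiserver pod is serving
    crictl ps --name kube-apiserver                    # confirms the container started above is running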
Jan 29 11:24:04.002424 kubelet[2329]: E0129 11:24:04.002366 2329 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" not found" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:04.100987 kubelet[2329]: I0129 11:24:04.100600 2329 apiserver.go:52] "Watching apiserver" Jan 29 11:24:04.126392 kubelet[2329]: I0129 11:24:04.126331 2329 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:24:04.208779 kubelet[2329]: E0129 11:24:04.208452 2329 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal.181f2611ce688425 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,UID:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,},FirstTimestamp:2025-01-29 11:23:59.107556389 +0000 UTC m=+0.559462406,LastTimestamp:2025-01-29 11:23:59.107556389 +0000 UTC m=+0.559462406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,}" Jan 29 11:24:04.281077 kubelet[2329]: I0129 11:24:04.279058 2329 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:04.282414 kubelet[2329]: E0129 11:24:04.280982 2329 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal.181f2611cf7b8ab5 default 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,UID:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,},FirstTimestamp:2025-01-29 11:23:59.125580469 +0000 UTC m=+0.577486489,LastTimestamp:2025-01-29 11:23:59.125580469 +0000 UTC m=+0.577486489,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal,}" Jan 29 11:24:05.613479 kubelet[2329]: W0129 11:24:05.613427 2329 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 11:24:06.494953 systemd[1]: Reloading requested from client PID 2609 ('systemctl') (unit session-7.scope)... Jan 29 11:24:06.494989 systemd[1]: Reloading... Jan 29 11:24:06.642472 zram_generator::config[2649]: No configuration found. 
Jan 29 11:24:06.786806 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:24:06.938232 systemd[1]: Reloading finished in 442 ms. Jan 29 11:24:06.994084 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:24:07.010480 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:24:07.010800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:24:07.010877 systemd[1]: kubelet.service: Consumed 1.093s CPU time, 115.8M memory peak, 0B memory swap peak. Jan 29 11:24:07.021009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:24:07.260099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:24:07.277113 (kubelet)[2697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:24:07.360402 kubelet[2697]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:24:07.360402 kubelet[2697]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:24:07.360402 kubelet[2697]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:24:07.361215 kubelet[2697]: I0129 11:24:07.360497 2697 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:24:07.366358 kubelet[2697]: I0129 11:24:07.366299 2697 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:24:07.366358 kubelet[2697]: I0129 11:24:07.366329 2697 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:24:07.366713 kubelet[2697]: I0129 11:24:07.366675 2697 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:24:07.368472 kubelet[2697]: I0129 11:24:07.368444 2697 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:24:07.370754 kubelet[2697]: I0129 11:24:07.370589 2697 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:24:07.385402 kubelet[2697]: I0129 11:24:07.384040 2697 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:24:07.385402 kubelet[2697]: I0129 11:24:07.384495 2697 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:24:07.385402 kubelet[2697]: I0129 11:24:07.384550 2697 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:24:07.385402 kubelet[2697]: I0129 11:24:07.384840 2697 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:24:07.385888 kubelet[2697]: I0129 11:24:07.384860 2697 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:24:07.385888 kubelet[2697]: I0129 11:24:07.384934 2697 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:24:07.385888 kubelet[2697]: I0129 11:24:07.385090 2697 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:24:07.385888 kubelet[2697]: I0129 11:24:07.385110 2697 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:24:07.385888 kubelet[2697]: I0129 11:24:07.385149 2697 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:24:07.385888 kubelet[2697]: I0129 11:24:07.385183 2697 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:24:07.389398 kubelet[2697]: I0129 11:24:07.389362 2697 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:24:07.389778 kubelet[2697]: I0129 11:24:07.389749 2697 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:24:07.390755 kubelet[2697]: I0129 11:24:07.390717 2697 server.go:1264] "Started kubelet" Jan 29 11:24:07.398393 kubelet[2697]: I0129 11:24:07.398364 2697 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:24:07.408771 kubelet[2697]: I0129 11:24:07.407396 2697 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:24:07.409614 kubelet[2697]: I0129 
11:24:07.409421 2697 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:24:07.412031 kubelet[2697]: I0129 11:24:07.411506 2697 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:24:07.412031 kubelet[2697]: I0129 11:24:07.411775 2697 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:24:07.414521 kubelet[2697]: I0129 11:24:07.414267 2697 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:24:07.414915 kubelet[2697]: I0129 11:24:07.414877 2697 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:24:07.416370 kubelet[2697]: I0129 11:24:07.415081 2697 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:24:07.426209 kubelet[2697]: I0129 11:24:07.426141 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:24:07.429420 kubelet[2697]: I0129 11:24:07.428293 2697 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:24:07.429420 kubelet[2697]: I0129 11:24:07.428367 2697 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:24:07.429420 kubelet[2697]: I0129 11:24:07.428403 2697 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:24:07.429420 kubelet[2697]: E0129 11:24:07.428468 2697 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:24:07.431398 kubelet[2697]: I0129 11:24:07.430235 2697 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:24:07.431398 kubelet[2697]: I0129 11:24:07.430397 2697 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:24:07.451826 kubelet[2697]: I0129 11:24:07.449510 2697 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:24:07.509628 sudo[2727]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 11:24:07.510844 sudo[2727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 11:24:07.533949 kubelet[2697]: I0129 11:24:07.527092 2697 kubelet_node_status.go:73] "Attempting to register node" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.533949 kubelet[2697]: E0129 11:24:07.528563 2697 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 11:24:07.545640 kubelet[2697]: I0129 11:24:07.545593 2697 kubelet_node_status.go:112] "Node was previously registered" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.545972 kubelet[2697]: I0129 11:24:07.545926 2697 kubelet_node_status.go:76] "Successfully registered node" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.564185 kubelet[2697]: I0129 11:24:07.562984 2697 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:24:07.564185 kubelet[2697]: I0129 11:24:07.563052 2697 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:24:07.564185 kubelet[2697]: I0129 11:24:07.563080 2697 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:24:07.564185 kubelet[2697]: I0129 11:24:07.563577 2697 state_mem.go:88] "Updated 
default CPUSet" cpuSet="" Jan 29 11:24:07.564185 kubelet[2697]: I0129 11:24:07.563614 2697 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:24:07.564185 kubelet[2697]: I0129 11:24:07.563644 2697 policy_none.go:49] "None policy: Start" Jan 29 11:24:07.565186 kubelet[2697]: I0129 11:24:07.565145 2697 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:24:07.565186 kubelet[2697]: I0129 11:24:07.565179 2697 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:24:07.565581 kubelet[2697]: I0129 11:24:07.565505 2697 state_mem.go:75] "Updated machine memory state" Jan 29 11:24:07.576313 kubelet[2697]: I0129 11:24:07.576278 2697 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:24:07.577528 kubelet[2697]: I0129 11:24:07.577081 2697 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:24:07.578036 kubelet[2697]: I0129 11:24:07.577674 2697 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:24:07.730664 kubelet[2697]: I0129 11:24:07.729153 2697 topology_manager.go:215] "Topology Admit Handler" podUID="c8195ae6b241e202191741cac01cffab" podNamespace="kube-system" podName="kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.730664 kubelet[2697]: I0129 11:24:07.729285 2697 topology_manager.go:215] "Topology Admit Handler" podUID="5f70c97af6090a2815accaac8368d373" podNamespace="kube-system" podName="kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.730664 kubelet[2697]: I0129 11:24:07.729426 2697 topology_manager.go:215] "Topology Admit Handler" podUID="6fe04758a50b923c5f83f63e5f36233a" podNamespace="kube-system" podName="kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.739094 kubelet[2697]: W0129 11:24:07.739053 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 11:24:07.745429 kubelet[2697]: W0129 11:24:07.745027 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 11:24:07.746978 kubelet[2697]: W0129 11:24:07.746606 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 11:24:07.746978 kubelet[2697]: E0129 11:24:07.746836 2697 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.816263 kubelet[2697]: I0129 11:24:07.816028 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c8195ae6b241e202191741cac01cffab-ca-certs\") pod \"kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"c8195ae6b241e202191741cac01cffab\") " pod="kube-system/kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.816263 kubelet[2697]: I0129 
11:24:07.816088 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c8195ae6b241e202191741cac01cffab-k8s-certs\") pod \"kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"c8195ae6b241e202191741cac01cffab\") " pod="kube-system/kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.817264 kubelet[2697]: I0129 11:24:07.816602 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c8195ae6b241e202191741cac01cffab-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"c8195ae6b241e202191741cac01cffab\") " pod="kube-system/kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.917302 kubelet[2697]: I0129 11:24:07.917239 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-kubeconfig\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.917519 kubelet[2697]: I0129 11:24:07.917322 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6fe04758a50b923c5f83f63e5f36233a-kubeconfig\") pod \"kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"6fe04758a50b923c5f83f63e5f36233a\") " pod="kube-system/kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.917519 kubelet[2697]: I0129 11:24:07.917385 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-ca-certs\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.917519 kubelet[2697]: I0129 11:24:07.917415 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-flexvolume-dir\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.917519 kubelet[2697]: I0129 11:24:07.917445 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-k8s-certs\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:07.917757 kubelet[2697]: I0129 11:24:07.917491 2697 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f70c97af6090a2815accaac8368d373-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" (UID: \"5f70c97af6090a2815accaac8368d373\") " pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:08.286245 sudo[2727]: pam_unix(sudo:session): session closed for user root Jan 29 11:24:08.389994 kubelet[2697]: I0129 11:24:08.389638 2697 apiserver.go:52] "Watching apiserver" Jan 29 11:24:08.415773 kubelet[2697]: I0129 11:24:08.415714 2697 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:24:08.514387 kubelet[2697]: W0129 11:24:08.513587 2697 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 11:24:08.514387 kubelet[2697]: E0129 11:24:08.513702 2697 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" Jan 29 11:24:08.553462 kubelet[2697]: I0129 11:24:08.551864 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" podStartSLOduration=1.551813831 podStartE2EDuration="1.551813831s" podCreationTimestamp="2025-01-29 11:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:24:08.540165435 +0000 UTC m=+1.254732966" watchObservedRunningTime="2025-01-29 11:24:08.551813831 +0000 UTC m=+1.266381353" Jan 29 11:24:08.564011 kubelet[2697]: I0129 11:24:08.563937 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" podStartSLOduration=3.563911537 podStartE2EDuration="3.563911537s" podCreationTimestamp="2025-01-29 11:24:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:24:08.552694928 +0000 UTC m=+1.267262459" watchObservedRunningTime="2025-01-29 11:24:08.563911537 +0000 UTC m=+1.278479069" Jan 29 11:24:08.576737 kubelet[2697]: I0129 11:24:08.576663 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" podStartSLOduration=1.576638733 podStartE2EDuration="1.576638733s" podCreationTimestamp="2025-01-29 11:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:24:08.564661382 +0000 UTC m=+1.279228914" watchObservedRunningTime="2025-01-29 11:24:08.576638733 +0000 UTC m=+1.291206254" Jan 29 11:24:10.388181 sudo[1740]: pam_unix(sudo:session): session closed for user root Jan 29 11:24:10.431044 sshd[1739]: Connection closed by 139.178.68.195 port 58468 Jan 29 11:24:10.431938 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jan 29 11:24:10.439039 systemd[1]: sshd@6-10.128.0.21:22-139.178.68.195:58468.service: 
Deactivated successfully. Jan 29 11:24:10.441979 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:24:10.442225 systemd[1]: session-7.scope: Consumed 8.462s CPU time, 189.3M memory peak, 0B memory swap peak. Jan 29 11:24:10.443178 systemd-logind[1477]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:24:10.444954 systemd-logind[1477]: Removed session 7. Jan 29 11:24:16.622523 update_engine[1482]: I20250129 11:24:16.622424 1482 update_attempter.cc:509] Updating boot flags... Jan 29 11:24:16.698394 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2778) Jan 29 11:24:16.827618 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2777) Jan 29 11:24:19.705776 kubelet[2697]: I0129 11:24:19.705731 2697 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:24:19.706693 containerd[1493]: time="2025-01-29T11:24:19.706650603Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:24:19.707327 kubelet[2697]: I0129 11:24:19.706970 2697 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:24:20.697622 kubelet[2697]: I0129 11:24:20.697570 2697 topology_manager.go:215] "Topology Admit Handler" podUID="a0991852-2aa7-4f69-9178-a606bd90ad2e" podNamespace="kube-system" podName="kube-proxy-zxqpj" Jan 29 11:24:20.712929 systemd[1]: Created slice kubepods-besteffort-poda0991852_2aa7_4f69_9178_a606bd90ad2e.slice - libcontainer container kubepods-besteffort-poda0991852_2aa7_4f69_9178_a606bd90ad2e.slice. Jan 29 11:24:20.747031 kubelet[2697]: I0129 11:24:20.745470 2697 topology_manager.go:215] "Topology Admit Handler" podUID="870488fe-68a9-4008-b25c-9a91d6df03ab" podNamespace="kube-system" podName="cilium-nf7b8" Jan 29 11:24:20.759717 systemd[1]: Created slice kubepods-burstable-pod870488fe_68a9_4008_b25c_9a91d6df03ab.slice - libcontainer container kubepods-burstable-pod870488fe_68a9_4008_b25c_9a91d6df03ab.slice. 
Jan 29 11:24:20.793833 kubelet[2697]: I0129 11:24:20.793585 2697 topology_manager.go:215] "Topology Admit Handler" podUID="2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e" podNamespace="kube-system" podName="cilium-operator-599987898-l9tll" Jan 29 11:24:20.794769 kubelet[2697]: I0129 11:24:20.794486 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0991852-2aa7-4f69-9178-a606bd90ad2e-xtables-lock\") pod \"kube-proxy-zxqpj\" (UID: \"a0991852-2aa7-4f69-9178-a606bd90ad2e\") " pod="kube-system/kube-proxy-zxqpj" Jan 29 11:24:20.795504 kubelet[2697]: I0129 11:24:20.795468 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-bpf-maps\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.795748 kubelet[2697]: I0129 11:24:20.795704 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0991852-2aa7-4f69-9178-a606bd90ad2e-lib-modules\") pod \"kube-proxy-zxqpj\" (UID: \"a0991852-2aa7-4f69-9178-a606bd90ad2e\") " pod="kube-system/kube-proxy-zxqpj" Jan 29 11:24:20.796192 kubelet[2697]: I0129 11:24:20.796041 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-run\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796192 kubelet[2697]: I0129 11:24:20.796094 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4bpg\" (UniqueName: \"kubernetes.io/projected/870488fe-68a9-4008-b25c-9a91d6df03ab-kube-api-access-c4bpg\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796192 kubelet[2697]: I0129 11:24:20.796137 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-xtables-lock\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796192 kubelet[2697]: I0129 11:24:20.796165 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-config-path\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796686 kubelet[2697]: I0129 11:24:20.796233 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-host-proc-sys-net\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796686 kubelet[2697]: I0129 11:24:20.796262 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a0991852-2aa7-4f69-9178-a606bd90ad2e-kube-proxy\") pod \"kube-proxy-zxqpj\" (UID: 
\"a0991852-2aa7-4f69-9178-a606bd90ad2e\") " pod="kube-system/kube-proxy-zxqpj" Jan 29 11:24:20.796686 kubelet[2697]: I0129 11:24:20.796309 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-cgroup\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796686 kubelet[2697]: I0129 11:24:20.796357 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cni-path\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796686 kubelet[2697]: I0129 11:24:20.796436 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/870488fe-68a9-4008-b25c-9a91d6df03ab-clustermesh-secrets\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796686 kubelet[2697]: I0129 11:24:20.796472 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-lib-modules\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796995 kubelet[2697]: I0129 11:24:20.796510 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxd8k\" (UniqueName: \"kubernetes.io/projected/a0991852-2aa7-4f69-9178-a606bd90ad2e-kube-api-access-pxd8k\") pod \"kube-proxy-zxqpj\" (UID: \"a0991852-2aa7-4f69-9178-a606bd90ad2e\") " pod="kube-system/kube-proxy-zxqpj" Jan 29 11:24:20.796995 kubelet[2697]: I0129 11:24:20.796543 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-etc-cni-netd\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796995 kubelet[2697]: I0129 11:24:20.796602 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-hostproc\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796995 kubelet[2697]: I0129 11:24:20.796643 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-host-proc-sys-kernel\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.796995 kubelet[2697]: I0129 11:24:20.796680 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/870488fe-68a9-4008-b25c-9a91d6df03ab-hubble-tls\") pod \"cilium-nf7b8\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " pod="kube-system/cilium-nf7b8" Jan 29 11:24:20.813994 systemd[1]: Created slice 
kubepods-besteffort-pod2e51c5f5_1fac_4fcd_ba19_63c430c4ee7e.slice - libcontainer container kubepods-besteffort-pod2e51c5f5_1fac_4fcd_ba19_63c430c4ee7e.slice. Jan 29 11:24:20.898438 kubelet[2697]: I0129 11:24:20.897396 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s842z\" (UniqueName: \"kubernetes.io/projected/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e-kube-api-access-s842z\") pod \"cilium-operator-599987898-l9tll\" (UID: \"2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e\") " pod="kube-system/cilium-operator-599987898-l9tll" Jan 29 11:24:20.898438 kubelet[2697]: I0129 11:24:20.897678 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e-cilium-config-path\") pod \"cilium-operator-599987898-l9tll\" (UID: \"2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e\") " pod="kube-system/cilium-operator-599987898-l9tll" Jan 29 11:24:21.024682 containerd[1493]: time="2025-01-29T11:24:21.024526351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxqpj,Uid:a0991852-2aa7-4f69-9178-a606bd90ad2e,Namespace:kube-system,Attempt:0,}" Jan 29 11:24:21.066822 containerd[1493]: time="2025-01-29T11:24:21.066105311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nf7b8,Uid:870488fe-68a9-4008-b25c-9a91d6df03ab,Namespace:kube-system,Attempt:0,}" Jan 29 11:24:21.070846 containerd[1493]: time="2025-01-29T11:24:21.070245356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:21.070846 containerd[1493]: time="2025-01-29T11:24:21.070488788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:21.070846 containerd[1493]: time="2025-01-29T11:24:21.070533692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:21.070846 containerd[1493]: time="2025-01-29T11:24:21.070757686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:21.108640 systemd[1]: Started cri-containerd-22cedadde8ea4ac92701fa58763e2ecba1e010e91133fdac14e979269c71d22e.scope - libcontainer container 22cedadde8ea4ac92701fa58763e2ecba1e010e91133fdac14e979269c71d22e. Jan 29 11:24:21.120090 containerd[1493]: time="2025-01-29T11:24:21.120036271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l9tll,Uid:2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e,Namespace:kube-system,Attempt:0,}" Jan 29 11:24:21.129218 containerd[1493]: time="2025-01-29T11:24:21.127969472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:21.129218 containerd[1493]: time="2025-01-29T11:24:21.128086519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:21.129218 containerd[1493]: time="2025-01-29T11:24:21.128115552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:21.129218 containerd[1493]: time="2025-01-29T11:24:21.128274003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:21.172692 containerd[1493]: time="2025-01-29T11:24:21.172211556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zxqpj,Uid:a0991852-2aa7-4f69-9178-a606bd90ad2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"22cedadde8ea4ac92701fa58763e2ecba1e010e91133fdac14e979269c71d22e\"" Jan 29 11:24:21.180028 containerd[1493]: time="2025-01-29T11:24:21.179735323Z" level=info msg="CreateContainer within sandbox \"22cedadde8ea4ac92701fa58763e2ecba1e010e91133fdac14e979269c71d22e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:24:21.196796 systemd[1]: Started cri-containerd-3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b.scope - libcontainer container 3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b. Jan 29 11:24:21.201781 containerd[1493]: time="2025-01-29T11:24:21.201281529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:21.201781 containerd[1493]: time="2025-01-29T11:24:21.201399078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:21.201781 containerd[1493]: time="2025-01-29T11:24:21.201442972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:21.201781 containerd[1493]: time="2025-01-29T11:24:21.201592493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:21.222358 containerd[1493]: time="2025-01-29T11:24:21.222274681Z" level=info msg="CreateContainer within sandbox \"22cedadde8ea4ac92701fa58763e2ecba1e010e91133fdac14e979269c71d22e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f6f1584f1d9ac7a95d5c9ccacd4074cab8aa104148e97579d51df29d4d2cc40\"" Jan 29 11:24:21.223673 containerd[1493]: time="2025-01-29T11:24:21.223638012Z" level=info msg="StartContainer for \"8f6f1584f1d9ac7a95d5c9ccacd4074cab8aa104148e97579d51df29d4d2cc40\"" Jan 29 11:24:21.242767 systemd[1]: Started cri-containerd-b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27.scope - libcontainer container b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27. Jan 29 11:24:21.288599 containerd[1493]: time="2025-01-29T11:24:21.288463597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nf7b8,Uid:870488fe-68a9-4008-b25c-9a91d6df03ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\"" Jan 29 11:24:21.294270 containerd[1493]: time="2025-01-29T11:24:21.294221330Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:24:21.316133 systemd[1]: Started cri-containerd-8f6f1584f1d9ac7a95d5c9ccacd4074cab8aa104148e97579d51df29d4d2cc40.scope - libcontainer container 8f6f1584f1d9ac7a95d5c9ccacd4074cab8aa104148e97579d51df29d4d2cc40. 
Jan 29 11:24:21.364173 containerd[1493]: time="2025-01-29T11:24:21.364117910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-l9tll,Uid:2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\"" Jan 29 11:24:21.392616 containerd[1493]: time="2025-01-29T11:24:21.392469276Z" level=info msg="StartContainer for \"8f6f1584f1d9ac7a95d5c9ccacd4074cab8aa104148e97579d51df29d4d2cc40\" returns successfully" Jan 29 11:24:27.444195 kubelet[2697]: I0129 11:24:27.444116 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zxqpj" podStartSLOduration=7.444081075 podStartE2EDuration="7.444081075s" podCreationTimestamp="2025-01-29 11:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:24:21.543058644 +0000 UTC m=+14.257626177" watchObservedRunningTime="2025-01-29 11:24:27.444081075 +0000 UTC m=+20.158648606" Jan 29 11:24:32.621262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1990556953.mount: Deactivated successfully. Jan 29 11:24:35.597019 containerd[1493]: time="2025-01-29T11:24:35.596946123Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:35.598427 containerd[1493]: time="2025-01-29T11:24:35.598313346Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 11:24:35.599957 containerd[1493]: time="2025-01-29T11:24:35.599886375Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:35.602388 containerd[1493]: time="2025-01-29T11:24:35.602192079Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 14.307911072s" Jan 29 11:24:35.602388 containerd[1493]: time="2025-01-29T11:24:35.602240678Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 11:24:35.606247 containerd[1493]: time="2025-01-29T11:24:35.604545167Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:24:35.606756 containerd[1493]: time="2025-01-29T11:24:35.606506397Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:24:35.629205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount204086006.mount: Deactivated successfully. 
Jan 29 11:24:35.641376 containerd[1493]: time="2025-01-29T11:24:35.641260842Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\"" Jan 29 11:24:35.643423 containerd[1493]: time="2025-01-29T11:24:35.642320192Z" level=info msg="StartContainer for \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\"" Jan 29 11:24:35.690573 systemd[1]: run-containerd-runc-k8s.io-edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1-runc.dP12bL.mount: Deactivated successfully. Jan 29 11:24:35.700564 systemd[1]: Started cri-containerd-edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1.scope - libcontainer container edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1. Jan 29 11:24:35.741771 containerd[1493]: time="2025-01-29T11:24:35.741701880Z" level=info msg="StartContainer for \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\" returns successfully" Jan 29 11:24:35.759931 systemd[1]: cri-containerd-edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1.scope: Deactivated successfully. Jan 29 11:24:36.622970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1-rootfs.mount: Deactivated successfully. Jan 29 11:24:37.599050 containerd[1493]: time="2025-01-29T11:24:37.598968148Z" level=info msg="shim disconnected" id=edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1 namespace=k8s.io Jan 29 11:24:37.599050 containerd[1493]: time="2025-01-29T11:24:37.599046273Z" level=warning msg="cleaning up after shim disconnected" id=edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1 namespace=k8s.io Jan 29 11:24:37.599050 containerd[1493]: time="2025-01-29T11:24:37.599060252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:38.443802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount588438063.mount: Deactivated successfully. Jan 29 11:24:38.603254 containerd[1493]: time="2025-01-29T11:24:38.602938826Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:24:38.638530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527485979.mount: Deactivated successfully. Jan 29 11:24:38.653254 containerd[1493]: time="2025-01-29T11:24:38.653204522Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\"" Jan 29 11:24:38.656162 containerd[1493]: time="2025-01-29T11:24:38.656092848Z" level=info msg="StartContainer for \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\"" Jan 29 11:24:38.712101 systemd[1]: Started cri-containerd-45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512.scope - libcontainer container 45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512. 
Jan 29 11:24:38.780242 containerd[1493]: time="2025-01-29T11:24:38.780089001Z" level=info msg="StartContainer for \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\" returns successfully" Jan 29 11:24:38.801168 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:24:38.802165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:24:38.802285 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:24:38.812938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:24:38.813297 systemd[1]: cri-containerd-45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512.scope: Deactivated successfully. Jan 29 11:24:38.864800 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:24:38.897375 containerd[1493]: time="2025-01-29T11:24:38.896756129Z" level=info msg="shim disconnected" id=45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512 namespace=k8s.io Jan 29 11:24:38.897375 containerd[1493]: time="2025-01-29T11:24:38.896827622Z" level=warning msg="cleaning up after shim disconnected" id=45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512 namespace=k8s.io Jan 29 11:24:38.897375 containerd[1493]: time="2025-01-29T11:24:38.896841702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:39.596396 containerd[1493]: time="2025-01-29T11:24:39.596309602Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:24:39.640801 containerd[1493]: time="2025-01-29T11:24:39.640745526Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\"" Jan 29 11:24:39.644607 containerd[1493]: time="2025-01-29T11:24:39.644364177Z" level=info msg="StartContainer for \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\"" Jan 29 11:24:39.664883 containerd[1493]: time="2025-01-29T11:24:39.664716823Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:39.668100 containerd[1493]: time="2025-01-29T11:24:39.666898461Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 11:24:39.674370 containerd[1493]: time="2025-01-29T11:24:39.674279328Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:24:39.678980 containerd[1493]: time="2025-01-29T11:24:39.678281306Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.071813736s" Jan 29 11:24:39.678980 containerd[1493]: time="2025-01-29T11:24:39.678360018Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 11:24:39.682510 containerd[1493]: time="2025-01-29T11:24:39.682474709Z" level=info msg="CreateContainer within sandbox \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:24:39.718844 systemd[1]: Started cri-containerd-1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7.scope - libcontainer container 1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7. Jan 29 11:24:39.720664 containerd[1493]: time="2025-01-29T11:24:39.720074125Z" level=info msg="CreateContainer within sandbox \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\"" Jan 29 11:24:39.726113 containerd[1493]: time="2025-01-29T11:24:39.723532146Z" level=info msg="StartContainer for \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\"" Jan 29 11:24:39.782613 systemd[1]: Started cri-containerd-fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8.scope - libcontainer container fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8. Jan 29 11:24:39.798956 containerd[1493]: time="2025-01-29T11:24:39.798900233Z" level=info msg="StartContainer for \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\" returns successfully" Jan 29 11:24:39.803953 systemd[1]: cri-containerd-1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7.scope: Deactivated successfully. Jan 29 11:24:40.010834 containerd[1493]: time="2025-01-29T11:24:40.010082547Z" level=info msg="StartContainer for \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\" returns successfully" Jan 29 11:24:40.018029 containerd[1493]: time="2025-01-29T11:24:40.017423316Z" level=info msg="shim disconnected" id=1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7 namespace=k8s.io Jan 29 11:24:40.018029 containerd[1493]: time="2025-01-29T11:24:40.017578265Z" level=warning msg="cleaning up after shim disconnected" id=1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7 namespace=k8s.io Jan 29 11:24:40.018029 containerd[1493]: time="2025-01-29T11:24:40.017593321Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:40.043238 containerd[1493]: time="2025-01-29T11:24:40.043170505Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:24:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:24:40.435210 systemd[1]: run-containerd-runc-k8s.io-1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7-runc.AshVT3.mount: Deactivated successfully. Jan 29 11:24:40.435374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7-rootfs.mount: Deactivated successfully. 
Jan 29 11:24:40.607682 containerd[1493]: time="2025-01-29T11:24:40.607229463Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:24:40.636998 containerd[1493]: time="2025-01-29T11:24:40.636946049Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\"" Jan 29 11:24:40.638628 containerd[1493]: time="2025-01-29T11:24:40.638586579Z" level=info msg="StartContainer for \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\"" Jan 29 11:24:40.639971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501624304.mount: Deactivated successfully. Jan 29 11:24:40.751016 systemd[1]: Started cri-containerd-a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763.scope - libcontainer container a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763. Jan 29 11:24:40.828518 containerd[1493]: time="2025-01-29T11:24:40.828467229Z" level=info msg="StartContainer for \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\" returns successfully" Jan 29 11:24:40.837019 systemd[1]: cri-containerd-a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763.scope: Deactivated successfully. Jan 29 11:24:40.893508 containerd[1493]: time="2025-01-29T11:24:40.893279918Z" level=info msg="shim disconnected" id=a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763 namespace=k8s.io Jan 29 11:24:40.893508 containerd[1493]: time="2025-01-29T11:24:40.893371443Z" level=warning msg="cleaning up after shim disconnected" id=a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763 namespace=k8s.io Jan 29 11:24:40.893508 containerd[1493]: time="2025-01-29T11:24:40.893386393Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:24:40.927365 containerd[1493]: time="2025-01-29T11:24:40.926507104Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:24:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:24:41.430478 systemd[1]: run-containerd-runc-k8s.io-a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763-runc.8bNBGX.mount: Deactivated successfully. Jan 29 11:24:41.430642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763-rootfs.mount: Deactivated successfully. 
Jan 29 11:24:41.616563 containerd[1493]: time="2025-01-29T11:24:41.616103324Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:24:41.639402 kubelet[2697]: I0129 11:24:41.636694 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-l9tll" podStartSLOduration=3.325887586 podStartE2EDuration="21.636658031s" podCreationTimestamp="2025-01-29 11:24:20 +0000 UTC" firstStartedPulling="2025-01-29 11:24:21.36916552 +0000 UTC m=+14.083733042" lastFinishedPulling="2025-01-29 11:24:39.679935965 +0000 UTC m=+32.394503487" observedRunningTime="2025-01-29 11:24:40.88669698 +0000 UTC m=+33.601264508" watchObservedRunningTime="2025-01-29 11:24:41.636658031 +0000 UTC m=+34.351225564" Jan 29 11:24:41.645922 containerd[1493]: time="2025-01-29T11:24:41.645794603Z" level=info msg="CreateContainer within sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\"" Jan 29 11:24:41.648770 containerd[1493]: time="2025-01-29T11:24:41.648625129Z" level=info msg="StartContainer for \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\"" Jan 29 11:24:41.705616 systemd[1]: Started cri-containerd-b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1.scope - libcontainer container b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1. Jan 29 11:24:41.761058 containerd[1493]: time="2025-01-29T11:24:41.760980463Z" level=info msg="StartContainer for \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\" returns successfully" Jan 29 11:24:41.941383 kubelet[2697]: I0129 11:24:41.938072 2697 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:24:41.978853 kubelet[2697]: I0129 11:24:41.978013 2697 topology_manager.go:215] "Topology Admit Handler" podUID="31318784-09b8-4546-b4e1-58d907d16a01" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gqsvm" Jan 29 11:24:41.988597 kubelet[2697]: I0129 11:24:41.988514 2697 topology_manager.go:215] "Topology Admit Handler" podUID="5f91a202-a271-4d3c-98cf-4e8cca422eee" podNamespace="kube-system" podName="coredns-7db6d8ff4d-v8524" Jan 29 11:24:42.006279 systemd[1]: Created slice kubepods-burstable-pod31318784_09b8_4546_b4e1_58d907d16a01.slice - libcontainer container kubepods-burstable-pod31318784_09b8_4546_b4e1_58d907d16a01.slice. Jan 29 11:24:42.026017 systemd[1]: Created slice kubepods-burstable-pod5f91a202_a271_4d3c_98cf_4e8cca422eee.slice - libcontainer container kubepods-burstable-pod5f91a202_a271_4d3c_98cf_4e8cca422eee.slice. 
Jan 29 11:24:42.054629 kubelet[2697]: I0129 11:24:42.054573 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgrb\" (UniqueName: \"kubernetes.io/projected/31318784-09b8-4546-b4e1-58d907d16a01-kube-api-access-7xgrb\") pod \"coredns-7db6d8ff4d-gqsvm\" (UID: \"31318784-09b8-4546-b4e1-58d907d16a01\") " pod="kube-system/coredns-7db6d8ff4d-gqsvm" Jan 29 11:24:42.054941 kubelet[2697]: I0129 11:24:42.054914 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5f91a202-a271-4d3c-98cf-4e8cca422eee-config-volume\") pod \"coredns-7db6d8ff4d-v8524\" (UID: \"5f91a202-a271-4d3c-98cf-4e8cca422eee\") " pod="kube-system/coredns-7db6d8ff4d-v8524" Jan 29 11:24:42.055124 kubelet[2697]: I0129 11:24:42.055101 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31318784-09b8-4546-b4e1-58d907d16a01-config-volume\") pod \"coredns-7db6d8ff4d-gqsvm\" (UID: \"31318784-09b8-4546-b4e1-58d907d16a01\") " pod="kube-system/coredns-7db6d8ff4d-gqsvm" Jan 29 11:24:42.055270 kubelet[2697]: I0129 11:24:42.055250 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcmd7\" (UniqueName: \"kubernetes.io/projected/5f91a202-a271-4d3c-98cf-4e8cca422eee-kube-api-access-jcmd7\") pod \"coredns-7db6d8ff4d-v8524\" (UID: \"5f91a202-a271-4d3c-98cf-4e8cca422eee\") " pod="kube-system/coredns-7db6d8ff4d-v8524" Jan 29 11:24:42.322994 containerd[1493]: time="2025-01-29T11:24:42.322708504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gqsvm,Uid:31318784-09b8-4546-b4e1-58d907d16a01,Namespace:kube-system,Attempt:0,}" Jan 29 11:24:42.331933 containerd[1493]: time="2025-01-29T11:24:42.331850553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v8524,Uid:5f91a202-a271-4d3c-98cf-4e8cca422eee,Namespace:kube-system,Attempt:0,}" Jan 29 11:24:42.659031 kubelet[2697]: I0129 11:24:42.658947 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nf7b8" podStartSLOduration=8.348520401 podStartE2EDuration="22.658921151s" podCreationTimestamp="2025-01-29 11:24:20 +0000 UTC" firstStartedPulling="2025-01-29 11:24:21.293437015 +0000 UTC m=+14.008004535" lastFinishedPulling="2025-01-29 11:24:35.603837761 +0000 UTC m=+28.318405285" observedRunningTime="2025-01-29 11:24:42.657761585 +0000 UTC m=+35.372329117" watchObservedRunningTime="2025-01-29 11:24:42.658921151 +0000 UTC m=+35.373488682" Jan 29 11:24:44.301725 systemd-networkd[1386]: cilium_host: Link UP Jan 29 11:24:44.306521 systemd-networkd[1386]: cilium_net: Link UP Jan 29 11:24:44.307513 systemd-networkd[1386]: cilium_net: Gained carrier Jan 29 11:24:44.307829 systemd-networkd[1386]: cilium_host: Gained carrier Jan 29 11:24:44.458533 systemd-networkd[1386]: cilium_vxlan: Link UP Jan 29 11:24:44.458545 systemd-networkd[1386]: cilium_vxlan: Gained carrier Jan 29 11:24:44.470658 systemd-networkd[1386]: cilium_host: Gained IPv6LL Jan 29 11:24:44.621640 systemd-networkd[1386]: cilium_net: Gained IPv6LL Jan 29 11:24:44.756514 kernel: NET: Registered PF_ALG protocol family Jan 29 11:24:45.627985 systemd-networkd[1386]: lxc_health: Link UP Jan 29 11:24:45.633320 systemd-networkd[1386]: lxc_health: Gained carrier Jan 29 11:24:45.911212 systemd-networkd[1386]: lxc59a54df1cb1e: 
Link UP Jan 29 11:24:45.924564 kernel: eth0: renamed from tmpfd005 Jan 29 11:24:45.931817 systemd-networkd[1386]: lxc59a54df1cb1e: Gained carrier Jan 29 11:24:45.955764 systemd-networkd[1386]: lxce5888fa3e18a: Link UP Jan 29 11:24:45.969389 kernel: eth0: renamed from tmp703c4 Jan 29 11:24:45.984838 systemd-networkd[1386]: lxce5888fa3e18a: Gained carrier Jan 29 11:24:46.485660 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL Jan 29 11:24:47.062098 systemd-networkd[1386]: lxce5888fa3e18a: Gained IPv6LL Jan 29 11:24:47.318222 systemd-networkd[1386]: lxc_health: Gained IPv6LL Jan 29 11:24:47.319587 systemd-networkd[1386]: lxc59a54df1cb1e: Gained IPv6LL Jan 29 11:24:50.309218 ntpd[1461]: Listen normally on 8 cilium_host 192.168.0.22:123 Jan 29 11:24:50.310401 ntpd[1461]: 29 Jan 11:24:50 ntpd[1461]: Listen normally on 8 cilium_host 192.168.0.22:123 Jan 29 11:24:50.310401 ntpd[1461]: 29 Jan 11:24:50 ntpd[1461]: Listen normally on 9 cilium_net [fe80::3479:2aff:fe93:e6e7%4]:123 Jan 29 11:24:50.310401 ntpd[1461]: 29 Jan 11:24:50 ntpd[1461]: Listen normally on 10 cilium_host [fe80::581f:a9ff:fefa:4cf1%5]:123 Jan 29 11:24:50.310401 ntpd[1461]: 29 Jan 11:24:50 ntpd[1461]: Listen normally on 11 cilium_vxlan [fe80::9c63:d6ff:fe0f:c9e8%6]:123 Jan 29 11:24:50.310401 ntpd[1461]: 29 Jan 11:24:50 ntpd[1461]: Listen normally on 12 lxc_health [fe80::e4a0:b2ff:fe09:53fe%8]:123 Jan 29 11:24:50.310401 ntpd[1461]: 29 Jan 11:24:50 ntpd[1461]: Listen normally on 13 lxc59a54df1cb1e [fe80::2c54:e5ff:fea0:bb39%10]:123 Jan 29 11:24:50.310401 ntpd[1461]: 29 Jan 11:24:50 ntpd[1461]: Listen normally on 14 lxce5888fa3e18a [fe80::d4e0:60ff:fe0e:c590%12]:123 Jan 29 11:24:50.309375 ntpd[1461]: Listen normally on 9 cilium_net [fe80::3479:2aff:fe93:e6e7%4]:123 Jan 29 11:24:50.309464 ntpd[1461]: Listen normally on 10 cilium_host [fe80::581f:a9ff:fefa:4cf1%5]:123 Jan 29 11:24:50.309522 ntpd[1461]: Listen normally on 11 cilium_vxlan [fe80::9c63:d6ff:fe0f:c9e8%6]:123 Jan 29 11:24:50.309577 ntpd[1461]: Listen normally on 12 lxc_health [fe80::e4a0:b2ff:fe09:53fe%8]:123 Jan 29 11:24:50.309637 ntpd[1461]: Listen normally on 13 lxc59a54df1cb1e [fe80::2c54:e5ff:fea0:bb39%10]:123 Jan 29 11:24:50.309693 ntpd[1461]: Listen normally on 14 lxce5888fa3e18a [fe80::d4e0:60ff:fe0e:c590%12]:123 Jan 29 11:24:51.389138 containerd[1493]: time="2025-01-29T11:24:51.388909698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:51.390001 containerd[1493]: time="2025-01-29T11:24:51.389481149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:51.390647 containerd[1493]: time="2025-01-29T11:24:51.390155420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:51.390895 containerd[1493]: time="2025-01-29T11:24:51.390544105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:51.452102 systemd[1]: Started cri-containerd-fd0055c4c3be80820859ad8a9a6921a3782dbb5adfb32d0d1d14995b95787247.scope - libcontainer container fd0055c4c3be80820859ad8a9a6921a3782dbb5adfb32d0d1d14995b95787247. Jan 29 11:24:51.463829 containerd[1493]: time="2025-01-29T11:24:51.463122537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:24:51.464377 containerd[1493]: time="2025-01-29T11:24:51.464288503Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:24:51.464795 containerd[1493]: time="2025-01-29T11:24:51.464605098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:51.465864 containerd[1493]: time="2025-01-29T11:24:51.465511851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:24:51.527254 systemd[1]: Started cri-containerd-703c4d052942e28bb1bd5bbc853db955a7e745b074743d56ff822d57e1616250.scope - libcontainer container 703c4d052942e28bb1bd5bbc853db955a7e745b074743d56ff822d57e1616250. Jan 29 11:24:51.581268 containerd[1493]: time="2025-01-29T11:24:51.579631059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gqsvm,Uid:31318784-09b8-4546-b4e1-58d907d16a01,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd0055c4c3be80820859ad8a9a6921a3782dbb5adfb32d0d1d14995b95787247\"" Jan 29 11:24:51.596729 containerd[1493]: time="2025-01-29T11:24:51.596385602Z" level=info msg="CreateContainer within sandbox \"fd0055c4c3be80820859ad8a9a6921a3782dbb5adfb32d0d1d14995b95787247\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:24:51.625244 containerd[1493]: time="2025-01-29T11:24:51.625008024Z" level=info msg="CreateContainer within sandbox \"fd0055c4c3be80820859ad8a9a6921a3782dbb5adfb32d0d1d14995b95787247\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6c5df9fb872885d2a17345aebf3ac39059c0384fe5aefcf5efa8e9b1a5058c9d\"" Jan 29 11:24:51.626066 containerd[1493]: time="2025-01-29T11:24:51.626021014Z" level=info msg="StartContainer for \"6c5df9fb872885d2a17345aebf3ac39059c0384fe5aefcf5efa8e9b1a5058c9d\"" Jan 29 11:24:51.694842 systemd[1]: Started cri-containerd-6c5df9fb872885d2a17345aebf3ac39059c0384fe5aefcf5efa8e9b1a5058c9d.scope - libcontainer container 6c5df9fb872885d2a17345aebf3ac39059c0384fe5aefcf5efa8e9b1a5058c9d. 
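The ntpd entries a short way above show the daemon opening sockets on each newly created Cilium and lxc interface; apart from the cilium_host IPv4 address, every listener is an IPv6 link-local address carrying a %-suffixed zone index (the interface number), for example fe80::e4a0:b2ff:fe09:53fe%8 on lxc_health. A minimal sketch of parsing such scoped addresses, assuming Python 3.9 or newer where the ipaddress module accepts zone IDs, looks like this:

    import ipaddress

    # Addresses exactly as they appear in the ntpd "Listen normally on ..." entries.
    listeners = {
        "cilium_net":   "fe80::3479:2aff:fe93:e6e7%4",
        "cilium_host":  "fe80::581f:a9ff:fefa:4cf1%5",
        "cilium_vxlan": "fe80::9c63:d6ff:fe0f:c9e8%6",
        "lxc_health":   "fe80::e4a0:b2ff:fe09:53fe%8",
    }

    for ifname, addr in listeners.items():
        ip = ipaddress.IPv6Address(addr)   # zone IDs are supported since Python 3.9
        # fe80::/10 is link-local, so the zone index is what ties the socket to a
        # specific interface; ntpd logs it as the %N suffix shown above.
        print(f"{ifname}: link-local={ip.is_link_local} zone={ip.scope_id}")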
Jan 29 11:24:51.709722 containerd[1493]: time="2025-01-29T11:24:51.709647078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-v8524,Uid:5f91a202-a271-4d3c-98cf-4e8cca422eee,Namespace:kube-system,Attempt:0,} returns sandbox id \"703c4d052942e28bb1bd5bbc853db955a7e745b074743d56ff822d57e1616250\"" Jan 29 11:24:51.718663 containerd[1493]: time="2025-01-29T11:24:51.718578499Z" level=info msg="CreateContainer within sandbox \"703c4d052942e28bb1bd5bbc853db955a7e745b074743d56ff822d57e1616250\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:24:51.761110 containerd[1493]: time="2025-01-29T11:24:51.761053815Z" level=info msg="CreateContainer within sandbox \"703c4d052942e28bb1bd5bbc853db955a7e745b074743d56ff822d57e1616250\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa1c382171d8515032b24eda662006d105ad015c536267a6ce584f2008014aec\"" Jan 29 11:24:51.761569 containerd[1493]: time="2025-01-29T11:24:51.761285070Z" level=info msg="StartContainer for \"6c5df9fb872885d2a17345aebf3ac39059c0384fe5aefcf5efa8e9b1a5058c9d\" returns successfully" Jan 29 11:24:51.763632 containerd[1493]: time="2025-01-29T11:24:51.763591687Z" level=info msg="StartContainer for \"fa1c382171d8515032b24eda662006d105ad015c536267a6ce584f2008014aec\"" Jan 29 11:24:51.827122 systemd[1]: Started cri-containerd-fa1c382171d8515032b24eda662006d105ad015c536267a6ce584f2008014aec.scope - libcontainer container fa1c382171d8515032b24eda662006d105ad015c536267a6ce584f2008014aec. Jan 29 11:24:51.889292 containerd[1493]: time="2025-01-29T11:24:51.889234882Z" level=info msg="StartContainer for \"fa1c382171d8515032b24eda662006d105ad015c536267a6ce584f2008014aec\" returns successfully" Jan 29 11:24:52.705760 kubelet[2697]: I0129 11:24:52.705667 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gqsvm" podStartSLOduration=32.705639211 podStartE2EDuration="32.705639211s" podCreationTimestamp="2025-01-29 11:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:24:52.685827396 +0000 UTC m=+45.400394927" watchObservedRunningTime="2025-01-29 11:24:52.705639211 +0000 UTC m=+45.420206743" Jan 29 11:24:53.698537 kubelet[2697]: I0129 11:24:53.698121 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-v8524" podStartSLOduration=33.698091024 podStartE2EDuration="33.698091024s" podCreationTimestamp="2025-01-29 11:24:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:24:52.706997298 +0000 UTC m=+45.421564834" watchObservedRunningTime="2025-01-29 11:24:53.698091024 +0000 UTC m=+46.412658557" Jan 29 11:25:07.236830 systemd[1]: Started sshd@7-10.128.0.21:22-139.178.68.195:60022.service - OpenSSH per-connection server daemon (139.178.68.195:60022). Jan 29 11:25:07.538936 sshd[4082]: Accepted publickey for core from 139.178.68.195 port 60022 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:07.541070 sshd-session[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:07.547995 systemd-logind[1477]: New session 8 of user core. Jan 29 11:25:07.552600 systemd[1]: Started session-8.scope - Session 8 of User core. 
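The pod_startup_latency_tracker entries in this stretch of the log carry two figures per pod: podStartE2EDuration, which here matches watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration, which additionally subtracts the image-pull window between firstStartedPulling and lastFinishedPulling (for the CoreDNS pods the two are equal because no pull is recorded and both pull timestamps are the zero time). A rough reconstruction of that arithmetic from the cilium-operator-599987898-l9tll entry, using the logged timestamps rather than the kubelet's own code, is:

    # Offsets in seconds after podCreationTimestamp (2025-01-29 11:24:20 UTC),
    # copied from the cilium-operator-599987898-l9tll entry logged earlier.
    first_started_pulling  = 1.369165520    # 11:24:21.36916552
    last_finished_pulling  = 19.679935965   # 11:24:39.679935965
    watch_observed_running = 21.636658031   # 11:24:41.636658031

    pod_start_e2e = watch_observed_running                        # 21.636658031 s
    pull_window   = last_finished_pulling - first_started_pulling # ~18.310770445 s
    pod_start_slo = pod_start_e2e - pull_window                   # ~3.325887586 s

    print(f"E2E={pod_start_e2e:.9f}s  SLO={pod_start_slo:.9f}s")

Within clock rounding, the same relation also reproduces the 8.348520401 s SLO figure logged for cilium-nf7b8.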
Jan 29 11:25:07.866478 sshd[4086]: Connection closed by 139.178.68.195 port 60022 Jan 29 11:25:07.867772 sshd-session[4082]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:07.872885 systemd[1]: sshd@7-10.128.0.21:22-139.178.68.195:60022.service: Deactivated successfully. Jan 29 11:25:07.875926 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:25:07.878209 systemd-logind[1477]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:25:07.880479 systemd-logind[1477]: Removed session 8. Jan 29 11:25:12.921779 systemd[1]: Started sshd@8-10.128.0.21:22-139.178.68.195:60034.service - OpenSSH per-connection server daemon (139.178.68.195:60034). Jan 29 11:25:13.222066 sshd[4098]: Accepted publickey for core from 139.178.68.195 port 60034 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:13.223821 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:13.230507 systemd-logind[1477]: New session 9 of user core. Jan 29 11:25:13.235580 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:25:13.518383 sshd[4100]: Connection closed by 139.178.68.195 port 60034 Jan 29 11:25:13.519711 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:13.524838 systemd[1]: sshd@8-10.128.0.21:22-139.178.68.195:60034.service: Deactivated successfully. Jan 29 11:25:13.527748 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:25:13.528934 systemd-logind[1477]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:25:13.531004 systemd-logind[1477]: Removed session 9. Jan 29 11:25:18.573868 systemd[1]: Started sshd@9-10.128.0.21:22-139.178.68.195:60756.service - OpenSSH per-connection server daemon (139.178.68.195:60756). Jan 29 11:25:18.877613 sshd[4112]: Accepted publickey for core from 139.178.68.195 port 60756 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:18.879557 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:18.885419 systemd-logind[1477]: New session 10 of user core. Jan 29 11:25:18.889542 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:25:19.174681 sshd[4114]: Connection closed by 139.178.68.195 port 60756 Jan 29 11:25:19.175874 sshd-session[4112]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:19.180819 systemd[1]: sshd@9-10.128.0.21:22-139.178.68.195:60756.service: Deactivated successfully. Jan 29 11:25:19.184089 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:25:19.187255 systemd-logind[1477]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:25:19.189307 systemd-logind[1477]: Removed session 10. Jan 29 11:25:24.235496 systemd[1]: Started sshd@10-10.128.0.21:22-139.178.68.195:60764.service - OpenSSH per-connection server daemon (139.178.68.195:60764). Jan 29 11:25:24.537236 sshd[4130]: Accepted publickey for core from 139.178.68.195 port 60764 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:24.538882 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:24.545234 systemd-logind[1477]: New session 11 of user core. Jan 29 11:25:24.552688 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 29 11:25:24.823444 sshd[4132]: Connection closed by 139.178.68.195 port 60764 Jan 29 11:25:24.824715 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:24.829238 systemd[1]: sshd@10-10.128.0.21:22-139.178.68.195:60764.service: Deactivated successfully. Jan 29 11:25:24.831578 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:25:24.833492 systemd-logind[1477]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:25:24.835153 systemd-logind[1477]: Removed session 11. Jan 29 11:25:29.878712 systemd[1]: Started sshd@11-10.128.0.21:22-139.178.68.195:38420.service - OpenSSH per-connection server daemon (139.178.68.195:38420). Jan 29 11:25:30.182909 sshd[4143]: Accepted publickey for core from 139.178.68.195 port 38420 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:30.184844 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:30.191574 systemd-logind[1477]: New session 12 of user core. Jan 29 11:25:30.196648 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 11:25:30.474464 sshd[4145]: Connection closed by 139.178.68.195 port 38420 Jan 29 11:25:30.475782 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:30.479934 systemd[1]: sshd@11-10.128.0.21:22-139.178.68.195:38420.service: Deactivated successfully. Jan 29 11:25:30.482538 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:25:30.484914 systemd-logind[1477]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:25:30.487103 systemd-logind[1477]: Removed session 12. Jan 29 11:25:30.532766 systemd[1]: Started sshd@12-10.128.0.21:22-139.178.68.195:38432.service - OpenSSH per-connection server daemon (139.178.68.195:38432). Jan 29 11:25:30.827433 sshd[4157]: Accepted publickey for core from 139.178.68.195 port 38432 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:30.828837 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:30.836271 systemd-logind[1477]: New session 13 of user core. Jan 29 11:25:30.842664 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:25:31.170818 sshd[4159]: Connection closed by 139.178.68.195 port 38432 Jan 29 11:25:31.172566 sshd-session[4157]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:31.176905 systemd[1]: sshd@12-10.128.0.21:22-139.178.68.195:38432.service: Deactivated successfully. Jan 29 11:25:31.179852 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:25:31.181970 systemd-logind[1477]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:25:31.183407 systemd-logind[1477]: Removed session 13. Jan 29 11:25:31.231777 systemd[1]: Started sshd@13-10.128.0.21:22-139.178.68.195:38438.service - OpenSSH per-connection server daemon (139.178.68.195:38438). Jan 29 11:25:31.540850 sshd[4168]: Accepted publickey for core from 139.178.68.195 port 38438 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:31.542171 sshd-session[4168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:31.548664 systemd-logind[1477]: New session 14 of user core. Jan 29 11:25:31.558661 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 29 11:25:31.834261 sshd[4170]: Connection closed by 139.178.68.195 port 38438 Jan 29 11:25:31.835209 sshd-session[4168]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:31.841234 systemd[1]: sshd@13-10.128.0.21:22-139.178.68.195:38438.service: Deactivated successfully. Jan 29 11:25:31.843861 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:25:31.845220 systemd-logind[1477]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:25:31.847061 systemd-logind[1477]: Removed session 14. Jan 29 11:25:36.896872 systemd[1]: Started sshd@14-10.128.0.21:22-139.178.68.195:46794.service - OpenSSH per-connection server daemon (139.178.68.195:46794). Jan 29 11:25:37.192884 sshd[4183]: Accepted publickey for core from 139.178.68.195 port 46794 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:37.194820 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:37.200418 systemd-logind[1477]: New session 15 of user core. Jan 29 11:25:37.206625 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 11:25:37.486025 sshd[4185]: Connection closed by 139.178.68.195 port 46794 Jan 29 11:25:37.486653 sshd-session[4183]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:37.491302 systemd[1]: sshd@14-10.128.0.21:22-139.178.68.195:46794.service: Deactivated successfully. Jan 29 11:25:37.494246 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:25:37.496671 systemd-logind[1477]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:25:37.498799 systemd-logind[1477]: Removed session 15. Jan 29 11:25:42.543866 systemd[1]: Started sshd@15-10.128.0.21:22-139.178.68.195:46800.service - OpenSSH per-connection server daemon (139.178.68.195:46800). Jan 29 11:25:42.846047 sshd[4196]: Accepted publickey for core from 139.178.68.195 port 46800 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:42.847858 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:42.855093 systemd-logind[1477]: New session 16 of user core. Jan 29 11:25:42.860610 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:25:43.141506 sshd[4200]: Connection closed by 139.178.68.195 port 46800 Jan 29 11:25:43.142672 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:43.147977 systemd[1]: sshd@15-10.128.0.21:22-139.178.68.195:46800.service: Deactivated successfully. Jan 29 11:25:43.151015 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:25:43.152196 systemd-logind[1477]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:25:43.153735 systemd-logind[1477]: Removed session 16. Jan 29 11:25:43.196798 systemd[1]: Started sshd@16-10.128.0.21:22-139.178.68.195:46802.service - OpenSSH per-connection server daemon (139.178.68.195:46802). Jan 29 11:25:43.500056 sshd[4211]: Accepted publickey for core from 139.178.68.195 port 46802 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:43.502141 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:43.508918 systemd-logind[1477]: New session 17 of user core. Jan 29 11:25:43.514603 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 29 11:25:43.863551 sshd[4213]: Connection closed by 139.178.68.195 port 46802 Jan 29 11:25:43.865115 sshd-session[4211]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:43.869126 systemd[1]: sshd@16-10.128.0.21:22-139.178.68.195:46802.service: Deactivated successfully. Jan 29 11:25:43.872020 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:25:43.874090 systemd-logind[1477]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:25:43.876594 systemd-logind[1477]: Removed session 17. Jan 29 11:25:43.920888 systemd[1]: Started sshd@17-10.128.0.21:22-139.178.68.195:46806.service - OpenSSH per-connection server daemon (139.178.68.195:46806). Jan 29 11:25:44.218826 sshd[4221]: Accepted publickey for core from 139.178.68.195 port 46806 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:44.220662 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:44.227270 systemd-logind[1477]: New session 18 of user core. Jan 29 11:25:44.231569 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 11:25:46.116126 sshd[4223]: Connection closed by 139.178.68.195 port 46806 Jan 29 11:25:46.119920 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:46.124360 systemd[1]: sshd@17-10.128.0.21:22-139.178.68.195:46806.service: Deactivated successfully. Jan 29 11:25:46.129456 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:25:46.131784 systemd-logind[1477]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:25:46.133760 systemd-logind[1477]: Removed session 18. Jan 29 11:25:46.177804 systemd[1]: Started sshd@18-10.128.0.21:22-139.178.68.195:58836.service - OpenSSH per-connection server daemon (139.178.68.195:58836). Jan 29 11:25:46.475078 sshd[4239]: Accepted publickey for core from 139.178.68.195 port 58836 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:46.476945 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:46.482250 systemd-logind[1477]: New session 19 of user core. Jan 29 11:25:46.485900 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:25:46.898930 sshd[4241]: Connection closed by 139.178.68.195 port 58836 Jan 29 11:25:46.900478 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:46.905923 systemd[1]: sshd@18-10.128.0.21:22-139.178.68.195:58836.service: Deactivated successfully. Jan 29 11:25:46.908710 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:25:46.909883 systemd-logind[1477]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:25:46.911386 systemd-logind[1477]: Removed session 19. Jan 29 11:25:46.958820 systemd[1]: Started sshd@19-10.128.0.21:22-139.178.68.195:58852.service - OpenSSH per-connection server daemon (139.178.68.195:58852). Jan 29 11:25:47.254045 sshd[4250]: Accepted publickey for core from 139.178.68.195 port 58852 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:47.255994 sshd-session[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:47.261643 systemd-logind[1477]: New session 20 of user core. Jan 29 11:25:47.267570 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 29 11:25:47.546160 sshd[4252]: Connection closed by 139.178.68.195 port 58852 Jan 29 11:25:47.547711 sshd-session[4250]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:47.552819 systemd[1]: sshd@19-10.128.0.21:22-139.178.68.195:58852.service: Deactivated successfully. Jan 29 11:25:47.556119 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:25:47.557417 systemd-logind[1477]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:25:47.558896 systemd-logind[1477]: Removed session 20. Jan 29 11:25:52.606833 systemd[1]: Started sshd@20-10.128.0.21:22-139.178.68.195:58856.service - OpenSSH per-connection server daemon (139.178.68.195:58856). Jan 29 11:25:52.898102 sshd[4268]: Accepted publickey for core from 139.178.68.195 port 58856 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:52.899908 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:52.906449 systemd-logind[1477]: New session 21 of user core. Jan 29 11:25:52.913608 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:25:53.180273 sshd[4270]: Connection closed by 139.178.68.195 port 58856 Jan 29 11:25:53.181498 sshd-session[4268]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:53.186054 systemd[1]: sshd@20-10.128.0.21:22-139.178.68.195:58856.service: Deactivated successfully. Jan 29 11:25:53.189744 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:25:53.191897 systemd-logind[1477]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:25:53.193741 systemd-logind[1477]: Removed session 21. Jan 29 11:25:58.235757 systemd[1]: Started sshd@21-10.128.0.21:22-139.178.68.195:36856.service - OpenSSH per-connection server daemon (139.178.68.195:36856). Jan 29 11:25:58.539651 sshd[4281]: Accepted publickey for core from 139.178.68.195 port 36856 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:25:58.541635 sshd-session[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:25:58.548532 systemd-logind[1477]: New session 22 of user core. Jan 29 11:25:58.551618 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:25:58.835383 sshd[4283]: Connection closed by 139.178.68.195 port 36856 Jan 29 11:25:58.835023 sshd-session[4281]: pam_unix(sshd:session): session closed for user core Jan 29 11:25:58.842471 systemd[1]: sshd@21-10.128.0.21:22-139.178.68.195:36856.service: Deactivated successfully. Jan 29 11:25:58.846226 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:25:58.847689 systemd-logind[1477]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:25:58.849252 systemd-logind[1477]: Removed session 22. Jan 29 11:26:03.896257 systemd[1]: Started sshd@22-10.128.0.21:22-139.178.68.195:36870.service - OpenSSH per-connection server daemon (139.178.68.195:36870). Jan 29 11:26:04.198842 sshd[4293]: Accepted publickey for core from 139.178.68.195 port 36870 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:26:04.201235 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:26:04.208665 systemd-logind[1477]: New session 23 of user core. Jan 29 11:26:04.213683 systemd[1]: Started session-23.scope - Session 23 of User core. 
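Sessions 8 through 23 above all follow the same short cycle: sshd accepts a publickey login for user core from 139.178.68.195, systemd-logind opens a session scope, the connection closes moments later, and the scope is deactivated. If this journal text is saved to a file with one entry per line (node.log below is a hypothetical name), the session lengths can be pulled out with a small script along these lines:

    import re
    from datetime import datetime

    ACCEPT = re.compile(r"(\w{3} \d+ [\d:.]+) sshd\[\d+\]: Accepted publickey for \S+ from \S+ port (\d+)")
    CLOSE  = re.compile(r"(\w{3} \d+ [\d:.]+) sshd\[\d+\]: Connection closed by \S+ port (\d+)")

    def when(stamp):
        # Journal stamps here omit the year; 2025 is assumed from the surrounding log.
        return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

    opened = {}
    with open("node.log") as fh:              # hypothetical copy of this journal
        for line in fh:
            if (m := ACCEPT.search(line)):
                opened[m.group(2)] = when(m.group(1))
            elif (m := CLOSE.search(line)) and m.group(2) in opened:
                dur = when(m.group(1)) - opened.pop(m.group(2))
                print(f"port {m.group(2)}: open for {dur.total_seconds():.3f}s")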
Jan 29 11:26:04.493824 sshd[4295]: Connection closed by 139.178.68.195 port 36870 Jan 29 11:26:04.495004 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Jan 29 11:26:04.500049 systemd[1]: sshd@22-10.128.0.21:22-139.178.68.195:36870.service: Deactivated successfully. Jan 29 11:26:04.502949 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:26:04.505662 systemd-logind[1477]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:26:04.507574 systemd-logind[1477]: Removed session 23. Jan 29 11:26:04.549791 systemd[1]: Started sshd@23-10.128.0.21:22-139.178.68.195:36886.service - OpenSSH per-connection server daemon (139.178.68.195:36886). Jan 29 11:26:04.849966 sshd[4306]: Accepted publickey for core from 139.178.68.195 port 36886 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:26:04.852139 sshd-session[4306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:26:04.858447 systemd-logind[1477]: New session 24 of user core. Jan 29 11:26:04.865655 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 11:26:06.373448 systemd[1]: run-containerd-runc-k8s.io-b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1-runc.gwC6vv.mount: Deactivated successfully. Jan 29 11:26:06.389375 containerd[1493]: time="2025-01-29T11:26:06.388603672Z" level=info msg="StopContainer for \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\" with timeout 30 (s)" Jan 29 11:26:06.390640 containerd[1493]: time="2025-01-29T11:26:06.390064882Z" level=info msg="Stop container \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\" with signal terminated" Jan 29 11:26:06.414870 containerd[1493]: time="2025-01-29T11:26:06.414803296Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:26:06.420479 systemd[1]: cri-containerd-fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8.scope: Deactivated successfully. Jan 29 11:26:06.430977 containerd[1493]: time="2025-01-29T11:26:06.430763096Z" level=info msg="StopContainer for \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\" with timeout 2 (s)" Jan 29 11:26:06.431911 containerd[1493]: time="2025-01-29T11:26:06.431875953Z" level=info msg="Stop container \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\" with signal terminated" Jan 29 11:26:06.451688 systemd-networkd[1386]: lxc_health: Link DOWN Jan 29 11:26:06.451701 systemd-networkd[1386]: lxc_health: Lost carrier Jan 29 11:26:06.483191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8-rootfs.mount: Deactivated successfully. Jan 29 11:26:06.486053 systemd[1]: cri-containerd-b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1.scope: Deactivated successfully. Jan 29 11:26:06.487048 systemd[1]: cri-containerd-b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1.scope: Consumed 9.976s CPU time. 
Jan 29 11:26:06.521122 containerd[1493]: time="2025-01-29T11:26:06.520926127Z" level=info msg="shim disconnected" id=fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8 namespace=k8s.io Jan 29 11:26:06.521122 containerd[1493]: time="2025-01-29T11:26:06.521123807Z" level=warning msg="cleaning up after shim disconnected" id=fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8 namespace=k8s.io Jan 29 11:26:06.521122 containerd[1493]: time="2025-01-29T11:26:06.521144491Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:26:06.532900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1-rootfs.mount: Deactivated successfully. Jan 29 11:26:06.540255 containerd[1493]: time="2025-01-29T11:26:06.540164916Z" level=info msg="shim disconnected" id=b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1 namespace=k8s.io Jan 29 11:26:06.540255 containerd[1493]: time="2025-01-29T11:26:06.540252087Z" level=warning msg="cleaning up after shim disconnected" id=b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1 namespace=k8s.io Jan 29 11:26:06.540885 containerd[1493]: time="2025-01-29T11:26:06.540266342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:26:06.556059 containerd[1493]: time="2025-01-29T11:26:06.555932391Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:26:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:26:06.560962 containerd[1493]: time="2025-01-29T11:26:06.560914148Z" level=info msg="StopContainer for \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\" returns successfully" Jan 29 11:26:06.564510 containerd[1493]: time="2025-01-29T11:26:06.564418891Z" level=info msg="StopPodSandbox for \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\"" Jan 29 11:26:06.564659 containerd[1493]: time="2025-01-29T11:26:06.564511284Z" level=info msg="Container to stop \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:26:06.569151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27-shm.mount: Deactivated successfully. Jan 29 11:26:06.583149 systemd[1]: cri-containerd-b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27.scope: Deactivated successfully. 
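The "shim disconnected" / "cleaning up after shim disconnected" entries and the runc "exit status 255" cleanup warning logged here for the exiting cilium-agent and cilium-operator containers are the same pattern recorded earlier when the clean-cilium-state init container finished. A quick, illustrative way to list which container IDs triggered it in a saved copy of this journal (again using a hypothetical node.log) is:

    import re
    from collections import Counter

    SHIM = re.compile(r'msg="shim disconnected" id=([0-9a-f]{64})')

    hits = Counter()
    with open("node.log") as fh:              # hypothetical copy of this journal
        for line in fh:
            hits.update(SHIM.findall(line))

    for cid, n in hits.most_common():
        # One hit per exited task; print a short ID prefix for readability.
        print(n, cid[:12])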
Jan 29 11:26:06.584912 containerd[1493]: time="2025-01-29T11:26:06.584845979Z" level=info msg="StopContainer for \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\" returns successfully" Jan 29 11:26:06.586122 containerd[1493]: time="2025-01-29T11:26:06.586074641Z" level=info msg="StopPodSandbox for \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\"" Jan 29 11:26:06.586249 containerd[1493]: time="2025-01-29T11:26:06.586135529Z" level=info msg="Container to stop \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:26:06.586249 containerd[1493]: time="2025-01-29T11:26:06.586187251Z" level=info msg="Container to stop \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:26:06.586249 containerd[1493]: time="2025-01-29T11:26:06.586203062Z" level=info msg="Container to stop \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:26:06.586249 containerd[1493]: time="2025-01-29T11:26:06.586217625Z" level=info msg="Container to stop \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:26:06.586249 containerd[1493]: time="2025-01-29T11:26:06.586231588Z" level=info msg="Container to stop \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:26:06.600199 systemd[1]: cri-containerd-3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b.scope: Deactivated successfully. 
Jan 29 11:26:06.633738 containerd[1493]: time="2025-01-29T11:26:06.633557223Z" level=info msg="shim disconnected" id=b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27 namespace=k8s.io Jan 29 11:26:06.633738 containerd[1493]: time="2025-01-29T11:26:06.633640958Z" level=warning msg="cleaning up after shim disconnected" id=b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27 namespace=k8s.io Jan 29 11:26:06.633738 containerd[1493]: time="2025-01-29T11:26:06.633654437Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:26:06.648266 containerd[1493]: time="2025-01-29T11:26:06.648070256Z" level=info msg="shim disconnected" id=3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b namespace=k8s.io Jan 29 11:26:06.648266 containerd[1493]: time="2025-01-29T11:26:06.648266221Z" level=warning msg="cleaning up after shim disconnected" id=3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b namespace=k8s.io Jan 29 11:26:06.648672 containerd[1493]: time="2025-01-29T11:26:06.648283555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:26:06.670425 containerd[1493]: time="2025-01-29T11:26:06.670260786Z" level=info msg="TearDown network for sandbox \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\" successfully" Jan 29 11:26:06.670727 containerd[1493]: time="2025-01-29T11:26:06.670325463Z" level=info msg="StopPodSandbox for \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\" returns successfully" Jan 29 11:26:06.694743 containerd[1493]: time="2025-01-29T11:26:06.694687293Z" level=info msg="TearDown network for sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" successfully" Jan 29 11:26:06.694743 containerd[1493]: time="2025-01-29T11:26:06.694738815Z" level=info msg="StopPodSandbox for \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" returns successfully" Jan 29 11:26:06.755900 kubelet[2697]: I0129 11:26:06.755260 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-host-proc-sys-kernel\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.755900 kubelet[2697]: I0129 11:26:06.755279 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.755900 kubelet[2697]: I0129 11:26:06.755376 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-run\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.755900 kubelet[2697]: I0129 11:26:06.755419 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4bpg\" (UniqueName: \"kubernetes.io/projected/870488fe-68a9-4008-b25c-9a91d6df03ab-kube-api-access-c4bpg\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.755900 kubelet[2697]: I0129 11:26:06.755437 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.756922 kubelet[2697]: I0129 11:26:06.755449 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-xtables-lock\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.756922 kubelet[2697]: I0129 11:26:06.755473 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-host-proc-sys-net\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.756922 kubelet[2697]: I0129 11:26:06.755501 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/870488fe-68a9-4008-b25c-9a91d6df03ab-clustermesh-secrets\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.756922 kubelet[2697]: I0129 11:26:06.755534 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e-cilium-config-path\") pod \"2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e\" (UID: \"2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e\") " Jan 29 11:26:06.756922 kubelet[2697]: I0129 11:26:06.755563 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-config-path\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.756922 kubelet[2697]: I0129 11:26:06.755588 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-cgroup\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.758497 kubelet[2697]: I0129 11:26:06.755613 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cni-path\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.758497 kubelet[2697]: I0129 11:26:06.755636 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-hostproc\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.758497 kubelet[2697]: I0129 11:26:06.755663 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s842z\" (UniqueName: \"kubernetes.io/projected/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e-kube-api-access-s842z\") pod \"2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e\" (UID: \"2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e\") " Jan 29 11:26:06.758497 kubelet[2697]: I0129 11:26:06.755689 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-etc-cni-netd\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.758497 kubelet[2697]: I0129 11:26:06.755713 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-lib-modules\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.758497 kubelet[2697]: I0129 11:26:06.755738 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-bpf-maps\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.758819 kubelet[2697]: I0129 11:26:06.755788 2697 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/870488fe-68a9-4008-b25c-9a91d6df03ab-hubble-tls\") pod \"870488fe-68a9-4008-b25c-9a91d6df03ab\" (UID: \"870488fe-68a9-4008-b25c-9a91d6df03ab\") " Jan 29 11:26:06.758819 kubelet[2697]: I0129 11:26:06.755843 2697 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-host-proc-sys-kernel\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.758819 kubelet[2697]: I0129 11:26:06.755864 2697 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-run\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.759366 kubelet[2697]: I0129 11:26:06.759223 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cni-path" (OuterVolumeSpecName: "cni-path") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.759366 kubelet[2697]: I0129 11:26:06.759300 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.760201 kubelet[2697]: I0129 11:26:06.759162 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.760201 kubelet[2697]: I0129 11:26:06.759864 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-hostproc" (OuterVolumeSpecName: "hostproc") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.762454 kubelet[2697]: I0129 11:26:06.762415 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.762617 kubelet[2697]: I0129 11:26:06.762472 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.762617 kubelet[2697]: I0129 11:26:06.762500 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.762739 kubelet[2697]: I0129 11:26:06.762612 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870488fe-68a9-4008-b25c-9a91d6df03ab-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:26:06.763499 kubelet[2697]: I0129 11:26:06.763428 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:26:06.768728 kubelet[2697]: I0129 11:26:06.768452 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/870488fe-68a9-4008-b25c-9a91d6df03ab-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:26:06.768728 kubelet[2697]: I0129 11:26:06.768597 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870488fe-68a9-4008-b25c-9a91d6df03ab-kube-api-access-c4bpg" (OuterVolumeSpecName: "kube-api-access-c4bpg") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "kube-api-access-c4bpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:26:06.769384 kubelet[2697]: I0129 11:26:06.769322 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e-kube-api-access-s842z" (OuterVolumeSpecName: "kube-api-access-s842z") pod "2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e" (UID: "2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e"). InnerVolumeSpecName "kube-api-access-s842z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:26:06.770204 kubelet[2697]: I0129 11:26:06.770170 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "870488fe-68a9-4008-b25c-9a91d6df03ab" (UID: "870488fe-68a9-4008-b25c-9a91d6df03ab"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:26:06.771179 kubelet[2697]: I0129 11:26:06.771144 2697 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e" (UID: "2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:26:06.839799 kubelet[2697]: I0129 11:26:06.839754 2697 scope.go:117] "RemoveContainer" containerID="b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1" Jan 29 11:26:06.849389 containerd[1493]: time="2025-01-29T11:26:06.848175073Z" level=info msg="RemoveContainer for \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\"" Jan 29 11:26:06.851229 systemd[1]: Removed slice kubepods-burstable-pod870488fe_68a9_4008_b25c_9a91d6df03ab.slice - libcontainer container kubepods-burstable-pod870488fe_68a9_4008_b25c_9a91d6df03ab.slice. Jan 29 11:26:06.851727 systemd[1]: kubepods-burstable-pod870488fe_68a9_4008_b25c_9a91d6df03ab.slice: Consumed 10.113s CPU time. 
Jan 29 11:26:06.858267 kubelet[2697]: I0129 11:26:06.858197 2697 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cni-path\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.858963 kubelet[2697]: I0129 11:26:06.858911 2697 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-hostproc\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.860092 systemd[1]: Removed slice kubepods-besteffort-pod2e51c5f5_1fac_4fcd_ba19_63c430c4ee7e.slice - libcontainer container kubepods-besteffort-pod2e51c5f5_1fac_4fcd_ba19_63c430c4ee7e.slice. Jan 29 11:26:06.861087 kubelet[2697]: I0129 11:26:06.860944 2697 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-s842z\" (UniqueName: \"kubernetes.io/projected/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e-kube-api-access-s842z\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861189 kubelet[2697]: I0129 11:26:06.861092 2697 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-cgroup\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861189 kubelet[2697]: I0129 11:26:06.861110 2697 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-etc-cni-netd\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861317 kubelet[2697]: I0129 11:26:06.861250 2697 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-bpf-maps\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861317 kubelet[2697]: I0129 11:26:06.861275 2697 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/870488fe-68a9-4008-b25c-9a91d6df03ab-hubble-tls\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861728 kubelet[2697]: I0129 11:26:06.861480 2697 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-lib-modules\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861728 kubelet[2697]: I0129 11:26:06.861506 2697 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-xtables-lock\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861846 containerd[1493]: time="2025-01-29T11:26:06.861377912Z" level=info msg="RemoveContainer for \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\" returns successfully" Jan 29 11:26:06.861911 kubelet[2697]: I0129 11:26:06.861522 2697 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/870488fe-68a9-4008-b25c-9a91d6df03ab-host-proc-sys-net\") on node 
\"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861911 kubelet[2697]: I0129 11:26:06.861862 2697 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/870488fe-68a9-4008-b25c-9a91d6df03ab-clustermesh-secrets\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.861911 kubelet[2697]: I0129 11:26:06.861881 2697 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-c4bpg\" (UniqueName: \"kubernetes.io/projected/870488fe-68a9-4008-b25c-9a91d6df03ab-kube-api-access-c4bpg\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.862509 kubelet[2697]: I0129 11:26:06.862469 2697 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e-cilium-config-path\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.862509 kubelet[2697]: I0129 11:26:06.862500 2697 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870488fe-68a9-4008-b25c-9a91d6df03ab-cilium-config-path\") on node \"ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal\" DevicePath \"\"" Jan 29 11:26:06.864117 kubelet[2697]: I0129 11:26:06.864082 2697 scope.go:117] "RemoveContainer" containerID="a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763" Jan 29 11:26:06.867879 containerd[1493]: time="2025-01-29T11:26:06.867618836Z" level=info msg="RemoveContainer for \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\"" Jan 29 11:26:06.879351 containerd[1493]: time="2025-01-29T11:26:06.879124050Z" level=info msg="RemoveContainer for \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\" returns successfully" Jan 29 11:26:06.883236 kubelet[2697]: I0129 11:26:06.882771 2697 scope.go:117] "RemoveContainer" containerID="1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7" Jan 29 11:26:06.890995 containerd[1493]: time="2025-01-29T11:26:06.887863310Z" level=info msg="RemoveContainer for \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\"" Jan 29 11:26:06.895413 containerd[1493]: time="2025-01-29T11:26:06.895116139Z" level=info msg="RemoveContainer for \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\" returns successfully" Jan 29 11:26:06.896201 kubelet[2697]: I0129 11:26:06.896163 2697 scope.go:117] "RemoveContainer" containerID="45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512" Jan 29 11:26:06.901055 containerd[1493]: time="2025-01-29T11:26:06.901009846Z" level=info msg="RemoveContainer for \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\"" Jan 29 11:26:06.906079 containerd[1493]: time="2025-01-29T11:26:06.906034717Z" level=info msg="RemoveContainer for \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\" returns successfully" Jan 29 11:26:06.907441 kubelet[2697]: I0129 11:26:06.907186 2697 scope.go:117] "RemoveContainer" containerID="edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1" Jan 29 11:26:06.909818 containerd[1493]: time="2025-01-29T11:26:06.909747536Z" level=info msg="RemoveContainer for \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\"" Jan 29 11:26:06.914127 containerd[1493]: time="2025-01-29T11:26:06.914076796Z" level=info 
msg="RemoveContainer for \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\" returns successfully" Jan 29 11:26:06.914853 kubelet[2697]: I0129 11:26:06.914694 2697 scope.go:117] "RemoveContainer" containerID="b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1" Jan 29 11:26:06.915323 containerd[1493]: time="2025-01-29T11:26:06.915275901Z" level=error msg="ContainerStatus for \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\": not found" Jan 29 11:26:06.915876 kubelet[2697]: E0129 11:26:06.915835 2697 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\": not found" containerID="b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1" Jan 29 11:26:06.916019 kubelet[2697]: I0129 11:26:06.915891 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1"} err="failed to get container status \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7e982bc99d65f1dab0391269cae49f8da8ab64aae87f6983cc9bf6ad3d4aca1\": not found" Jan 29 11:26:06.916107 kubelet[2697]: I0129 11:26:06.916025 2697 scope.go:117] "RemoveContainer" containerID="a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763" Jan 29 11:26:06.916318 containerd[1493]: time="2025-01-29T11:26:06.916275437Z" level=error msg="ContainerStatus for \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\": not found" Jan 29 11:26:06.916645 kubelet[2697]: E0129 11:26:06.916597 2697 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\": not found" containerID="a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763" Jan 29 11:26:06.916778 kubelet[2697]: I0129 11:26:06.916636 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763"} err="failed to get container status \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\": rpc error: code = NotFound desc = an error occurred when try to find container \"a02cb16dad4291a7a83bdf5390df6f4c59d2d719e74d0e4338912eefa18f8763\": not found" Jan 29 11:26:06.916778 kubelet[2697]: I0129 11:26:06.916670 2697 scope.go:117] "RemoveContainer" containerID="1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7" Jan 29 11:26:06.917010 containerd[1493]: time="2025-01-29T11:26:06.916889265Z" level=error msg="ContainerStatus for \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\": not found" Jan 29 11:26:06.917379 kubelet[2697]: E0129 11:26:06.917297 2697 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\": not found" containerID="1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7" Jan 29 11:26:06.917473 kubelet[2697]: I0129 11:26:06.917374 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7"} err="failed to get container status \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1a70ebbf4bed48d1ceeac7463936c76cbf7a877e4fe6f5da197754570d1555d7\": not found" Jan 29 11:26:06.917473 kubelet[2697]: I0129 11:26:06.917432 2697 scope.go:117] "RemoveContainer" containerID="45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512" Jan 29 11:26:06.917844 containerd[1493]: time="2025-01-29T11:26:06.917802674Z" level=error msg="ContainerStatus for \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\": not found" Jan 29 11:26:06.918217 kubelet[2697]: E0129 11:26:06.918052 2697 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\": not found" containerID="45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512" Jan 29 11:26:06.918217 kubelet[2697]: I0129 11:26:06.918085 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512"} err="failed to get container status \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\": rpc error: code = NotFound desc = an error occurred when try to find container \"45cfd619061b30a756d86247139986d84b3e79ba81ae2501a8ab95b1a67f8512\": not found" Jan 29 11:26:06.918217 kubelet[2697]: I0129 11:26:06.918108 2697 scope.go:117] "RemoveContainer" containerID="edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1" Jan 29 11:26:06.918733 kubelet[2697]: E0129 11:26:06.918696 2697 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\": not found" containerID="edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1" Jan 29 11:26:06.918798 containerd[1493]: time="2025-01-29T11:26:06.918471160Z" level=error msg="ContainerStatus for \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\": not found" Jan 29 11:26:06.918859 kubelet[2697]: I0129 11:26:06.918735 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1"} err="failed to get container status \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"edd39428e821dd99ad66e5812579fd7b6a8702b2fe0fbb0f49e8fe27cd10bde1\": not found" Jan 29 11:26:06.918859 kubelet[2697]: I0129 11:26:06.918764 2697 scope.go:117] "RemoveContainer" containerID="fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8" Jan 29 11:26:06.920674 containerd[1493]: time="2025-01-29T11:26:06.920642444Z" level=info msg="RemoveContainer for \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\"" Jan 29 11:26:06.925221 containerd[1493]: time="2025-01-29T11:26:06.925146625Z" level=info msg="RemoveContainer for \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\" returns successfully" Jan 29 11:26:06.928389 kubelet[2697]: I0129 11:26:06.926309 2697 scope.go:117] "RemoveContainer" containerID="fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8" Jan 29 11:26:06.928389 kubelet[2697]: E0129 11:26:06.927507 2697 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\": not found" containerID="fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8" Jan 29 11:26:06.928389 kubelet[2697]: I0129 11:26:06.927538 2697 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8"} err="failed to get container status \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\": not found" Jan 29 11:26:06.928628 containerd[1493]: time="2025-01-29T11:26:06.926635971Z" level=error msg="ContainerStatus for \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc80df35533b1570c17a4b1130f2dcd1ca8c16ca909eef59d98190d4957cb4d8\": not found" Jan 29 11:26:07.362967 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27-rootfs.mount: Deactivated successfully. Jan 29 11:26:07.363112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b-rootfs.mount: Deactivated successfully. Jan 29 11:26:07.363210 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b-shm.mount: Deactivated successfully. Jan 29 11:26:07.363512 systemd[1]: var-lib-kubelet-pods-2e51c5f5\x2d1fac\x2d4fcd\x2dba19\x2d63c430c4ee7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds842z.mount: Deactivated successfully. Jan 29 11:26:07.363711 systemd[1]: var-lib-kubelet-pods-870488fe\x2d68a9\x2d4008\x2db25c\x2d9a91d6df03ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc4bpg.mount: Deactivated successfully. Jan 29 11:26:07.363831 systemd[1]: var-lib-kubelet-pods-870488fe\x2d68a9\x2d4008\x2db25c\x2d9a91d6df03ab-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:26:07.363947 systemd[1]: var-lib-kubelet-pods-870488fe\x2d68a9\x2d4008\x2db25c\x2d9a91d6df03ab-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 29 11:26:07.432142 kubelet[2697]: I0129 11:26:07.432079 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e" path="/var/lib/kubelet/pods/2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e/volumes" Jan 29 11:26:07.432722 kubelet[2697]: I0129 11:26:07.432697 2697 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870488fe-68a9-4008-b25c-9a91d6df03ab" path="/var/lib/kubelet/pods/870488fe-68a9-4008-b25c-9a91d6df03ab/volumes" Jan 29 11:26:07.462613 containerd[1493]: time="2025-01-29T11:26:07.462522570Z" level=info msg="StopPodSandbox for \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\"" Jan 29 11:26:07.463184 containerd[1493]: time="2025-01-29T11:26:07.462666462Z" level=info msg="TearDown network for sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" successfully" Jan 29 11:26:07.463184 containerd[1493]: time="2025-01-29T11:26:07.462731605Z" level=info msg="StopPodSandbox for \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" returns successfully" Jan 29 11:26:07.463576 containerd[1493]: time="2025-01-29T11:26:07.463528098Z" level=info msg="RemovePodSandbox for \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\"" Jan 29 11:26:07.463576 containerd[1493]: time="2025-01-29T11:26:07.463569746Z" level=info msg="Forcibly stopping sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\"" Jan 29 11:26:07.463745 containerd[1493]: time="2025-01-29T11:26:07.463649635Z" level=info msg="TearDown network for sandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" successfully" Jan 29 11:26:07.469371 containerd[1493]: time="2025-01-29T11:26:07.468796826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:26:07.469371 containerd[1493]: time="2025-01-29T11:26:07.468876164Z" level=info msg="RemovePodSandbox \"3e9207d4e5dc38784061a88366e1bf330524dc113942a382eb025cb2d883412b\" returns successfully" Jan 29 11:26:07.470759 containerd[1493]: time="2025-01-29T11:26:07.470714032Z" level=info msg="StopPodSandbox for \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\"" Jan 29 11:26:07.470870 containerd[1493]: time="2025-01-29T11:26:07.470836016Z" level=info msg="TearDown network for sandbox \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\" successfully" Jan 29 11:26:07.470944 containerd[1493]: time="2025-01-29T11:26:07.470853876Z" level=info msg="StopPodSandbox for \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\" returns successfully" Jan 29 11:26:07.472454 containerd[1493]: time="2025-01-29T11:26:07.471394608Z" level=info msg="RemovePodSandbox for \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\"" Jan 29 11:26:07.472454 containerd[1493]: time="2025-01-29T11:26:07.471430787Z" level=info msg="Forcibly stopping sandbox \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\"" Jan 29 11:26:07.472454 containerd[1493]: time="2025-01-29T11:26:07.471508052Z" level=info msg="TearDown network for sandbox \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\" successfully" Jan 29 11:26:07.476254 containerd[1493]: time="2025-01-29T11:26:07.476209551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:26:07.476254 containerd[1493]: time="2025-01-29T11:26:07.476277850Z" level=info msg="RemovePodSandbox \"b9e942969d989713e5fb6c6b43ec97492da123d70058dd7902701a365ee17e27\" returns successfully" Jan 29 11:26:07.608783 kubelet[2697]: E0129 11:26:07.608708 2697 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:26:08.321702 sshd[4308]: Connection closed by 139.178.68.195 port 36886 Jan 29 11:26:08.321462 sshd-session[4306]: pam_unix(sshd:session): session closed for user core Jan 29 11:26:08.327672 systemd[1]: sshd@23-10.128.0.21:22-139.178.68.195:36886.service: Deactivated successfully. Jan 29 11:26:08.330866 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:26:08.335102 systemd-logind[1477]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:26:08.339200 systemd-logind[1477]: Removed session 24. Jan 29 11:26:08.379546 systemd[1]: Started sshd@24-10.128.0.21:22-139.178.68.195:49900.service - OpenSSH per-connection server daemon (139.178.68.195:49900). Jan 29 11:26:08.687567 sshd[4470]: Accepted publickey for core from 139.178.68.195 port 49900 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:26:08.689439 sshd-session[4470]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:26:08.697410 systemd-logind[1477]: New session 25 of user core. Jan 29 11:26:08.704683 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 29 11:26:09.309051 ntpd[1461]: Deleting interface #12 lxc_health, fe80::e4a0:b2ff:fe09:53fe%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Jan 29 11:26:09.309624 ntpd[1461]: 29 Jan 11:26:09 ntpd[1461]: Deleting interface #12 lxc_health, fe80::e4a0:b2ff:fe09:53fe%8#123, interface stats: received=0, sent=0, dropped=0, active_time=79 secs Jan 29 11:26:09.555307 kubelet[2697]: I0129 11:26:09.555235 2697 topology_manager.go:215] "Topology Admit Handler" podUID="9b103b5f-b355-43e0-a7a1-620f25511822" podNamespace="kube-system" podName="cilium-kznx8" Jan 29 11:26:09.557417 kubelet[2697]: E0129 11:26:09.556406 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870488fe-68a9-4008-b25c-9a91d6df03ab" containerName="apply-sysctl-overwrites" Jan 29 11:26:09.557417 kubelet[2697]: E0129 11:26:09.556460 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870488fe-68a9-4008-b25c-9a91d6df03ab" containerName="mount-bpf-fs" Jan 29 11:26:09.557417 kubelet[2697]: E0129 11:26:09.556472 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e" containerName="cilium-operator" Jan 29 11:26:09.557417 kubelet[2697]: E0129 11:26:09.556481 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870488fe-68a9-4008-b25c-9a91d6df03ab" containerName="clean-cilium-state" Jan 29 11:26:09.557417 kubelet[2697]: E0129 11:26:09.556497 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870488fe-68a9-4008-b25c-9a91d6df03ab" containerName="mount-cgroup" Jan 29 11:26:09.557417 kubelet[2697]: E0129 11:26:09.556508 2697 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="870488fe-68a9-4008-b25c-9a91d6df03ab" containerName="cilium-agent" Jan 29 11:26:09.557417 kubelet[2697]: I0129 11:26:09.556674 2697 memory_manager.go:354] "RemoveStaleState removing state" podUID="870488fe-68a9-4008-b25c-9a91d6df03ab" containerName="cilium-agent" Jan 29 11:26:09.557417 kubelet[2697]: I0129 11:26:09.556692 2697 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e51c5f5-1fac-4fcd-ba19-63c430c4ee7e" containerName="cilium-operator" Jan 29 11:26:09.574460 systemd[1]: Created slice kubepods-burstable-pod9b103b5f_b355_43e0_a7a1_620f25511822.slice - libcontainer container kubepods-burstable-pod9b103b5f_b355_43e0_a7a1_620f25511822.slice. Jan 29 11:26:09.581551 sshd[4472]: Connection closed by 139.178.68.195 port 49900 Jan 29 11:26:09.582929 sshd-session[4470]: pam_unix(sshd:session): session closed for user core Jan 29 11:26:09.596301 systemd[1]: sshd@24-10.128.0.21:22-139.178.68.195:49900.service: Deactivated successfully. Jan 29 11:26:09.604942 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 11:26:09.612849 systemd-logind[1477]: Session 25 logged out. Waiting for processes to exit. Jan 29 11:26:09.615215 systemd-logind[1477]: Removed session 25. Jan 29 11:26:09.641869 systemd[1]: Started sshd@25-10.128.0.21:22-139.178.68.195:49902.service - OpenSSH per-connection server daemon (139.178.68.195:49902). 
Jan 29 11:26:09.681291 kubelet[2697]: I0129 11:26:09.680662 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b103b5f-b355-43e0-a7a1-620f25511822-clustermesh-secrets\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681291 kubelet[2697]: I0129 11:26:09.680722 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-host-proc-sys-kernel\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681291 kubelet[2697]: I0129 11:26:09.680763 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-xtables-lock\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681291 kubelet[2697]: I0129 11:26:09.680789 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b103b5f-b355-43e0-a7a1-620f25511822-cilium-ipsec-secrets\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681291 kubelet[2697]: I0129 11:26:09.680820 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vv7h\" (UniqueName: \"kubernetes.io/projected/9b103b5f-b355-43e0-a7a1-620f25511822-kube-api-access-9vv7h\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681733 kubelet[2697]: I0129 11:26:09.680850 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-bpf-maps\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681733 kubelet[2697]: I0129 11:26:09.680879 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-hostproc\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681733 kubelet[2697]: I0129 11:26:09.680905 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-cilium-cgroup\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681733 kubelet[2697]: I0129 11:26:09.680935 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-lib-modules\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681733 kubelet[2697]: I0129 11:26:09.680971 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-cilium-run\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.681733 kubelet[2697]: I0129 11:26:09.680999 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b103b5f-b355-43e0-a7a1-620f25511822-hubble-tls\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.682066 kubelet[2697]: I0129 11:26:09.681025 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b103b5f-b355-43e0-a7a1-620f25511822-cilium-config-path\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.682066 kubelet[2697]: I0129 11:26:09.681052 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-cni-path\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.682066 kubelet[2697]: I0129 11:26:09.681079 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-etc-cni-netd\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.682066 kubelet[2697]: I0129 11:26:09.681107 2697 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b103b5f-b355-43e0-a7a1-620f25511822-host-proc-sys-net\") pod \"cilium-kznx8\" (UID: \"9b103b5f-b355-43e0-a7a1-620f25511822\") " pod="kube-system/cilium-kznx8" Jan 29 11:26:09.887133 containerd[1493]: time="2025-01-29T11:26:09.886959193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kznx8,Uid:9b103b5f-b355-43e0-a7a1-620f25511822,Namespace:kube-system,Attempt:0,}" Jan 29 11:26:09.930000 containerd[1493]: time="2025-01-29T11:26:09.929809149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:26:09.930302 containerd[1493]: time="2025-01-29T11:26:09.929925627Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:26:09.930302 containerd[1493]: time="2025-01-29T11:26:09.930046725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:26:09.930501 containerd[1493]: time="2025-01-29T11:26:09.930266590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:26:09.953606 systemd[1]: Started cri-containerd-c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8.scope - libcontainer container c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8. 
Jan 29 11:26:09.974058 sshd[4483]: Accepted publickey for core from 139.178.68.195 port 49902 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:26:09.977162 sshd-session[4483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:26:09.992070 systemd-logind[1477]: New session 26 of user core. Jan 29 11:26:09.998586 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 11:26:10.015585 kubelet[2697]: I0129 11:26:10.014832 2697 setters.go:580] "Node became not ready" node="ci-4152-2-0-087b38d015dfbd817921.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:26:10Z","lastTransitionTime":"2025-01-29T11:26:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 11:26:10.020554 containerd[1493]: time="2025-01-29T11:26:10.020313646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kznx8,Uid:9b103b5f-b355-43e0-a7a1-620f25511822,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\"" Jan 29 11:26:10.028110 containerd[1493]: time="2025-01-29T11:26:10.028061147Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:26:10.047440 containerd[1493]: time="2025-01-29T11:26:10.047257118Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ffe0fddabe935c100219df6dd0d2e355d7f46388cc25dfde767305cc38422c7\"" Jan 29 11:26:10.048515 containerd[1493]: time="2025-01-29T11:26:10.048478308Z" level=info msg="StartContainer for \"0ffe0fddabe935c100219df6dd0d2e355d7f46388cc25dfde767305cc38422c7\"" Jan 29 11:26:10.086651 systemd[1]: Started cri-containerd-0ffe0fddabe935c100219df6dd0d2e355d7f46388cc25dfde767305cc38422c7.scope - libcontainer container 0ffe0fddabe935c100219df6dd0d2e355d7f46388cc25dfde767305cc38422c7. Jan 29 11:26:10.124980 containerd[1493]: time="2025-01-29T11:26:10.124764410Z" level=info msg="StartContainer for \"0ffe0fddabe935c100219df6dd0d2e355d7f46388cc25dfde767305cc38422c7\" returns successfully" Jan 29 11:26:10.136564 systemd[1]: cri-containerd-0ffe0fddabe935c100219df6dd0d2e355d7f46388cc25dfde767305cc38422c7.scope: Deactivated successfully. Jan 29 11:26:10.182973 containerd[1493]: time="2025-01-29T11:26:10.182786558Z" level=info msg="shim disconnected" id=0ffe0fddabe935c100219df6dd0d2e355d7f46388cc25dfde767305cc38422c7 namespace=k8s.io Jan 29 11:26:10.182973 containerd[1493]: time="2025-01-29T11:26:10.182863715Z" level=warning msg="cleaning up after shim disconnected" id=0ffe0fddabe935c100219df6dd0d2e355d7f46388cc25dfde767305cc38422c7 namespace=k8s.io Jan 29 11:26:10.182973 containerd[1493]: time="2025-01-29T11:26:10.182878949Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:26:10.187331 sshd[4529]: Connection closed by 139.178.68.195 port 49902 Jan 29 11:26:10.189280 sshd-session[4483]: pam_unix(sshd:session): session closed for user core Jan 29 11:26:10.200494 systemd[1]: sshd@25-10.128.0.21:22-139.178.68.195:49902.service: Deactivated successfully. Jan 29 11:26:10.204637 systemd[1]: session-26.scope: Deactivated successfully. 
Jan 29 11:26:10.206272 systemd-logind[1477]: Session 26 logged out. Waiting for processes to exit. Jan 29 11:26:10.208974 systemd-logind[1477]: Removed session 26. Jan 29 11:26:10.245777 systemd[1]: Started sshd@26-10.128.0.21:22-139.178.68.195:49914.service - OpenSSH per-connection server daemon (139.178.68.195:49914). Jan 29 11:26:10.546279 sshd[4601]: Accepted publickey for core from 139.178.68.195 port 49914 ssh2: RSA SHA256:TKm17rOOJGvnaSUIt3oFUlUbRDedEa602jCeeiSwRLI Jan 29 11:26:10.548438 sshd-session[4601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:26:10.554586 systemd-logind[1477]: New session 27 of user core. Jan 29 11:26:10.561638 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 11:26:10.863083 containerd[1493]: time="2025-01-29T11:26:10.862927298Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:26:10.895999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788106611.mount: Deactivated successfully. Jan 29 11:26:10.897446 containerd[1493]: time="2025-01-29T11:26:10.896285984Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7\"" Jan 29 11:26:10.902552 containerd[1493]: time="2025-01-29T11:26:10.901630898Z" level=info msg="StartContainer for \"526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7\"" Jan 29 11:26:10.953572 systemd[1]: Started cri-containerd-526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7.scope - libcontainer container 526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7. Jan 29 11:26:10.996400 containerd[1493]: time="2025-01-29T11:26:10.995769697Z" level=info msg="StartContainer for \"526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7\" returns successfully" Jan 29 11:26:11.004821 systemd[1]: cri-containerd-526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7.scope: Deactivated successfully. Jan 29 11:26:11.044650 containerd[1493]: time="2025-01-29T11:26:11.044525760Z" level=info msg="shim disconnected" id=526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7 namespace=k8s.io Jan 29 11:26:11.044650 containerd[1493]: time="2025-01-29T11:26:11.044623122Z" level=warning msg="cleaning up after shim disconnected" id=526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7 namespace=k8s.io Jan 29 11:26:11.044650 containerd[1493]: time="2025-01-29T11:26:11.044641046Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:26:11.789701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-526fa6ad66996a80632349406af9bb8906f0bfa785c8966c3287f52a108f7fe7-rootfs.mount: Deactivated successfully. 
Jan 29 11:26:11.869773 containerd[1493]: time="2025-01-29T11:26:11.869539671Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:26:11.912689 containerd[1493]: time="2025-01-29T11:26:11.912503811Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8\"" Jan 29 11:26:11.915387 containerd[1493]: time="2025-01-29T11:26:11.913824103Z" level=info msg="StartContainer for \"f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8\"" Jan 29 11:26:11.964589 systemd[1]: Started cri-containerd-f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8.scope - libcontainer container f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8. Jan 29 11:26:12.022927 containerd[1493]: time="2025-01-29T11:26:12.022398433Z" level=info msg="StartContainer for \"f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8\" returns successfully" Jan 29 11:26:12.027564 systemd[1]: cri-containerd-f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8.scope: Deactivated successfully. Jan 29 11:26:12.070506 containerd[1493]: time="2025-01-29T11:26:12.070000047Z" level=info msg="shim disconnected" id=f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8 namespace=k8s.io Jan 29 11:26:12.070506 containerd[1493]: time="2025-01-29T11:26:12.070079535Z" level=warning msg="cleaning up after shim disconnected" id=f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8 namespace=k8s.io Jan 29 11:26:12.070506 containerd[1493]: time="2025-01-29T11:26:12.070093764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:26:12.610108 kubelet[2697]: E0129 11:26:12.610025 2697 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:26:12.789746 systemd[1]: run-containerd-runc-k8s.io-f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8-runc.sWiBZT.mount: Deactivated successfully. Jan 29 11:26:12.789897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f20d8e17ccb29d1846a62b68406290343be06ef3c7649ed510db2b557e9eb2e8-rootfs.mount: Deactivated successfully. Jan 29 11:26:12.876081 containerd[1493]: time="2025-01-29T11:26:12.875593762Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:26:12.905431 containerd[1493]: time="2025-01-29T11:26:12.905124649Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397\"" Jan 29 11:26:12.906761 containerd[1493]: time="2025-01-29T11:26:12.906709716Z" level=info msg="StartContainer for \"1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397\"" Jan 29 11:26:12.954581 systemd[1]: Started cri-containerd-1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397.scope - libcontainer container 1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397. 
Jan 29 11:26:12.995783 systemd[1]: cri-containerd-1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397.scope: Deactivated successfully. Jan 29 11:26:13.001553 containerd[1493]: time="2025-01-29T11:26:13.001155182Z" level=info msg="StartContainer for \"1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397\" returns successfully" Jan 29 11:26:13.035583 containerd[1493]: time="2025-01-29T11:26:13.035198545Z" level=info msg="shim disconnected" id=1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397 namespace=k8s.io Jan 29 11:26:13.035583 containerd[1493]: time="2025-01-29T11:26:13.035275339Z" level=warning msg="cleaning up after shim disconnected" id=1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397 namespace=k8s.io Jan 29 11:26:13.035583 containerd[1493]: time="2025-01-29T11:26:13.035293275Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:26:13.055674 containerd[1493]: time="2025-01-29T11:26:13.055590265Z" level=warning msg="cleanup warnings time=\"2025-01-29T11:26:13Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 11:26:13.790239 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1578d364ebef11fd0d18a383ea65b7f6f5ee313f0520a08fa2eb70bbdc042397-rootfs.mount: Deactivated successfully. Jan 29 11:26:13.879030 containerd[1493]: time="2025-01-29T11:26:13.878975486Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:26:13.903631 containerd[1493]: time="2025-01-29T11:26:13.903315481Z" level=info msg="CreateContainer within sandbox \"c7e74479932374b89faaadbf4ed37b3246a0f041fee2cfb429e1b611330af9d8\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7718e242c0a7019605d207ed136181a583f781f21a6ba5708bebf25b6734a7d3\"" Jan 29 11:26:13.905255 containerd[1493]: time="2025-01-29T11:26:13.905211703Z" level=info msg="StartContainer for \"7718e242c0a7019605d207ed136181a583f781f21a6ba5708bebf25b6734a7d3\"" Jan 29 11:26:13.963645 systemd[1]: Started cri-containerd-7718e242c0a7019605d207ed136181a583f781f21a6ba5708bebf25b6734a7d3.scope - libcontainer container 7718e242c0a7019605d207ed136181a583f781f21a6ba5708bebf25b6734a7d3. Jan 29 11:26:14.011722 containerd[1493]: time="2025-01-29T11:26:14.011657658Z" level=info msg="StartContainer for \"7718e242c0a7019605d207ed136181a583f781f21a6ba5708bebf25b6734a7d3\" returns successfully" Jan 29 11:26:14.517393 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 11:26:14.790180 systemd[1]: run-containerd-runc-k8s.io-7718e242c0a7019605d207ed136181a583f781f21a6ba5708bebf25b6734a7d3-runc.jgfSiW.mount: Deactivated successfully. 
Jan 29 11:26:17.158259 kubelet[2697]: E0129 11:26:17.158140 2697 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40502->127.0.0.1:35809: write tcp 127.0.0.1:40502->127.0.0.1:35809: write: broken pipe Jan 29 11:26:17.776695 systemd-networkd[1386]: lxc_health: Link UP Jan 29 11:26:17.788850 systemd-networkd[1386]: lxc_health: Gained carrier Jan 29 11:26:17.927173 kubelet[2697]: I0129 11:26:17.927089 2697 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kznx8" podStartSLOduration=8.927065834 podStartE2EDuration="8.927065834s" podCreationTimestamp="2025-01-29 11:26:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:26:14.910122259 +0000 UTC m=+127.624689803" watchObservedRunningTime="2025-01-29 11:26:17.927065834 +0000 UTC m=+130.641633365" Jan 29 11:26:19.478178 systemd-networkd[1386]: lxc_health: Gained IPv6LL Jan 29 11:26:22.309132 ntpd[1461]: Listen normally on 15 lxc_health [fe80::7cdb:eaff:fe48:6f77%14]:123 Jan 29 11:26:22.309900 ntpd[1461]: 29 Jan 11:26:22 ntpd[1461]: Listen normally on 15 lxc_health [fe80::7cdb:eaff:fe48:6f77%14]:123 Jan 29 11:26:23.822179 systemd[1]: run-containerd-runc-k8s.io-7718e242c0a7019605d207ed136181a583f781f21a6ba5708bebf25b6734a7d3-runc.ycGCbp.mount: Deactivated successfully. Jan 29 11:26:26.124835 sshd[4604]: Connection closed by 139.178.68.195 port 49914 Jan 29 11:26:26.125933 sshd-session[4601]: pam_unix(sshd:session): session closed for user core Jan 29 11:26:26.131925 systemd[1]: sshd@26-10.128.0.21:22-139.178.68.195:49914.service: Deactivated successfully. Jan 29 11:26:26.134745 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 11:26:26.136054 systemd-logind[1477]: Session 27 logged out. Waiting for processes to exit. Jan 29 11:26:26.138079 systemd-logind[1477]: Removed session 27.