Jan 29 12:03:29.068542 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:03:29.068584 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:29.068603 kernel: BIOS-provided physical RAM map: Jan 29 12:03:29.068617 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 29 12:03:29.068630 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 29 12:03:29.068644 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 29 12:03:29.068661 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 29 12:03:29.068678 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 29 12:03:29.068692 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 29 12:03:29.068706 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 29 12:03:29.068720 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 29 12:03:29.068735 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 29 12:03:29.068749 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 29 12:03:29.068763 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 29 12:03:29.068784 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 29 12:03:29.068800 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 29 12:03:29.068816 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 29 12:03:29.068839 kernel: NX (Execute Disable) protection: active Jan 29 12:03:29.068855 kernel: APIC: Static calls initialized Jan 29 12:03:29.068871 kernel: efi: EFI v2.7 by EDK II Jan 29 12:03:29.068886 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 29 12:03:29.068902 kernel: SMBIOS 2.4 present. 
Jan 29 12:03:29.068917 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 29 12:03:29.068933 kernel: Hypervisor detected: KVM Jan 29 12:03:29.068952 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:03:29.068967 kernel: kvm-clock: using sched offset of 12132403738 cycles Jan 29 12:03:29.068983 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:03:29.068999 kernel: tsc: Detected 2299.998 MHz processor Jan 29 12:03:29.069015 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:03:29.069031 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:03:29.069047 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 29 12:03:29.069063 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 29 12:03:29.069079 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:03:29.069098 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 29 12:03:29.069148 kernel: Using GB pages for direct mapping Jan 29 12:03:29.069174 kernel: Secure boot disabled Jan 29 12:03:29.069191 kernel: ACPI: Early table checksum verification disabled Jan 29 12:03:29.069207 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 29 12:03:29.069222 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 29 12:03:29.069238 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 29 12:03:29.069264 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 29 12:03:29.069286 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 29 12:03:29.069303 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 29 12:03:29.069322 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 29 12:03:29.069340 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 29 12:03:29.069370 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 29 12:03:29.069389 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 29 12:03:29.069411 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 29 12:03:29.069428 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 29 12:03:29.069455 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 29 12:03:29.069473 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 29 12:03:29.069490 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 29 12:03:29.069509 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 29 12:03:29.069526 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 29 12:03:29.069544 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 29 12:03:29.069568 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 29 12:03:29.069591 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 29 12:03:29.069608 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 12:03:29.069626 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 12:03:29.069644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 12:03:29.069661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 29 12:03:29.069680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 29 12:03:29.069705 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 29 12:03:29.069723 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 29 12:03:29.069740 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 29 12:03:29.069764 kernel: Zone ranges: Jan 29 12:03:29.069788 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:03:29.069806 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 12:03:29.069823 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 29 12:03:29.069841 kernel: Movable zone start for each node Jan 29 12:03:29.069859 kernel: Early memory node ranges Jan 29 12:03:29.069877 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 29 12:03:29.069894 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 29 12:03:29.069911 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 29 12:03:29.069934 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 29 12:03:29.069952 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 29 12:03:29.069970 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 29 12:03:29.069988 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:03:29.070004 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 29 12:03:29.070021 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 29 12:03:29.070039 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 29 12:03:29.070057 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 29 12:03:29.070073 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 12:03:29.070095 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:03:29.070112 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:03:29.070164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:03:29.070182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:03:29.070198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:03:29.070216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:03:29.070234 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:03:29.070250 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:03:29.070267 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 29 12:03:29.070290 kernel: Booting paravirtualized kernel on KVM Jan 29 12:03:29.070307 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:03:29.070324 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:03:29.070350 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:03:29.070377 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:03:29.070394 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:03:29.070412 kernel: kvm-guest: PV spinlocks enabled Jan 29 12:03:29.070430 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 12:03:29.070456 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:29.070480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:03:29.070498 kernel: random: crng init done Jan 29 12:03:29.070515 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 29 12:03:29.070534 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:03:29.070552 kernel: Fallback order for Node 0: 0 Jan 29 12:03:29.070570 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 29 12:03:29.070587 kernel: Policy zone: Normal Jan 29 12:03:29.070605 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:03:29.070627 kernel: software IO TLB: area num 2. Jan 29 12:03:29.070646 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346940K reserved, 0K cma-reserved) Jan 29 12:03:29.070673 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:03:29.070691 kernel: Kernel/User page tables isolation: enabled Jan 29 12:03:29.070709 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:03:29.070725 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:03:29.070742 kernel: Dynamic Preempt: voluntary Jan 29 12:03:29.070766 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:03:29.070791 kernel: rcu: RCU event tracing is enabled. Jan 29 12:03:29.070826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:03:29.070844 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:03:29.070862 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:03:29.070884 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:03:29.070913 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:03:29.070936 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:03:29.070954 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 12:03:29.070971 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:03:29.070989 kernel: Console: colour dummy device 80x25 Jan 29 12:03:29.071011 kernel: printk: console [ttyS0] enabled Jan 29 12:03:29.071045 kernel: ACPI: Core revision 20230628 Jan 29 12:03:29.071072 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:03:29.071098 kernel: x2apic enabled Jan 29 12:03:29.071147 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:03:29.071167 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 29 12:03:29.071190 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 12:03:29.071207 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 29 12:03:29.071230 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 29 12:03:29.071247 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 29 12:03:29.071265 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:03:29.071285 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 12:03:29.071303 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 12:03:29.071319 kernel: Spectre V2 : Mitigation: IBRS Jan 29 12:03:29.071338 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:03:29.071364 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:03:29.071382 kernel: RETBleed: Mitigation: IBRS Jan 29 12:03:29.071406 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 12:03:29.071425 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 29 12:03:29.071445 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 12:03:29.071465 kernel: MDS: Mitigation: Clear CPU buffers Jan 29 12:03:29.071485 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 12:03:29.071504 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 12:03:29.071524 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:03:29.071545 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:03:29.071564 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:03:29.071596 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 29 12:03:29.071615 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:03:29.071634 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:03:29.071655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:03:29.071675 kernel: landlock: Up and running. Jan 29 12:03:29.071694 kernel: SELinux: Initializing. Jan 29 12:03:29.071714 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:03:29.071734 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:03:29.071754 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 29 12:03:29.071778 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:29.071798 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:29.071817 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:29.071834 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 29 12:03:29.071851 kernel: signal: max sigframe size: 1776 Jan 29 12:03:29.071877 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:03:29.071894 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:03:29.071911 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 12:03:29.071928 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:03:29.071950 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:03:29.071968 kernel: .... node #0, CPUs: #1 Jan 29 12:03:29.071988 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 12:03:29.072007 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 12:03:29.072026 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:03:29.072044 kernel: smpboot: Max logical packages: 1 Jan 29 12:03:29.072063 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 29 12:03:29.072081 kernel: devtmpfs: initialized Jan 29 12:03:29.072101 kernel: x86/mm: Memory block size: 128MB Jan 29 12:03:29.072141 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 29 12:03:29.072161 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:03:29.072178 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:03:29.072204 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:03:29.072223 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:03:29.072241 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:03:29.072259 kernel: audit: type=2000 audit(1738152207.836:1): state=initialized audit_enabled=0 res=1 Jan 29 12:03:29.072278 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:03:29.072301 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:03:29.072319 kernel: cpuidle: using governor menu Jan 29 12:03:29.072337 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:03:29.072355 kernel: dca service started, version 1.12.1 Jan 29 12:03:29.072380 kernel: PCI: Using configuration type 1 for base access Jan 29 12:03:29.072399 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 12:03:29.072417 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:03:29.072435 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:03:29.072453 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:03:29.072475 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:03:29.072493 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:03:29.072511 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:03:29.072529 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:03:29.072548 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:03:29.072566 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 12:03:29.072585 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:03:29.072603 kernel: ACPI: Interpreter enabled Jan 29 12:03:29.072621 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:03:29.072643 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:03:29.072662 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:03:29.072680 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 29 12:03:29.072698 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 12:03:29.072715 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:03:29.072975 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:03:29.073235 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 12:03:29.073440 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 12:03:29.073464 kernel: PCI host bridge to bus 0000:00 Jan 29 12:03:29.073643 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:03:29.073810 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:03:29.073976 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:03:29.074164 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 29 12:03:29.074353 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:03:29.074579 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 12:03:29.074792 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 29 12:03:29.075012 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 12:03:29.075235 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 12:03:29.075460 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 29 12:03:29.075660 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 29 12:03:29.075855 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 29 12:03:29.076061 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:03:29.076273 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 29 12:03:29.076484 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 29 12:03:29.076677 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:03:29.076864 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 29 12:03:29.077050 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 29 12:03:29.077081 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:03:29.077100 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:03:29.077143 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:03:29.077164 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:03:29.077183 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 12:03:29.077204 kernel: iommu: Default domain type: Translated Jan 29 12:03:29.077223 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:03:29.077243 kernel: efivars: Registered efivars operations Jan 29 12:03:29.077263 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:03:29.077288 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:03:29.077307 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 29 12:03:29.077327 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 29 12:03:29.077346 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 29 12:03:29.077373 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 29 12:03:29.077392 kernel: vgaarb: loaded Jan 29 12:03:29.077411 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:03:29.077431 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:03:29.077449 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:03:29.077481 kernel: pnp: PnP ACPI init Jan 29 12:03:29.077506 kernel: pnp: PnP ACPI: found 7 devices Jan 29 12:03:29.077526 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:03:29.077546 kernel: NET: Registered PF_INET protocol family Jan 29 12:03:29.077566 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 12:03:29.077586 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 29 12:03:29.077614 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:03:29.077638 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:03:29.077658 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 12:03:29.077682 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 29 12:03:29.077701 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 12:03:29.077721 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 12:03:29.077740 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:03:29.077760 kernel: NET: Registered PF_XDP protocol family Jan 29 12:03:29.077947 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:03:29.078185 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:03:29.078362 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:03:29.078542 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 29 12:03:29.078738 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 12:03:29.078765 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:03:29.078786 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:03:29.078805 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 29 12:03:29.078825 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 12:03:29.078845 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 12:03:29.078864 kernel: clocksource: Switched to clocksource tsc Jan 29 12:03:29.078887 kernel: Initialise system trusted keyrings Jan 29 12:03:29.078907 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 29 12:03:29.078927 kernel: Key type asymmetric registered Jan 29 12:03:29.078946 kernel: Asymmetric key parser 'x509' registered Jan 29 12:03:29.078965 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:03:29.078984 kernel: io scheduler mq-deadline registered Jan 29 12:03:29.079003 kernel: io scheduler kyber registered Jan 29 12:03:29.079023 kernel: io scheduler bfq registered Jan 29 12:03:29.079042 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:03:29.079065 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 12:03:29.079283 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 29 12:03:29.079309 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 29 12:03:29.079506 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 29 12:03:29.079532 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 12:03:29.079722 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 29 12:03:29.079748 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:03:29.079768 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:03:29.079788 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 12:03:29.079813 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 29 12:03:29.079832 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 29 12:03:29.080026 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 29 12:03:29.080054 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:03:29.080074 kernel: i8042: Warning: Keylock active Jan 29 12:03:29.080093 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:03:29.080112 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:03:29.080382 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 29 12:03:29.080562 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 12:03:29.080730 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T12:03:28 UTC (1738152208) Jan 29 12:03:29.081216 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 12:03:29.081246 kernel: intel_pstate: CPU model not supported Jan 29 12:03:29.081265 kernel: pstore: Using crash dump compression: deflate Jan 29 12:03:29.081284 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 12:03:29.081303 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:03:29.081322 kernel: Segment Routing with IPv6 Jan 29 12:03:29.081346 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:03:29.081373 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:03:29.081392 kernel: Key type dns_resolver registered Jan 29 12:03:29.081410 kernel: IPI shorthand broadcast: enabled Jan 29 12:03:29.081429 kernel: sched_clock: Marking stable (844003919, 160913271)->(1038609153, -33691963) Jan 29 12:03:29.081448 kernel: registered taskstats version 1 Jan 29 12:03:29.081466 kernel: Loading compiled-in X.509 certificates Jan 29 12:03:29.081484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:03:29.081502 kernel: Key type .fscrypt registered Jan 29 12:03:29.081524 kernel: Key type fscrypt-provisioning registered Jan 29 12:03:29.081542 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:03:29.081560 kernel: ima: No architecture policies found Jan 29 
12:03:29.081579 kernel: clk: Disabling unused clocks Jan 29 12:03:29.081598 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:03:29.081616 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:03:29.081635 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:03:29.081653 kernel: Run /init as init process Jan 29 12:03:29.081676 kernel: with arguments: Jan 29 12:03:29.081694 kernel: /init Jan 29 12:03:29.081712 kernel: with environment: Jan 29 12:03:29.081730 kernel: HOME=/ Jan 29 12:03:29.081747 kernel: TERM=linux Jan 29 12:03:29.081764 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:03:29.081782 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:03:29.081805 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:03:29.081831 systemd[1]: Detected virtualization google. Jan 29 12:03:29.081851 systemd[1]: Detected architecture x86-64. Jan 29 12:03:29.081870 systemd[1]: Running in initrd. Jan 29 12:03:29.081889 systemd[1]: No hostname configured, using default hostname. Jan 29 12:03:29.081907 systemd[1]: Hostname set to . Jan 29 12:03:29.081927 systemd[1]: Initializing machine ID from random generator. Jan 29 12:03:29.081945 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:03:29.081964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:29.081987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:29.082007 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:03:29.082027 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:03:29.082047 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:03:29.082066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:03:29.082088 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:03:29.082107 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:03:29.082146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:29.082165 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:29.082204 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:03:29.082228 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:03:29.082247 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:03:29.082903 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:03:29.082943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:03:29.082965 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:03:29.082985 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:03:29.083004 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jan 29 12:03:29.083024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:29.083046 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:29.083067 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:29.083089 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:03:29.083113 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:03:29.083157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:03:29.083176 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:03:29.083194 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:03:29.083213 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:03:29.083233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:03:29.083254 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:29.083310 systemd-journald[183]: Collecting audit messages is disabled. Jan 29 12:03:29.083369 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:03:29.083390 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:03:29.083412 systemd-journald[183]: Journal started Jan 29 12:03:29.083456 systemd-journald[183]: Runtime Journal (/run/log/journal/15c67c99b2ec464db39876b3264bdaab) is 8.0M, max 148.7M, 140.7M free. Jan 29 12:03:29.090142 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:03:29.094974 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:03:29.104150 systemd-modules-load[184]: Inserted module 'overlay' Jan 29 12:03:29.106761 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:03:29.123294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:03:29.134292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:29.139328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:03:29.156285 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:03:29.156326 kernel: Bridge firewalling registered Jan 29 12:03:29.149290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:29.152585 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 29 12:03:29.156965 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:29.171793 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:03:29.177863 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:29.192347 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:03:29.192910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:29.204506 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:29.213507 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:29.220327 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 12:03:29.227398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:03:29.247529 dracut-cmdline[216]: dracut-dracut-053 Jan 29 12:03:29.251914 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:29.284237 systemd-resolved[217]: Positive Trust Anchors: Jan 29 12:03:29.284743 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:03:29.284958 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:03:29.293376 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 29 12:03:29.298271 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:03:29.305711 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:29.355167 kernel: SCSI subsystem initialized Jan 29 12:03:29.367163 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:03:29.378147 kernel: iscsi: registered transport (tcp) Jan 29 12:03:29.402167 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:03:29.402238 kernel: QLogic iSCSI HBA Driver Jan 29 12:03:29.453363 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:03:29.463290 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:03:29.491251 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:03:29.491333 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:03:29.491361 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:03:29.536158 kernel: raid6: avx2x4 gen() 18328 MB/s Jan 29 12:03:29.553156 kernel: raid6: avx2x2 gen() 18310 MB/s Jan 29 12:03:29.570577 kernel: raid6: avx2x1 gen() 14225 MB/s Jan 29 12:03:29.570621 kernel: raid6: using algorithm avx2x4 gen() 18328 MB/s Jan 29 12:03:29.588513 kernel: raid6: .... xor() 7992 MB/s, rmw enabled Jan 29 12:03:29.588586 kernel: raid6: using avx2x2 recovery algorithm Jan 29 12:03:29.612154 kernel: xor: automatically using best checksumming function avx Jan 29 12:03:29.791164 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:03:29.804059 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:03:29.811350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:29.840556 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 29 12:03:29.847797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 29 12:03:29.861066 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:03:29.895408 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jan 29 12:03:29.932041 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:03:29.946380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:03:30.024919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:30.037349 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:03:30.079471 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:03:30.090675 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:03:30.099234 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:30.101476 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:03:30.121932 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:03:30.174167 kernel: scsi host0: Virtio SCSI HBA Jan 29 12:03:30.187140 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:03:30.197727 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:03:30.230750 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:03:30.230832 kernel: AES CTR mode by8 optimization enabled Jan 29 12:03:30.235140 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 29 12:03:30.238726 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:03:30.239491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:30.250459 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:30.254112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:03:30.254361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:30.256437 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:30.279464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:30.298323 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 29 12:03:30.314276 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 29 12:03:30.314537 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 29 12:03:30.314782 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 29 12:03:30.315192 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 12:03:30.315436 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:03:30.315463 kernel: GPT:17805311 != 25165823 Jan 29 12:03:30.315487 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:03:30.315510 kernel: GPT:17805311 != 25165823 Jan 29 12:03:30.315533 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:03:30.315556 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:30.316017 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 29 12:03:30.314585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:30.324356 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:30.365404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 12:03:30.374291 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (448) Jan 29 12:03:30.377325 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454) Jan 29 12:03:30.407788 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 29 12:03:30.415867 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 29 12:03:30.422723 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 29 12:03:30.422955 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 29 12:03:30.435503 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 29 12:03:30.441458 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:03:30.458895 disk-uuid[552]: Primary Header is updated. Jan 29 12:03:30.458895 disk-uuid[552]: Secondary Entries is updated. Jan 29 12:03:30.458895 disk-uuid[552]: Secondary Header is updated. Jan 29 12:03:30.477164 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:30.501143 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:30.531154 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:31.518373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:31.518452 disk-uuid[553]: The operation has completed successfully. Jan 29 12:03:31.585450 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:03:31.585595 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:03:31.631327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:03:31.662316 sh[570]: Success Jan 29 12:03:31.685180 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 12:03:31.764805 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:03:31.771663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:03:31.798649 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:03:31.840994 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:03:31.841078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:31.841104 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:03:31.857264 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:03:31.857319 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:03:31.890160 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 12:03:31.894414 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:03:31.895375 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:03:31.902299 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Jan 29 12:03:31.947874 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:31.947901 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:31.947917 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:03:31.965657 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 12:03:31.965729 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:03:31.987166 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:31.988410 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:03:31.998472 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:03:32.014368 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:03:32.129645 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:03:32.144848 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:03:32.220519 ignition[649]: Ignition 2.19.0 Jan 29 12:03:32.220540 ignition[649]: Stage: fetch-offline Jan 29 12:03:32.224480 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:03:32.220607 ignition[649]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:32.227494 systemd-networkd[752]: lo: Link UP Jan 29 12:03:32.220623 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:32.227501 systemd-networkd[752]: lo: Gained carrier Jan 29 12:03:32.220779 ignition[649]: parsed url from cmdline: "" Jan 29 12:03:32.229568 systemd-networkd[752]: Enumeration completed Jan 29 12:03:32.220786 ignition[649]: no config URL provided Jan 29 12:03:32.230215 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:32.220794 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:03:32.230222 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:03:32.220815 ignition[649]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:03:32.232274 systemd-networkd[752]: eth0: Link UP Jan 29 12:03:32.220826 ignition[649]: failed to fetch config: resource requires networking Jan 29 12:03:32.232280 systemd-networkd[752]: eth0: Gained carrier Jan 29 12:03:32.221836 ignition[649]: Ignition finished successfully Jan 29 12:03:32.232291 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:32.306873 ignition[763]: Ignition 2.19.0 Jan 29 12:03:32.245437 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:03:32.306882 ignition[763]: Stage: fetch Jan 29 12:03:32.249227 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.18/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 29 12:03:32.307087 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:32.255629 systemd[1]: Reached target network.target - Network. Jan 29 12:03:32.307105 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:32.286333 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 12:03:32.307280 ignition[763]: parsed url from cmdline: "" Jan 29 12:03:32.314990 unknown[763]: fetched base config from "system" Jan 29 12:03:32.307287 ignition[763]: no config URL provided Jan 29 12:03:32.315002 unknown[763]: fetched base config from "system" Jan 29 12:03:32.307298 ignition[763]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:03:32.315012 unknown[763]: fetched user config from "gcp" Jan 29 12:03:32.307312 ignition[763]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:03:32.317455 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:03:32.307349 ignition[763]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 29 12:03:32.336343 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:03:32.310321 ignition[763]: GET result: OK Jan 29 12:03:32.387856 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:03:32.310385 ignition[763]: parsing config with SHA512: 1e8c3b0dcb13d87381f6cea7236e73322b7322fab390259cdbe01193af1390067ba71d846f0166c6cee01ae356a8c955f96eb2dc53af0e6bbb2ee28a401bfa78 Jan 29 12:03:32.396335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:03:32.315625 ignition[763]: fetch: fetch complete Jan 29 12:03:32.437675 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:03:32.315632 ignition[763]: fetch: fetch passed Jan 29 12:03:32.448399 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:03:32.315684 ignition[763]: Ignition finished successfully Jan 29 12:03:32.477254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:03:32.385411 ignition[769]: Ignition 2.19.0 Jan 29 12:03:32.492270 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:03:32.385419 ignition[769]: Stage: kargs Jan 29 12:03:32.509246 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:03:32.385621 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:32.523235 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:03:32.385632 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:32.548318 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:03:32.386657 ignition[769]: kargs: kargs passed Jan 29 12:03:32.386710 ignition[769]: Ignition finished successfully Jan 29 12:03:32.435440 ignition[774]: Ignition 2.19.0 Jan 29 12:03:32.435448 ignition[774]: Stage: disks Jan 29 12:03:32.435625 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:32.435642 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:32.436706 ignition[774]: disks: disks passed Jan 29 12:03:32.436757 ignition[774]: Ignition finished successfully Jan 29 12:03:32.600383 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 12:03:32.806236 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:03:32.824259 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:03:32.959186 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:03:32.960553 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:03:32.969879 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Jan 29 12:03:32.996244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:03:33.012251 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:03:33.036150 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (791) Jan 29 12:03:33.054523 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:33.054594 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:33.054620 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:03:33.055332 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:03:33.100377 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 12:03:33.100416 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:03:33.055413 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:03:33.055456 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:03:33.086243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:03:33.108408 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:03:33.132343 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:03:33.271425 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:03:33.282266 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:03:33.292780 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:03:33.303224 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:03:33.427780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:03:33.456253 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:03:33.483351 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:33.475385 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:03:33.501592 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:03:33.533248 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:03:33.543538 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:03:33.568259 ignition[903]: INFO : Ignition 2.19.0 Jan 29 12:03:33.568259 ignition[903]: INFO : Stage: mount Jan 29 12:03:33.568259 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:33.568259 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:33.568259 ignition[903]: INFO : mount: mount passed Jan 29 12:03:33.568259 ignition[903]: INFO : Ignition finished successfully Jan 29 12:03:33.558285 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:03:33.590354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 12:03:33.659153 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (915) Jan 29 12:03:33.659208 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:33.676079 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:33.676138 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:03:33.697588 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 12:03:33.697650 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:03:33.701039 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:03:33.739557 ignition[932]: INFO : Ignition 2.19.0 Jan 29 12:03:33.739557 ignition[932]: INFO : Stage: files Jan 29 12:03:33.754271 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:33.754271 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:33.754271 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:03:33.754271 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:03:33.754271 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:03:33.754271 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:03:33.754271 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:03:33.754271 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:03:33.754271 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:03:33.754271 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:03:33.751970 unknown[932]: wrote ssh authorized keys file for user: core Jan 29 12:03:33.881312 systemd-networkd[752]: eth0: Gained IPv6LL Jan 29 12:03:36.991954 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:03:37.175133 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:03:37.192272 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 12:03:37.192272 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 12:03:37.490221 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 12:03:37.703865 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:03:37.957250 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 12:03:38.399847 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:03:38.399847 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:03:38.439275 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:03:38.439275 ignition[932]: INFO : files: files passed Jan 29 12:03:38.439275 ignition[932]: INFO : Ignition finished successfully Jan 29 12:03:38.404845 systemd[1]: Finished ignition-files.service - Ignition (files). 
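[Editor's note] The files stage above reports writing /opt/helm-v3.13.2-linux-amd64.tar.gz from get.helm.sh, downloading the kubernetes sysext image, linking /etc/extensions/kubernetes.raw to it, adding SSH keys for the "core" user, and enabling prepare-helm.service. The Ignition config that drove this is not in the log; the following is only an illustrative reconstruction of the shape such a config could take (Ignition spec 3.x-style keys, placeholder SSH key and unit body), emitted as JSON from Python.

import json

# Hypothetical reconstruction -- NOT the config actually used on this node.
config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw"},
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,   # corresponds to "setting preset to enabled" in the log
                "contents": "[Unit]\nDescription=placeholder\n[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}
print(json.dumps(config, indent=2))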
Jan 29 12:03:38.435352 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:03:38.462286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:03:38.504742 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:03:38.645236 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:38.645236 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:38.504859 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:03:38.703258 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:38.525710 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:03:38.549651 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:03:38.577318 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:03:38.651705 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:03:38.651825 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:03:38.659573 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:03:38.693355 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:03:38.713438 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:03:38.719307 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:03:38.771288 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:03:38.797369 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:03:38.831250 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:38.844383 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:38.868479 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:03:38.887445 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:03:38.887634 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:03:38.920456 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:03:38.940387 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:03:38.958470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:03:38.976456 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:03:38.995460 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:03:39.017465 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:03:39.037481 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:03:39.056499 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:03:39.066606 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:03:39.084570 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:03:39.102558 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:03:39.102783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 12:03:39.136564 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:39.146640 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:39.163501 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:03:39.163671 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:39.180506 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:03:39.180714 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:03:39.294280 ignition[984]: INFO : Ignition 2.19.0 Jan 29 12:03:39.294280 ignition[984]: INFO : Stage: umount Jan 29 12:03:39.294280 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:39.294280 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:39.294280 ignition[984]: INFO : umount: umount passed Jan 29 12:03:39.294280 ignition[984]: INFO : Ignition finished successfully Jan 29 12:03:39.216566 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:03:39.216797 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:03:39.226631 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:03:39.226809 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:03:39.253338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:03:39.306409 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:03:39.317254 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:03:39.317500 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:39.352604 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:03:39.352775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:03:39.393872 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:03:39.394855 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:03:39.394970 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:03:39.411821 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:03:39.411928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:03:39.431180 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:03:39.431334 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:03:39.439287 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:03:39.439337 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:03:39.455487 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:03:39.455547 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:03:39.472481 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:03:39.472534 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:03:39.489435 systemd[1]: Stopped target network.target - Network. Jan 29 12:03:39.507403 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:03:39.507461 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:03:39.522455 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:03:39.540381 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 29 12:03:39.544206 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:39.555401 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:03:39.573422 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:03:39.588429 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:03:39.588488 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:03:39.613427 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:03:39.613495 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:03:39.621468 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:03:39.621530 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:03:39.638475 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:03:39.638537 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:03:39.656463 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:03:39.656542 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:03:39.673726 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:03:39.678191 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 29 12:03:39.700438 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:03:39.718732 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:03:39.718855 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:03:39.728196 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:03:39.728439 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:03:39.744618 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:03:39.744771 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:39.783237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:03:39.795194 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:03:40.254283 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 29 12:03:39.795286 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:03:39.807296 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:03:39.807383 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:39.827294 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:03:39.827392 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:39.845280 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:03:39.845374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:39.866442 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:39.890741 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:03:39.890904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:03:39.916283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:03:39.916349 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:39.925452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 12:03:39.925500 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:39.942416 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:03:39.942475 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:03:39.979514 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:03:39.979601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:03:40.005488 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:03:40.005568 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:40.040393 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:03:40.054230 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:03:40.054334 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:40.065333 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:03:40.065413 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:03:40.076315 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:03:40.076396 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:03:40.097350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:03:40.097441 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:40.118749 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:03:40.118873 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:03:40.135794 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:03:40.135904 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:03:40.157590 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:03:40.179362 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:03:40.207811 systemd[1]: Switching root. 
Jan 29 12:03:40.612249 systemd-journald[183]: Journal stopped Jan 29 12:03:29.068542 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:03:29.068584 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:29.068603 kernel: BIOS-provided physical RAM map: Jan 29 12:03:29.068617 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 29 12:03:29.068630 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 29 12:03:29.068644 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 29 12:03:29.068661 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 29 12:03:29.068678 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 29 12:03:29.068692 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 29 12:03:29.068706 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 29 12:03:29.068720 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 29 12:03:29.068735 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 29 12:03:29.068749 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 29 12:03:29.068763 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 29 12:03:29.068784 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 29 12:03:29.068800 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 29 12:03:29.068816 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 29 12:03:29.068839 kernel: NX (Execute Disable) protection: active Jan 29 12:03:29.068855 kernel: APIC: Static calls initialized Jan 29 12:03:29.068871 kernel: efi: EFI v2.7 by EDK II Jan 29 12:03:29.068886 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 29 12:03:29.068902 kernel: SMBIOS 2.4 present. 
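[Editor's note] "Journal stopped" at 12:03:40.612249 closes the initrd journal that opened with the first kernel message at 12:03:29.068542, so the whole initrd phase (kernel init, Ignition fetch/disks/mount/files, switch root) took roughly 11.5 seconds; the log then replays from the beginning as the system journal takes over. A small sketch of that timestamp arithmetic; the year is not present in the log, so 2025 is assumed only to make the parse well-formed.

from datetime import datetime

FMT = "%Y %b %d %H:%M:%S.%f"

first_kernel_msg = datetime.strptime("2025 Jan 29 12:03:29.068542", FMT)
journal_stopped  = datetime.strptime("2025 Jan 29 12:03:40.612249", FMT)

delta = journal_stopped - first_kernel_msg
print(f"initrd phase: {delta.total_seconds():.3f} s")   # ~11.544 s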
Jan 29 12:03:29.068917 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 29 12:03:29.068933 kernel: Hypervisor detected: KVM Jan 29 12:03:29.068952 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:03:29.068967 kernel: kvm-clock: using sched offset of 12132403738 cycles Jan 29 12:03:29.068983 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:03:29.068999 kernel: tsc: Detected 2299.998 MHz processor Jan 29 12:03:29.069015 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:03:29.069031 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:03:29.069047 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 29 12:03:29.069063 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 29 12:03:29.069079 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:03:29.069098 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 29 12:03:29.069148 kernel: Using GB pages for direct mapping Jan 29 12:03:29.069174 kernel: Secure boot disabled Jan 29 12:03:29.069191 kernel: ACPI: Early table checksum verification disabled Jan 29 12:03:29.069207 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 29 12:03:29.069222 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 29 12:03:29.069238 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 29 12:03:29.069264 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 29 12:03:29.069286 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 29 12:03:29.069303 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 29 12:03:29.069322 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 29 12:03:29.069340 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 29 12:03:29.069370 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 29 12:03:29.069389 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 29 12:03:29.069411 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 29 12:03:29.069428 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 29 12:03:29.069455 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 29 12:03:29.069473 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 29 12:03:29.069490 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 29 12:03:29.069509 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 29 12:03:29.069526 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 29 12:03:29.069544 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 29 12:03:29.069568 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 29 12:03:29.069591 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 29 12:03:29.069608 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 12:03:29.069626 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 12:03:29.069644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 12:03:29.069661 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 29 12:03:29.069680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 29 12:03:29.069705 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 29 12:03:29.069723 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 29 12:03:29.069740 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 29 12:03:29.069764 kernel: Zone ranges: Jan 29 12:03:29.069788 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:03:29.069806 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 12:03:29.069823 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 29 12:03:29.069841 kernel: Movable zone start for each node Jan 29 12:03:29.069859 kernel: Early memory node ranges Jan 29 12:03:29.069877 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 29 12:03:29.069894 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 29 12:03:29.069911 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 29 12:03:29.069934 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 29 12:03:29.069952 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 29 12:03:29.069970 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 29 12:03:29.069988 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:03:29.070004 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 29 12:03:29.070021 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 29 12:03:29.070039 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 29 12:03:29.070057 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 29 12:03:29.070073 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 29 12:03:29.070095 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:03:29.070112 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:03:29.070164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:03:29.070182 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:03:29.070198 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:03:29.070216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:03:29.070234 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:03:29.070250 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:03:29.070267 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 29 12:03:29.070290 kernel: Booting paravirtualized kernel on KVM Jan 29 12:03:29.070307 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:03:29.070324 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:03:29.070350 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:03:29.070377 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:03:29.070394 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:03:29.070412 kernel: kvm-guest: PV spinlocks enabled Jan 29 12:03:29.070430 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 12:03:29.070456 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 
rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:29.070480 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:03:29.070498 kernel: random: crng init done Jan 29 12:03:29.070515 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 29 12:03:29.070534 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:03:29.070552 kernel: Fallback order for Node 0: 0 Jan 29 12:03:29.070570 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 29 12:03:29.070587 kernel: Policy zone: Normal Jan 29 12:03:29.070605 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:03:29.070627 kernel: software IO TLB: area num 2. Jan 29 12:03:29.070646 kernel: Memory: 7513384K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 346940K reserved, 0K cma-reserved) Jan 29 12:03:29.070673 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:03:29.070691 kernel: Kernel/User page tables isolation: enabled Jan 29 12:03:29.070709 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:03:29.070725 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:03:29.070742 kernel: Dynamic Preempt: voluntary Jan 29 12:03:29.070766 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:03:29.070791 kernel: rcu: RCU event tracing is enabled. Jan 29 12:03:29.070826 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:03:29.070844 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:03:29.070862 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:03:29.070884 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:03:29.070913 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:03:29.070936 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:03:29.070954 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 12:03:29.070971 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:03:29.070989 kernel: Console: colour dummy device 80x25 Jan 29 12:03:29.071011 kernel: printk: console [ttyS0] enabled Jan 29 12:03:29.071045 kernel: ACPI: Core revision 20230628 Jan 29 12:03:29.071072 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:03:29.071098 kernel: x2apic enabled Jan 29 12:03:29.071147 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:03:29.071167 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 29 12:03:29.071190 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 12:03:29.071207 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Jan 29 12:03:29.071230 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 29 12:03:29.071247 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 29 12:03:29.071265 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:03:29.071285 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 12:03:29.071303 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 12:03:29.071319 kernel: Spectre V2 : Mitigation: IBRS Jan 29 12:03:29.071338 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:03:29.071364 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:03:29.071382 kernel: RETBleed: Mitigation: IBRS Jan 29 12:03:29.071406 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 12:03:29.071425 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 29 12:03:29.071445 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 12:03:29.071465 kernel: MDS: Mitigation: Clear CPU buffers Jan 29 12:03:29.071485 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 12:03:29.071504 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 12:03:29.071524 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:03:29.071545 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:03:29.071564 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:03:29.071596 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 29 12:03:29.071615 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:03:29.071634 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:03:29.071655 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:03:29.071675 kernel: landlock: Up and running. Jan 29 12:03:29.071694 kernel: SELinux: Initializing. Jan 29 12:03:29.071714 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:03:29.071734 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:03:29.071754 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 29 12:03:29.071778 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:29.071798 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:29.071817 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:03:29.071834 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 29 12:03:29.071851 kernel: signal: max sigframe size: 1776 Jan 29 12:03:29.071877 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:03:29.071894 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:03:29.071911 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 12:03:29.071928 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:03:29.071950 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:03:29.071968 kernel: .... node #0, CPUs: #1 Jan 29 12:03:29.071988 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 29 12:03:29.072007 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 29 12:03:29.072026 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:03:29.072044 kernel: smpboot: Max logical packages: 1 Jan 29 12:03:29.072063 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 29 12:03:29.072081 kernel: devtmpfs: initialized Jan 29 12:03:29.072101 kernel: x86/mm: Memory block size: 128MB Jan 29 12:03:29.072141 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 29 12:03:29.072161 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:03:29.072178 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:03:29.072204 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:03:29.072223 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:03:29.072241 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:03:29.072259 kernel: audit: type=2000 audit(1738152207.836:1): state=initialized audit_enabled=0 res=1 Jan 29 12:03:29.072278 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:03:29.072301 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:03:29.072319 kernel: cpuidle: using governor menu Jan 29 12:03:29.072337 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:03:29.072355 kernel: dca service started, version 1.12.1 Jan 29 12:03:29.072380 kernel: PCI: Using configuration type 1 for base access Jan 29 12:03:29.072399 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
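[Editor's note] Two sets of numbers in this replayed boot log can be cross-checked directly: the "Early memory node ranges" listed a little earlier sum to exactly the 7860584K total that the "Memory: 7513384K/7860584K available" line reports, and the 4599.99 / 9199.99 BogoMIPS figures are simply twice the 2299.998 MHz TSC frequency (per CPU, then for both CPUs, printed truncated rather than rounded). A short check, with the ranges copied from the log:

# Inclusive [start, end] ranges from the "Early memory node ranges" lines above.
node_ranges = [
    (0x0000000000001000, 0x0000000000054fff),
    (0x0000000000060000, 0x0000000000097fff),
    (0x0000000000100000, 0x00000000bf8ecfff),
    (0x00000000bfbff000, 0x00000000bffdffff),
    (0x0000000100000000, 0x000000021fffffff),
]

total_bytes = sum(end - start + 1 for start, end in node_ranges)
print(total_bytes // 1024)   # 7860584 -> matches "Memory: .../7860584K available"

tsc_mhz = 2299.998           # "tsc: Detected 2299.998 MHz processor"
per_cpu = 2 * tsc_mhz        # preset lpj=2299998 equals the TSC kHz on this boot
print(f"{int(per_cpu * 100) / 100:.2f}")       # 4599.99 BogoMIPS per CPU
print(f"{int(2 * per_cpu * 100) / 100:.2f}")   # 9199.99 BogoMIPS for 2 CPUs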
Jan 29 12:03:29.072417 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:03:29.072435 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:03:29.072453 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:03:29.072475 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:03:29.072493 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:03:29.072511 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:03:29.072529 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:03:29.072548 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:03:29.072566 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 29 12:03:29.072585 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:03:29.072603 kernel: ACPI: Interpreter enabled Jan 29 12:03:29.072621 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:03:29.072643 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:03:29.072662 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:03:29.072680 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 29 12:03:29.072698 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 29 12:03:29.072715 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:03:29.072975 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:03:29.073235 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 12:03:29.073440 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 12:03:29.073464 kernel: PCI host bridge to bus 0000:00 Jan 29 12:03:29.073643 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:03:29.073810 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:03:29.073976 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 12:03:29.074164 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 29 12:03:29.074353 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:03:29.074579 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 12:03:29.074792 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 29 12:03:29.075012 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 12:03:29.075235 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 29 12:03:29.075460 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 29 12:03:29.075660 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 29 12:03:29.075855 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 29 12:03:29.076061 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:03:29.076273 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 29 12:03:29.076484 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 29 12:03:29.076677 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:03:29.076864 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 29 12:03:29.077050 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 29 12:03:29.077081 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:03:29.077100 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:03:29.077143 
kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:03:29.077164 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:03:29.077183 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 12:03:29.077204 kernel: iommu: Default domain type: Translated Jan 29 12:03:29.077223 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:03:29.077243 kernel: efivars: Registered efivars operations Jan 29 12:03:29.077263 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:03:29.077288 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:03:29.077307 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 29 12:03:29.077327 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 29 12:03:29.077346 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 29 12:03:29.077373 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 29 12:03:29.077392 kernel: vgaarb: loaded Jan 29 12:03:29.077411 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:03:29.077431 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:03:29.077449 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:03:29.077481 kernel: pnp: PnP ACPI init Jan 29 12:03:29.077506 kernel: pnp: PnP ACPI: found 7 devices Jan 29 12:03:29.077526 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:03:29.077546 kernel: NET: Registered PF_INET protocol family Jan 29 12:03:29.077566 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 12:03:29.077586 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 29 12:03:29.077614 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:03:29.077638 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:03:29.077658 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 12:03:29.077682 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 29 12:03:29.077701 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 12:03:29.077721 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 12:03:29.077740 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:03:29.077760 kernel: NET: Registered PF_XDP protocol family Jan 29 12:03:29.077947 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:03:29.078185 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:03:29.078362 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:03:29.078542 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 29 12:03:29.078738 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 12:03:29.078765 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:03:29.078786 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:03:29.078805 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 29 12:03:29.078825 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 12:03:29.078845 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 29 12:03:29.078864 kernel: clocksource: Switched to clocksource tsc Jan 29 12:03:29.078887 kernel: Initialise system trusted keyrings Jan 29 12:03:29.078907 
kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 29 12:03:29.078927 kernel: Key type asymmetric registered Jan 29 12:03:29.078946 kernel: Asymmetric key parser 'x509' registered Jan 29 12:03:29.078965 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:03:29.078984 kernel: io scheduler mq-deadline registered Jan 29 12:03:29.079003 kernel: io scheduler kyber registered Jan 29 12:03:29.079023 kernel: io scheduler bfq registered Jan 29 12:03:29.079042 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:03:29.079065 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 12:03:29.079283 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 29 12:03:29.079309 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jan 29 12:03:29.079506 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 29 12:03:29.079532 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 12:03:29.079722 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 29 12:03:29.079748 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:03:29.079768 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:03:29.079788 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 12:03:29.079813 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 29 12:03:29.079832 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 29 12:03:29.080026 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 29 12:03:29.080054 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:03:29.080074 kernel: i8042: Warning: Keylock active Jan 29 12:03:29.080093 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:03:29.080112 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:03:29.080382 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 29 12:03:29.080562 kernel: rtc_cmos 00:00: registered as rtc0 Jan 29 12:03:29.080730 kernel: rtc_cmos 00:00: setting system clock to 2025-01-29T12:03:28 UTC (1738152208) Jan 29 12:03:29.081216 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 29 12:03:29.081246 kernel: intel_pstate: CPU model not supported Jan 29 12:03:29.081265 kernel: pstore: Using crash dump compression: deflate Jan 29 12:03:29.081284 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 12:03:29.081303 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:03:29.081322 kernel: Segment Routing with IPv6 Jan 29 12:03:29.081346 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:03:29.081373 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:03:29.081392 kernel: Key type dns_resolver registered Jan 29 12:03:29.081410 kernel: IPI shorthand broadcast: enabled Jan 29 12:03:29.081429 kernel: sched_clock: Marking stable (844003919, 160913271)->(1038609153, -33691963) Jan 29 12:03:29.081448 kernel: registered taskstats version 1 Jan 29 12:03:29.081466 kernel: Loading compiled-in X.509 certificates Jan 29 12:03:29.081484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:03:29.081502 kernel: Key type .fscrypt registered Jan 29 12:03:29.081524 kernel: Key type fscrypt-provisioning registered Jan 29 12:03:29.081542 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:03:29.081560 kernel: ima: No architecture policies found Jan 29 
12:03:29.081579 kernel: clk: Disabling unused clocks Jan 29 12:03:29.081598 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:03:29.081616 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:03:29.081635 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:03:29.081653 kernel: Run /init as init process Jan 29 12:03:29.081676 kernel: with arguments: Jan 29 12:03:29.081694 kernel: /init Jan 29 12:03:29.081712 kernel: with environment: Jan 29 12:03:29.081730 kernel: HOME=/ Jan 29 12:03:29.081747 kernel: TERM=linux Jan 29 12:03:29.081764 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:03:29.081782 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:03:29.081805 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:03:29.081831 systemd[1]: Detected virtualization google. Jan 29 12:03:29.081851 systemd[1]: Detected architecture x86-64. Jan 29 12:03:29.081870 systemd[1]: Running in initrd. Jan 29 12:03:29.081889 systemd[1]: No hostname configured, using default hostname. Jan 29 12:03:29.081907 systemd[1]: Hostname set to . Jan 29 12:03:29.081927 systemd[1]: Initializing machine ID from random generator. Jan 29 12:03:29.081945 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:03:29.081964 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:29.081987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:29.082007 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:03:29.082027 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:03:29.082047 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:03:29.082066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:03:29.082088 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:03:29.082107 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:03:29.082146 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:29.082165 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:29.082204 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:03:29.082228 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:03:29.082247 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:03:29.082903 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:03:29.082943 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:03:29.082965 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:03:29.082985 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:03:29.083004 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
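[Editor's note] The "Expecting device dev-disk-by\x2dlabel-OEM.device ..." lines above use systemd's unit-name escaping for device paths: "/" becomes "-" and other characters (including a literal "-") become "\xNN". A rough sketch of the reverse mapping, roughly what systemd-escape --unescape --path does; this simplified version ignores corner cases such as escaped leading dots.

import re

def unescape_device_unit(unit: str) -> str:
    """Turn e.g. 'dev-disk-by\\x2dlabel-EFI\\x2dSYSTEM.device' into '/dev/disk/by-label/EFI-SYSTEM'."""
    name = unit.removesuffix(".device")
    path = "/" + name.replace("-", "/")            # '-' separators are path slashes
    return re.sub(r"\\x([0-9a-fA-F]{2})",          # '\xNN' escapes are literal characters
                  lambda m: chr(int(m.group(1), 16)), path)

for unit in [
    r"dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device",
    r"dev-disk-by\x2dlabel-OEM.device",
    r"dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device",
    r"dev-mapper-usr.device",
]:
    print(unit, "->", unescape_device_unit(unit))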
Jan 29 12:03:29.083024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:29.083046 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:29.083067 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:29.083089 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:03:29.083113 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:03:29.083157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:03:29.083176 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:03:29.083194 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:03:29.083213 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:03:29.083233 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:03:29.083254 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:29.083310 systemd-journald[183]: Collecting audit messages is disabled. Jan 29 12:03:29.083369 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:03:29.083390 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:03:29.083412 systemd-journald[183]: Journal started Jan 29 12:03:29.083456 systemd-journald[183]: Runtime Journal (/run/log/journal/15c67c99b2ec464db39876b3264bdaab) is 8.0M, max 148.7M, 140.7M free. Jan 29 12:03:29.090142 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:03:29.094974 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:03:29.104150 systemd-modules-load[184]: Inserted module 'overlay' Jan 29 12:03:29.106761 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:03:29.123294 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:03:29.134292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:29.139328 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:03:29.156285 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:03:29.156326 kernel: Bridge firewalling registered Jan 29 12:03:29.149290 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:29.152585 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 29 12:03:29.156965 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:29.171793 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:03:29.177863 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:29.192347 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:03:29.192910 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:29.204506 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:29.213507 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:29.220327 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 12:03:29.227398 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:03:29.247529 dracut-cmdline[216]: dracut-dracut-053 Jan 29 12:03:29.251914 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:03:29.284237 systemd-resolved[217]: Positive Trust Anchors: Jan 29 12:03:29.284743 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:03:29.284958 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:03:29.293376 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 29 12:03:29.298271 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:03:29.305711 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:29.355167 kernel: SCSI subsystem initialized Jan 29 12:03:29.367163 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:03:29.378147 kernel: iscsi: registered transport (tcp) Jan 29 12:03:29.402167 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:03:29.402238 kernel: QLogic iSCSI HBA Driver Jan 29 12:03:29.453363 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:03:29.463290 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:03:29.491251 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:03:29.491333 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:03:29.491361 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:03:29.536158 kernel: raid6: avx2x4 gen() 18328 MB/s Jan 29 12:03:29.553156 kernel: raid6: avx2x2 gen() 18310 MB/s Jan 29 12:03:29.570577 kernel: raid6: avx2x1 gen() 14225 MB/s Jan 29 12:03:29.570621 kernel: raid6: using algorithm avx2x4 gen() 18328 MB/s Jan 29 12:03:29.588513 kernel: raid6: .... xor() 7992 MB/s, rmw enabled Jan 29 12:03:29.588586 kernel: raid6: using avx2x2 recovery algorithm Jan 29 12:03:29.612154 kernel: xor: automatically using best checksumming function avx Jan 29 12:03:29.791164 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:03:29.804059 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:03:29.811350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:29.840556 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 29 12:03:29.847797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
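[Editor's note] The dracut-cmdline entry above repeats the full kernel command line (root=LABEL=ROOT, flatcar.oem.id=gce, verity.usrhash=..., with rootflags and mount.usrflags appearing twice). A small sketch that splits such a command line into (key, value) pairs, keeping duplicates rather than collapsing them; the string below is copied from that log entry.

import shlex

cmdline = (
    "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
    "mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 "
    "flatcar.first_boot=detected flatcar.oem.id=gce "
    "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681"
)

def parse_cmdline(line: str):
    """Split a kernel command line into (key, value) pairs; bare flags get value None."""
    pairs = []
    for token in shlex.split(line):
        key, sep, value = token.partition("=")   # only the first '=' separates key from value
        pairs.append((key, value if sep else None))
    return pairs

for key, value in parse_cmdline(cmdline):
    print(f"{key} = {value}")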
Jan 29 12:03:29.861066 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:03:29.895408 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Jan 29 12:03:29.932041 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:03:29.946380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:03:30.024919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:30.037349 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:03:30.079471 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:03:30.090675 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:03:30.099234 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:30.101476 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:03:30.121932 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:03:30.174167 kernel: scsi host0: Virtio SCSI HBA Jan 29 12:03:30.187140 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:03:30.197727 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:03:30.230750 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:03:30.230832 kernel: AES CTR mode by8 optimization enabled Jan 29 12:03:30.235140 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 29 12:03:30.238726 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:03:30.239491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:30.250459 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:30.254112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:03:30.254361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:30.256437 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:30.279464 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:30.298323 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 29 12:03:30.314276 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 29 12:03:30.314537 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 29 12:03:30.314782 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 29 12:03:30.315192 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 29 12:03:30.315436 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:03:30.315463 kernel: GPT:17805311 != 25165823 Jan 29 12:03:30.315487 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:03:30.315510 kernel: GPT:17805311 != 25165823 Jan 29 12:03:30.315533 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:03:30.315556 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:30.316017 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 29 12:03:30.314585 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:30.324356 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:03:30.365404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
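[Editor's note] The "GPT:17805311 != 25165823" warnings above are a size mismatch: the backup GPT header sits at LBA 17805311, i.e. the partition table was written for a 17805312-sector (~8.5 GiB) disk, while the attached persistent disk has 25165824 512-byte sectors (the "12.9 GB/12.0 GiB" in the sd message). The reading that the image is simply smaller than the provisioned disk and the GPT gets fixed up later is an inference from these numbers, not something the log states. The arithmetic:

SECTOR = 512

disk_sectors  = 25_165_824      # "sd 0:0:1:0: [sda] 25165824 512-byte logical blocks"
image_sectors = 17_805_311 + 1  # backup GPT header at LBA 17805311 -> table covers 17805312 sectors

print(disk_sectors  * SECTOR / 2**30)                     # 12.0  GiB provisioned disk
print(image_sectors * SECTOR / 2**30)                     # ~8.49 GiB size the GPT was written for
print((disk_sectors - image_sectors) * SECTOR / 2**30)    # ~3.51 GiB beyond the current backup GPT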
Jan 29 12:03:30.374291 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (448) Jan 29 12:03:30.377325 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454) Jan 29 12:03:30.407788 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 29 12:03:30.415867 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 29 12:03:30.422723 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 29 12:03:30.422955 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 29 12:03:30.435503 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 29 12:03:30.441458 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:03:30.458895 disk-uuid[552]: Primary Header is updated. Jan 29 12:03:30.458895 disk-uuid[552]: Secondary Entries is updated. Jan 29 12:03:30.458895 disk-uuid[552]: Secondary Header is updated. Jan 29 12:03:30.477164 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:30.501143 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:30.531154 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:31.518373 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:03:31.518452 disk-uuid[553]: The operation has completed successfully. Jan 29 12:03:31.585450 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:03:31.585595 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:03:31.631327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:03:31.662316 sh[570]: Success Jan 29 12:03:31.685180 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 12:03:31.764805 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:03:31.771663 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:03:31.798649 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:03:31.840994 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:03:31.841078 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:31.841104 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:03:31.857264 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:03:31.857319 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:03:31.890160 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 12:03:31.894414 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:03:31.895375 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:03:31.902299 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
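verity-setup above maps the read-only /usr partition through dm-verity and validates it against the verity.usrhash root hash passed on the kernel command line, with the kernel reporting that it uses the sha256-avx2 implementation. The snippet below is only a toy illustration of the underlying hash-tree idea (hash fixed-size blocks, then hash the concatenated digests into a single root); the real dm-verity format adds a salt, a superblock and a multi-level tree, so this does not reproduce the actual root hash.

    # Toy sketch of a hash tree: per-block SHA-256 digests rolled up into one root hash.
    # Illustrative only; dm-verity's on-disk format is more involved.
    import hashlib

    BLOCK = 4096

    def toy_root_hash(data: bytes) -> str:
        leaves = [hashlib.sha256(data[i:i + BLOCK]).digest()
                  for i in range(0, len(data), BLOCK)]
        return hashlib.sha256(b"".join(leaves)).hexdigest()

    print(toy_root_hash(b"\x00" * 3 * BLOCK))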
Jan 29 12:03:31.947874 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:31.947901 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:31.947917 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:03:31.965657 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 12:03:31.965729 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:03:31.987166 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:31.988410 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:03:31.998472 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:03:32.014368 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:03:32.129645 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:03:32.144848 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:03:32.220519 ignition[649]: Ignition 2.19.0 Jan 29 12:03:32.220540 ignition[649]: Stage: fetch-offline Jan 29 12:03:32.224480 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:03:32.220607 ignition[649]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:32.227494 systemd-networkd[752]: lo: Link UP Jan 29 12:03:32.220623 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:32.227501 systemd-networkd[752]: lo: Gained carrier Jan 29 12:03:32.220779 ignition[649]: parsed url from cmdline: "" Jan 29 12:03:32.229568 systemd-networkd[752]: Enumeration completed Jan 29 12:03:32.220786 ignition[649]: no config URL provided Jan 29 12:03:32.230215 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:32.220794 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:03:32.230222 systemd-networkd[752]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:03:32.220815 ignition[649]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:03:32.232274 systemd-networkd[752]: eth0: Link UP Jan 29 12:03:32.220826 ignition[649]: failed to fetch config: resource requires networking Jan 29 12:03:32.232280 systemd-networkd[752]: eth0: Gained carrier Jan 29 12:03:32.221836 ignition[649]: Ignition finished successfully Jan 29 12:03:32.232291 systemd-networkd[752]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:32.306873 ignition[763]: Ignition 2.19.0 Jan 29 12:03:32.245437 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:03:32.306882 ignition[763]: Stage: fetch Jan 29 12:03:32.249227 systemd-networkd[752]: eth0: DHCPv4 address 10.128.0.18/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 29 12:03:32.307087 ignition[763]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:32.255629 systemd[1]: Reached target network.target - Network. Jan 29 12:03:32.307105 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:32.286333 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 12:03:32.307280 ignition[763]: parsed url from cmdline: "" Jan 29 12:03:32.314990 unknown[763]: fetched base config from "system" Jan 29 12:03:32.307287 ignition[763]: no config URL provided Jan 29 12:03:32.315002 unknown[763]: fetched base config from "system" Jan 29 12:03:32.307298 ignition[763]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:03:32.315012 unknown[763]: fetched user config from "gcp" Jan 29 12:03:32.307312 ignition[763]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:03:32.317455 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:03:32.307349 ignition[763]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 29 12:03:32.336343 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:03:32.310321 ignition[763]: GET result: OK Jan 29 12:03:32.387856 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:03:32.310385 ignition[763]: parsing config with SHA512: 1e8c3b0dcb13d87381f6cea7236e73322b7322fab390259cdbe01193af1390067ba71d846f0166c6cee01ae356a8c955f96eb2dc53af0e6bbb2ee28a401bfa78 Jan 29 12:03:32.396335 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:03:32.315625 ignition[763]: fetch: fetch complete Jan 29 12:03:32.437675 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:03:32.315632 ignition[763]: fetch: fetch passed Jan 29 12:03:32.448399 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:03:32.315684 ignition[763]: Ignition finished successfully Jan 29 12:03:32.477254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:03:32.385411 ignition[769]: Ignition 2.19.0 Jan 29 12:03:32.492270 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:03:32.385419 ignition[769]: Stage: kargs Jan 29 12:03:32.509246 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:03:32.385621 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:32.523235 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:03:32.385632 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:32.548318 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:03:32.386657 ignition[769]: kargs: kargs passed Jan 29 12:03:32.386710 ignition[769]: Ignition finished successfully Jan 29 12:03:32.435440 ignition[774]: Ignition 2.19.0 Jan 29 12:03:32.435448 ignition[774]: Stage: disks Jan 29 12:03:32.435625 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:32.435642 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:32.436706 ignition[774]: disks: disks passed Jan 29 12:03:32.436757 ignition[774]: Ignition finished successfully Jan 29 12:03:32.600383 systemd-fsck[783]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 12:03:32.806236 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:03:32.824259 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:03:32.959186 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:03:32.960553 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:03:32.969879 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
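The fetch stage above retrieves the user-provided config from the GCE metadata server and logs a SHA512 of the payload before parsing it. A rough stand-in for that request, written with urllib, is shown below; the Metadata-Flavor header is required by the metadata server, the call only succeeds from inside a GCE instance, and this is not Ignition's actual implementation.

    # Approximation of the metadata fetch logged above (GET .../attributes/user-data).
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/computeMetadata/v1/"
           "instance/attributes/user-data")

    req = urllib.request.Request(URL, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req, timeout=10) as resp:   # works only on a GCE VM
        body = resp.read()

    print("SHA512:", hashlib.sha512(body).hexdigest())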
Jan 29 12:03:32.996244 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:03:33.012251 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:03:33.036150 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (791) Jan 29 12:03:33.054523 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:33.054594 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:33.054620 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:03:33.055332 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:03:33.100377 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 12:03:33.100416 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:03:33.055413 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:03:33.055456 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:03:33.086243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:03:33.108408 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:03:33.132343 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:03:33.271425 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:03:33.282266 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:03:33.292780 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:03:33.303224 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:03:33.427780 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:03:33.456253 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:03:33.483351 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:33.475385 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:03:33.501592 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:03:33.533248 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:03:33.543538 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:03:33.568259 ignition[903]: INFO : Ignition 2.19.0 Jan 29 12:03:33.568259 ignition[903]: INFO : Stage: mount Jan 29 12:03:33.568259 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:33.568259 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:33.568259 ignition[903]: INFO : mount: mount passed Jan 29 12:03:33.568259 ignition[903]: INFO : Ignition finished successfully Jan 29 12:03:33.558285 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:03:33.590354 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 29 12:03:33.659153 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (915) Jan 29 12:03:33.659208 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:03:33.676079 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:03:33.676138 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:03:33.697588 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 12:03:33.697650 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:03:33.701039 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:03:33.739557 ignition[932]: INFO : Ignition 2.19.0 Jan 29 12:03:33.739557 ignition[932]: INFO : Stage: files Jan 29 12:03:33.754271 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:33.754271 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:33.754271 ignition[932]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:03:33.754271 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:03:33.754271 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:03:33.754271 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:03:33.754271 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:03:33.754271 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:03:33.754271 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:03:33.754271 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:03:33.751970 unknown[932]: wrote ssh authorized keys file for user: core Jan 29 12:03:33.881312 systemd-networkd[752]: eth0: Gained IPv6LL Jan 29 12:03:36.991954 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:03:37.175133 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:03:37.192272 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 12:03:37.192272 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jan 29 12:03:37.490221 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 29 12:03:37.703865 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: 
createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:03:37.728243 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:03:37.957250 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 29 12:03:38.399847 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:03:38.399847 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:03:38.439275 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:03:38.439275 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:03:38.439275 ignition[932]: INFO : files: files passed Jan 29 12:03:38.439275 ignition[932]: INFO : Ignition finished successfully Jan 29 12:03:38.404845 systemd[1]: Finished ignition-files.service - Ignition (files). 
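Everything the files stage does above is driven by the fetched Ignition config: remote tarballs for helm and cilium, several YAML files, /etc/flatcar/update.conf, a kubernetes sysext image plus its symlink, and a prepare-helm.service unit whose preset is set to enabled. The sketch below builds an approximate spec-3.x config covering two of those operations and prints it as JSON. Field names follow the published Ignition schema, but this is not the config this instance actually fetched, and the unit contents are an illustrative stub.

    # Hand-written approximation of an Ignition (spec 3.x) config producing file writes
    # and a unit preset similar to the operations logged above. Not the real config.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
                {"path": "/opt/bin/cilium.tar.gz",
                 "contents": {"source": "https://github.com/cilium/cilium-cli/releases/"
                                        "download/v0.12.12/cilium-linux-amd64.tar.gz"}},
            ]
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,
                 "contents": "[Unit]\nDescription=Illustrative stub for prepare-helm\n"}
            ]
        },
    }

    print(json.dumps(config, indent=2))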
Jan 29 12:03:38.435352 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:03:38.462286 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:03:38.504742 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:03:38.645236 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:38.645236 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:38.504859 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:03:38.703258 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:03:38.525710 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:03:38.549651 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:03:38.577318 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:03:38.651705 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:03:38.651825 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:03:38.659573 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:03:38.693355 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:03:38.713438 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:03:38.719307 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:03:38.771288 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:03:38.797369 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:03:38.831250 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:38.844383 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:38.868479 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:03:38.887445 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:03:38.887634 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:03:38.920456 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:03:38.940387 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:03:38.958470 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:03:38.976456 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:03:38.995460 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:03:39.017465 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:03:39.037481 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:03:39.056499 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:03:39.066606 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:03:39.084570 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:03:39.102558 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:03:39.102783 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 12:03:39.136564 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:39.146640 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:39.163501 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:03:39.163671 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:39.180506 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:03:39.180714 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:03:39.294280 ignition[984]: INFO : Ignition 2.19.0 Jan 29 12:03:39.294280 ignition[984]: INFO : Stage: umount Jan 29 12:03:39.294280 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:03:39.294280 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 29 12:03:39.294280 ignition[984]: INFO : umount: umount passed Jan 29 12:03:39.294280 ignition[984]: INFO : Ignition finished successfully Jan 29 12:03:39.216566 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:03:39.216797 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:03:39.226631 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:03:39.226809 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:03:39.253338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:03:39.306409 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:03:39.317254 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:03:39.317500 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:39.352604 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:03:39.352775 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:03:39.393872 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:03:39.394855 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:03:39.394970 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:03:39.411821 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:03:39.411928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:03:39.431180 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:03:39.431334 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:03:39.439287 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:03:39.439337 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:03:39.455487 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:03:39.455547 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:03:39.472481 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:03:39.472534 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:03:39.489435 systemd[1]: Stopped target network.target - Network. Jan 29 12:03:39.507403 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:03:39.507461 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:03:39.522455 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:03:39.540381 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 29 12:03:39.544206 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:39.555401 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:03:39.573422 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:03:39.588429 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:03:39.588488 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:03:39.613427 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:03:39.613495 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:03:39.621468 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:03:39.621530 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:03:39.638475 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:03:39.638537 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:03:39.656463 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:03:39.656542 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:03:39.673726 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:03:39.678191 systemd-networkd[752]: eth0: DHCPv6 lease lost Jan 29 12:03:39.700438 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:03:39.718732 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:03:39.718855 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:03:39.728196 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:03:39.728439 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:03:39.744618 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:03:39.744771 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:39.783237 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:03:39.795194 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:03:40.254283 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 29 12:03:39.795286 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:03:39.807296 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:03:39.807383 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:39.827294 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:03:39.827392 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:39.845280 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:03:39.845374 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:39.866442 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:39.890741 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:03:39.890904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:03:39.916283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:03:39.916349 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:39.925452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 12:03:39.925500 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:39.942416 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:03:39.942475 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:03:39.979514 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:03:39.979601 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:03:40.005488 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:03:40.005568 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:03:40.040393 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:03:40.054230 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:03:40.054334 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:40.065333 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:03:40.065413 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:03:40.076315 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:03:40.076396 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:03:40.097350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:03:40.097441 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:40.118749 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:03:40.118873 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:03:40.135794 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:03:40.135904 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:03:40.157590 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:03:40.179362 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:03:40.207811 systemd[1]: Switching root. Jan 29 12:03:40.612249 systemd-journald[183]: Journal stopped Jan 29 12:03:43.009625 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:03:43.009675 kernel: SELinux: policy capability open_perms=1 Jan 29 12:03:43.009697 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:03:43.009714 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:03:43.009731 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:03:43.009749 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:03:43.009769 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:03:43.009792 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:03:43.009810 kernel: audit: type=1403 audit(1738152220.955:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:03:43.009831 systemd[1]: Successfully loaded SELinux policy in 89.744ms. Jan 29 12:03:43.009854 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.257ms. 
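After the switch into the real root, the kernel lists the policy capabilities enabled by the loaded SELinux policy (network_peer_controls, open_perms, and so on). On hosts where selinuxfs is mounted at /sys/fs/selinux those flags can be read back from the policy_capabilities directory; the path below follows the standard selinuxfs layout and is assumed rather than taken from this log.

    # Read back SELinux policy capabilities via selinuxfs (path assumed: /sys/fs/selinux).
    from pathlib import Path

    caps_dir = Path("/sys/fs/selinux/policy_capabilities")
    if caps_dir.is_dir():
        for cap in sorted(caps_dir.iterdir()):
            print(f"{cap.name}={cap.read_text().strip()}")
    else:
        print("selinuxfs not available on this host")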
Jan 29 12:03:43.009875 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:03:43.009895 systemd[1]: Detected virtualization google. Jan 29 12:03:43.009915 systemd[1]: Detected architecture x86-64. Jan 29 12:03:43.009940 systemd[1]: Detected first boot. Jan 29 12:03:43.009962 systemd[1]: Initializing machine ID from random generator. Jan 29 12:03:43.009983 zram_generator::config[1025]: No configuration found. Jan 29 12:03:43.010005 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:03:43.010026 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 12:03:43.010058 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 12:03:43.010080 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 12:03:43.010104 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:03:43.010137 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:03:43.010158 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:03:43.010181 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:03:43.010202 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:03:43.010228 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:03:43.010250 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:03:43.010271 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:03:43.010292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:03:43.010313 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:03:43.010334 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:03:43.010355 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:03:43.010377 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:03:43.010401 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:03:43.010423 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:03:43.010446 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:03:43.010467 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 12:03:43.010488 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:03:43.010510 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:03:43.010538 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:03:43.010561 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:03:43.010584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:03:43.010610 systemd[1]: Reached target slices.target - Slice Units. 
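The systemd 255 banner near the start of this block records the build-time feature set as +NAME/-NAME tokens (plus the default cgroup hierarchy). A small illustrative parser, fed the exact string from the log, splits that into compiled-in and compiled-out features:

    # Split the systemd feature banner logged above into enabled (+) and disabled (-) features.
    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
              "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
              "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
              "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT")

    enabled  = [t[1:] for t in banner.split() if t.startswith("+")]
    disabled = [t[1:] for t in banner.split() if t.startswith("-")]
    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))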
Jan 29 12:03:43.010632 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:03:43.010654 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:03:43.010675 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:03:43.010697 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:03:43.010719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:03:43.010741 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:03:43.010768 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:03:43.010790 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:03:43.010818 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:03:43.010840 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:03:43.010862 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:43.010889 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:03:43.010911 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:03:43.010933 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:03:43.010956 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:03:43.010979 systemd[1]: Reached target machines.target - Containers. Jan 29 12:03:43.011001 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:03:43.011024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:43.011053 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:03:43.011081 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:03:43.011103 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:03:43.011138 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:03:43.011161 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:03:43.011183 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:03:43.011205 kernel: ACPI: bus type drm_connector registered Jan 29 12:03:43.011226 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:03:43.011249 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:03:43.011275 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:03:43.011297 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:03:43.011319 kernel: fuse: init (API version 7.39) Jan 29 12:03:43.011340 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:03:43.011361 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 12:03:43.011383 kernel: loop: module loaded Jan 29 12:03:43.011404 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:03:43.011426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 29 12:03:43.011474 systemd-journald[1112]: Collecting audit messages is disabled. Jan 29 12:03:43.011522 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:03:43.011545 systemd-journald[1112]: Journal started Jan 29 12:03:43.011591 systemd-journald[1112]: Runtime Journal (/run/log/journal/ac6a17518a4e4968ad88868ec5f27167) is 8.0M, max 148.7M, 140.7M free. Jan 29 12:03:41.798430 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:03:41.820731 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 12:03:41.821311 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:03:43.041155 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:03:43.067148 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:03:43.085147 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:03:43.085231 systemd[1]: Stopped verity-setup.service. Jan 29 12:03:43.116143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:43.125172 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:03:43.136697 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:03:43.147519 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:03:43.158538 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:03:43.168489 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:03:43.178465 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:03:43.188416 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:03:43.198557 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:03:43.209559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:03:43.220603 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:03:43.220837 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:03:43.232574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:03:43.232803 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:03:43.244534 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:03:43.244753 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:03:43.254533 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:03:43.254745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:03:43.266520 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:03:43.266745 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:03:43.276541 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:03:43.276765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:03:43.286549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:03:43.297528 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:03:43.308541 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 29 12:03:43.319539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:03:43.342738 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:03:43.364266 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:03:43.376559 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:03:43.386291 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:03:43.386360 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:03:43.397594 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:03:43.421373 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:03:43.444386 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:03:43.454425 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:43.460447 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:03:43.481356 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:03:43.493265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:03:43.503206 systemd-journald[1112]: Time spent on flushing to /var/log/journal/ac6a17518a4e4968ad88868ec5f27167 is 74.245ms for 930 entries. Jan 29 12:03:43.503206 systemd-journald[1112]: System Journal (/var/log/journal/ac6a17518a4e4968ad88868ec5f27167) is 8.0M, max 584.8M, 576.8M free. Jan 29 12:03:43.606819 systemd-journald[1112]: Received client request to flush runtime journal. Jan 29 12:03:43.606885 kernel: loop0: detected capacity change from 0 to 142488 Jan 29 12:03:43.502774 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:03:43.519281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:03:43.526327 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:03:43.545380 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:03:43.564336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:03:43.583679 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:03:43.600266 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:03:43.617669 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:03:43.629753 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:03:43.641689 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:03:43.656973 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:03:43.669759 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:03:43.682176 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Jan 29 12:03:43.682209 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. 
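The journald flush statistics in this block (74.245 ms to flush 930 entries to the persistent journal) work out to roughly 80 microseconds per entry, comfortably within the runtime and system journal size caps also reported here. The arithmetic:

    # Average per-entry flush cost from the journald figures logged above.
    flush_ms, entries = 74.245, 930
    print(f"{flush_ms / entries * 1000:.1f} microseconds per entry")   # ~79.8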
Jan 29 12:03:43.702927 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:03:43.715291 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:03:43.725147 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:03:43.741621 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:03:43.760565 kernel: loop1: detected capacity change from 0 to 140768 Jan 29 12:03:43.766613 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:03:43.775375 udevadm[1146]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 12:03:43.778755 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:03:43.783396 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:03:43.862852 kernel: loop2: detected capacity change from 0 to 54824 Jan 29 12:03:43.879906 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:03:43.900692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:03:43.934299 kernel: loop3: detected capacity change from 0 to 210664 Jan 29 12:03:43.956986 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 29 12:03:43.957021 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 29 12:03:43.966409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:03:44.075258 kernel: loop4: detected capacity change from 0 to 142488 Jan 29 12:03:44.147350 kernel: loop5: detected capacity change from 0 to 140768 Jan 29 12:03:44.196860 kernel: loop6: detected capacity change from 0 to 54824 Jan 29 12:03:44.242212 kernel: loop7: detected capacity change from 0 to 210664 Jan 29 12:03:44.280829 (sd-merge)[1171]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 29 12:03:44.281756 (sd-merge)[1171]: Merged extensions into '/usr'. Jan 29 12:03:44.290874 systemd[1]: Reloading requested from client PID 1143 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:03:44.291285 systemd[1]: Reloading... Jan 29 12:03:44.392157 zram_generator::config[1193]: No configuration found. Jan 29 12:03:44.621215 ldconfig[1138]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:03:44.701261 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:44.805906 systemd[1]: Reloading finished in 513 ms. Jan 29 12:03:44.841058 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:03:44.851733 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:03:44.878424 systemd[1]: Starting ensure-sysext.service... Jan 29 12:03:44.895348 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:03:44.912283 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:03:44.912305 systemd[1]: Reloading... Jan 29 12:03:44.941545 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
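Before sd-merge overlays the extension images named in this block (containerd-flatcar, docker-flatcar, kubernetes, oem-gce) onto /usr, systemd-sysext checks each image's extension-release metadata against the host's os-release; broadly, the extension must declare an ID that matches the host (or _any). The sketch below approximates that check for an unpacked, directory-based extension. Paths and matching rules are simplified relative to what systemd-sysext actually implements, and the example path in the final comment is hypothetical.

    # Simplified version of the sysext compatibility check: does the extension's
    # extension-release file declare an ID matching the host os-release (or "_any")?
    from pathlib import Path

    def read_env_file(path: Path) -> dict:
        env = {}
        for line in path.read_text().splitlines():
            if "=" in line and not line.startswith("#"):
                key, _, value = line.partition("=")
                env[key.strip()] = value.strip().strip('"')
        return env

    def extension_matches(extension_root: Path,
                          host_os_release: Path = Path("/etc/os-release")) -> bool:
        host = read_env_file(host_os_release)
        rel_dir = extension_root / "usr/lib/extension-release.d"
        return any(read_env_file(rel).get("ID") in ("_any", host.get("ID"))
                   for rel in rel_dir.glob("extension-release.*"))

    # e.g. extension_matches(Path("/run/extensions/kubernetes"))  # hypothetical path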
Jan 29 12:03:44.942235 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:03:44.943640 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:03:44.944456 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 29 12:03:44.944682 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 29 12:03:44.950198 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:03:44.950400 systemd-tmpfiles[1238]: Skipping /boot Jan 29 12:03:44.968083 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:03:44.968110 systemd-tmpfiles[1238]: Skipping /boot Jan 29 12:03:45.028185 zram_generator::config[1265]: No configuration found. Jan 29 12:03:45.167755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:45.232930 systemd[1]: Reloading finished in 319 ms. Jan 29 12:03:45.252871 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:03:45.268793 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:03:45.292397 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:45.310701 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:03:45.336563 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:03:45.354355 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:03:45.371458 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:03:45.388330 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:03:45.395792 augenrules[1328]: No rules Jan 29 12:03:45.400256 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:45.411582 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:03:45.438853 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Jan 29 12:03:45.443579 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.444636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:45.450794 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:03:45.470482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:03:45.488792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:03:45.498895 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:45.507524 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:03:45.524476 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:03:45.535210 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 29 12:03:45.538844 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:03:45.551320 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:03:45.564965 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:03:45.577006 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:03:45.578212 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:03:45.591022 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:03:45.592322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:03:45.603977 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:03:45.605207 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:03:45.618231 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:03:45.656282 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:03:45.689092 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 12:03:45.695055 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.695873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:03:45.704350 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:03:45.720358 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:03:45.735362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:03:45.748993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:03:45.764427 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 29 12:03:45.773378 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:03:45.782350 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:03:45.791271 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:03:45.802276 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:03:45.802320 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:03:45.804521 systemd[1]: Finished ensure-sysext.service. Jan 29 12:03:45.813712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:03:45.815218 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:03:45.826702 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:03:45.828201 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:03:45.838777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:03:45.839245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:03:45.850713 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:03:45.851468 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 29 12:03:45.874148 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 29 12:03:45.874234 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1356) Jan 29 12:03:45.884142 kernel: ACPI: button: Power Button [PWRF] Jan 29 12:03:45.907690 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jan 29 12:03:45.925142 kernel: ACPI: button: Sleep Button [SLPF] Jan 29 12:03:45.933705 systemd-resolved[1321]: Positive Trust Anchors: Jan 29 12:03:45.934179 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:03:45.934348 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:03:45.935509 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 29 12:03:45.962148 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 29 12:03:46.011398 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Jan 29 12:03:46.011444 kernel: EDAC MC: Ver: 3.0.0 Jan 29 12:03:45.972316 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 29 12:03:45.976785 systemd-resolved[1321]: Defaulting to hostname 'linux'. Jan 29 12:03:45.987186 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:03:45.987300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:03:45.987546 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:03:45.997357 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:03:46.062499 systemd-networkd[1382]: lo: Link UP Jan 29 12:03:46.062513 systemd-networkd[1382]: lo: Gained carrier Jan 29 12:03:46.066971 systemd-networkd[1382]: Enumeration completed Jan 29 12:03:46.067136 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:03:46.067761 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:46.067769 systemd-networkd[1382]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:03:46.068479 systemd-networkd[1382]: eth0: Link UP Jan 29 12:03:46.068486 systemd-networkd[1382]: eth0: Gained carrier Jan 29 12:03:46.068511 systemd-networkd[1382]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:03:46.077364 systemd[1]: Reached target network.target - Network. Jan 29 12:03:46.078205 systemd-networkd[1382]: eth0: DHCPv4 address 10.128.0.18/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 29 12:03:46.095045 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
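[Editor's note] systemd-networkd reports above that eth0 matched /usr/lib/systemd/network/zz-default.network and obtained 10.128.0.18/32 over DHCP from the metadata server. The contents of that file are not shown in the log; a catch-all DHCP .network file of this kind typically looks roughly like the sketch below (an assumption, not a copy of the file shipped on this image):

    # zz-default.network  (sketch of a catch-all DHCP configuration)
    [Match]
    # Match any interface not claimed by an earlier .network file
    Name=*

    [Network]
    DHCP=yes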
Jan 29 12:03:46.105449 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 29 12:03:46.133340 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 29 12:03:46.144177 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:03:46.160319 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:03:46.177775 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:03:46.187823 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:03:46.199614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:03:46.220495 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:03:46.238019 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:03:46.267333 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:03:46.267932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:03:46.272413 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:03:46.288149 lvm[1419]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:03:46.310353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:03:46.322927 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:03:46.332375 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:03:46.343265 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:03:46.354409 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:03:46.364348 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:03:46.375226 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:03:46.386213 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:03:46.386266 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:03:46.394201 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:03:46.402903 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:03:46.414739 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:03:46.427081 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:03:46.438071 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:03:46.449534 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:03:46.459973 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:03:46.469238 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:03:46.477300 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:03:46.477355 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:03:46.489261 systemd[1]: Starting containerd.service - containerd container runtime... 
Jan 29 12:03:46.501805 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:03:46.518303 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:03:46.535441 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:03:46.560414 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:03:46.571262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:03:46.575091 jq[1429]: false Jan 29 12:03:46.582354 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:03:46.604761 systemd[1]: Started ntpd.service - Network Time Service. Jan 29 12:03:46.610211 extend-filesystems[1430]: Found loop4 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found loop5 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found loop6 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found loop7 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found sda Jan 29 12:03:46.628354 extend-filesystems[1430]: Found sda1 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found sda2 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found sda3 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found usr Jan 29 12:03:46.628354 extend-filesystems[1430]: Found sda4 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found sda6 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found sda7 Jan 29 12:03:46.628354 extend-filesystems[1430]: Found sda9 Jan 29 12:03:46.628354 extend-filesystems[1430]: Checking size of /dev/sda9 Jan 29 12:03:46.789303 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 29 12:03:46.789359 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 29 12:03:46.789390 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1347) Jan 29 12:03:46.789460 coreos-metadata[1427]: Jan 29 12:03:46.621 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 29 12:03:46.789460 coreos-metadata[1427]: Jan 29 12:03:46.627 INFO Fetch successful Jan 29 12:03:46.789460 coreos-metadata[1427]: Jan 29 12:03:46.627 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 29 12:03:46.789460 coreos-metadata[1427]: Jan 29 12:03:46.628 INFO Fetch successful Jan 29 12:03:46.789460 coreos-metadata[1427]: Jan 29 12:03:46.628 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 29 12:03:46.789460 coreos-metadata[1427]: Jan 29 12:03:46.629 INFO Fetch successful Jan 29 12:03:46.789460 coreos-metadata[1427]: Jan 29 12:03:46.629 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 29 12:03:46.789460 coreos-metadata[1427]: Jan 29 12:03:46.630 INFO Fetch successful Jan 29 12:03:46.622848 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
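[Editor's note] The coreos-metadata entries above poll the GCE metadata server at 169.254.169.254 for the hostname, external IP, internal IP, and machine type. The same endpoints can be queried by hand; the GCE metadata API requires the Metadata-Flavor header, and curl is assumed to be available on the host:

    # Fetch the instance hostname the same way the metadata agent does
    curl -s -H "Metadata-Flavor: Google" \
      http://169.254.169.254/computeMetadata/v1/instance/hostname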
Jan 29 12:03:46.685106 ntpd[1435]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 12:03:46.794812 extend-filesystems[1430]: Resized partition /dev/sda9 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:52 UTC 2025 (1): Starting Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: ---------------------------------------------------- Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: corporation. Support and training for ntp-4 are Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: available at https://www.nwtime.org/support Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: ---------------------------------------------------- Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: proto: precision = 0.084 usec (-23) Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: basedate set to 2025-01-17 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: gps base set to 2025-01-19 (week 2350) Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: Listen normally on 3 eth0 10.128.0.18:123 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: Listen normally on 4 lo [::1]:123 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: bind(21) AF_INET6 fe80::4001:aff:fe80:12%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:12%2#123 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: failed to init interface for address fe80::4001:aff:fe80:12%2 Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: Listening on routing socket on fd #21 for interface updates Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:46.808320 ntpd[1435]: 29 Jan 12:03:46 ntpd[1435]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:46.644244 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:03:46.685171 ntpd[1435]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 29 12:03:46.817948 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:03:46.817948 extend-filesystems[1450]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 29 12:03:46.817948 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 29 12:03:46.817948 extend-filesystems[1450]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 29 12:03:46.736342 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
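[Editor's note] The resize2fs output above shows extend-filesystems growing the root filesystem on /dev/sda9 from 1617920 to 2538491 4k blocks while it is mounted. Done by hand, the equivalent online resize is a short sequence; growpart (from cloud-utils) is an assumption and is only needed if the partition itself has not already been enlarged:

    # Optionally grow partition 9 to the end of the disk, then resize ext4 online
    growpart /dev/sda 9
    resize2fs /dev/sda9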
Jan 29 12:03:46.685186 ntpd[1435]: ---------------------------------------------------- Jan 29 12:03:46.904694 extend-filesystems[1430]: Resized filesystem in /dev/sda9 Jan 29 12:03:46.764361 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:03:46.685200 ntpd[1435]: ntp-4 is maintained by Network Time Foundation, Jan 29 12:03:46.774907 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 29 12:03:46.685213 ntpd[1435]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 29 12:03:46.775449 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:03:46.685226 ntpd[1435]: corporation. Support and training for ntp-4 are Jan 29 12:03:46.914323 update_engine[1460]: I20250129 12:03:46.843802 1460 main.cc:92] Flatcar Update Engine starting Jan 29 12:03:46.914323 update_engine[1460]: I20250129 12:03:46.857494 1460 update_check_scheduler.cc:74] Next update check in 7m44s Jan 29 12:03:46.781943 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:03:46.685239 ntpd[1435]: available at https://www.nwtime.org/support Jan 29 12:03:46.914899 jq[1461]: true Jan 29 12:03:46.814397 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:03:46.685252 ntpd[1435]: ---------------------------------------------------- Jan 29 12:03:46.828483 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:03:46.690881 ntpd[1435]: proto: precision = 0.084 usec (-23) Jan 29 12:03:46.852677 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:03:46.691444 dbus-daemon[1428]: [system] SELinux support is enabled Jan 29 12:03:46.855189 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:03:46.692532 ntpd[1435]: basedate set to 2025-01-17 Jan 29 12:03:46.855675 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:03:46.692553 ntpd[1435]: gps base set to 2025-01-19 (week 2350) Jan 29 12:03:46.855940 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:03:46.698510 dbus-daemon[1428]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1382 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 29 12:03:46.881654 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:03:46.703553 ntpd[1435]: Listen and drop on 0 v6wildcard [::]:123 Jan 29 12:03:46.881894 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:03:46.703614 ntpd[1435]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 29 12:03:46.897609 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:03:46.704003 ntpd[1435]: Listen normally on 2 lo 127.0.0.1:123 Jan 29 12:03:46.897848 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 12:03:46.704060 ntpd[1435]: Listen normally on 3 eth0 10.128.0.18:123 Jan 29 12:03:46.919619 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 12:03:46.704148 ntpd[1435]: Listen normally on 4 lo [::1]:123 Jan 29 12:03:46.919646 systemd-logind[1456]: Watching system buttons on /dev/input/event2 (Sleep Button) Jan 29 12:03:46.704211 ntpd[1435]: bind(21) AF_INET6 fe80::4001:aff:fe80:12%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 12:03:46.919675 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:03:46.704243 ntpd[1435]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:12%2#123 Jan 29 12:03:46.920774 systemd-logind[1456]: New seat seat0. Jan 29 12:03:46.704264 ntpd[1435]: failed to init interface for address fe80::4001:aff:fe80:12%2 Jan 29 12:03:46.704305 ntpd[1435]: Listening on routing socket on fd #21 for interface updates Jan 29 12:03:46.709740 ntpd[1435]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:46.709772 ntpd[1435]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 29 12:03:46.926065 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:03:46.993223 jq[1465]: true Jan 29 12:03:46.985678 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:03:46.995292 dbus-daemon[1428]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 29 12:03:47.003765 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:03:47.045248 tar[1464]: linux-amd64/helm Jan 29 12:03:47.054989 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:03:47.072455 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:03:47.072760 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:03:47.073019 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:03:47.099488 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 29 12:03:47.107275 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:03:47.107548 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:03:47.132647 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:03:47.132484 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:03:47.151809 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:03:47.180480 systemd[1]: Starting sshkeys.service... Jan 29 12:03:47.241022 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 12:03:47.264575 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 12:03:47.285812 dbus-daemon[1428]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 29 12:03:47.286021 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 29 12:03:47.287612 dbus-daemon[1428]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1496 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 29 12:03:47.310605 systemd[1]: Starting polkit.service - Authorization Manager... Jan 29 12:03:47.334191 coreos-metadata[1500]: Jan 29 12:03:47.333 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 29 12:03:47.342807 coreos-metadata[1500]: Jan 29 12:03:47.342 INFO Fetch failed with 404: resource not found Jan 29 12:03:47.342807 coreos-metadata[1500]: Jan 29 12:03:47.342 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 29 12:03:47.344523 coreos-metadata[1500]: Jan 29 12:03:47.344 INFO Fetch successful Jan 29 12:03:47.344523 coreos-metadata[1500]: Jan 29 12:03:47.344 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 29 12:03:47.347794 coreos-metadata[1500]: Jan 29 12:03:47.347 INFO Fetch failed with 404: resource not found Jan 29 12:03:47.347794 coreos-metadata[1500]: Jan 29 12:03:47.347 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 29 12:03:47.348573 coreos-metadata[1500]: Jan 29 12:03:47.348 INFO Fetch failed with 404: resource not found Jan 29 12:03:47.348573 coreos-metadata[1500]: Jan 29 12:03:47.348 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 29 12:03:47.349493 coreos-metadata[1500]: Jan 29 12:03:47.349 INFO Fetch successful Jan 29 12:03:47.355547 unknown[1500]: wrote ssh authorized keys file for user: core Jan 29 12:03:47.466589 polkitd[1503]: Started polkitd version 121 Jan 29 12:03:47.469948 update-ssh-keys[1505]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:03:47.469813 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 12:03:47.487339 systemd[1]: Finished sshkeys.service. Jan 29 12:03:47.494455 polkitd[1503]: Loading rules from directory /etc/polkit-1/rules.d Jan 29 12:03:47.494971 polkitd[1503]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 29 12:03:47.500161 polkitd[1503]: Finished loading, compiling and executing 2 rules Jan 29 12:03:47.505940 dbus-daemon[1428]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 29 12:03:47.506831 polkitd[1503]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 29 12:03:47.507351 systemd[1]: Started polkit.service - Authorization Manager. Jan 29 12:03:47.554279 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:03:47.580819 systemd-hostnamed[1496]: Hostname set to (transient) Jan 29 12:03:47.581731 systemd-resolved[1321]: System hostname changed to 'ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal'. 
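[Editor's note] polkitd reports above that it loaded and compiled 2 rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d. Rules for this polkit version are small JavaScript callbacks; a minimal illustrative rule (hypothetical file name and group, not one of the rules on this host) looks like:

    // /etc/polkit-1/rules.d/49-example.rules  (illustrative only)
    polkit.addRule(function(action, subject) {
        // Allow members of the wheel group to perform any action without prompting
        if (subject.isInGroup("wheel")) {
            return polkit.Result.YES;
        }
    });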
Jan 29 12:03:47.685696 ntpd[1435]: bind(24) AF_INET6 fe80::4001:aff:fe80:12%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 12:03:47.687625 ntpd[1435]: 29 Jan 12:03:47 ntpd[1435]: bind(24) AF_INET6 fe80::4001:aff:fe80:12%2#123 flags 0x11 failed: Cannot assign requested address Jan 29 12:03:47.687625 ntpd[1435]: 29 Jan 12:03:47 ntpd[1435]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:12%2#123 Jan 29 12:03:47.687625 ntpd[1435]: 29 Jan 12:03:47 ntpd[1435]: failed to init interface for address fe80::4001:aff:fe80:12%2 Jan 29 12:03:47.685749 ntpd[1435]: unable to create socket on eth0 (6) for fe80::4001:aff:fe80:12%2#123 Jan 29 12:03:47.685771 ntpd[1435]: failed to init interface for address fe80::4001:aff:fe80:12%2 Jan 29 12:03:47.736147 containerd[1467]: time="2025-01-29T12:03:47.734080070Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:03:47.763365 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:03:47.833723 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:03:47.837433 containerd[1467]: time="2025-01-29T12:03:47.837295007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:47.840579 containerd[1467]: time="2025-01-29T12:03:47.840215697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:47.840579 containerd[1467]: time="2025-01-29T12:03:47.840262412Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:03:47.840579 containerd[1467]: time="2025-01-29T12:03:47.840289463Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:03:47.840579 containerd[1467]: time="2025-01-29T12:03:47.840507537Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:03:47.840579 containerd[1467]: time="2025-01-29T12:03:47.840533144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:47.840864 containerd[1467]: time="2025-01-29T12:03:47.840624545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:47.840864 containerd[1467]: time="2025-01-29T12:03:47.840646617Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:47.841484 containerd[1467]: time="2025-01-29T12:03:47.841056307Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:47.841484 containerd[1467]: time="2025-01-29T12:03:47.841090680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:47.841484 containerd[1467]: time="2025-01-29T12:03:47.841131571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:47.841484 containerd[1467]: time="2025-01-29T12:03:47.841164349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:47.841484 containerd[1467]: time="2025-01-29T12:03:47.841296950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:47.841756 containerd[1467]: time="2025-01-29T12:03:47.841585882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:03:47.842276 containerd[1467]: time="2025-01-29T12:03:47.841787737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:03:47.842276 containerd[1467]: time="2025-01-29T12:03:47.841821509Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:03:47.842276 containerd[1467]: time="2025-01-29T12:03:47.841938170Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:03:47.842276 containerd[1467]: time="2025-01-29T12:03:47.842051288Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.849627421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.849698913Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.849731072Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.849760110Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.849785294Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.849982070Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.850368741Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.850532399Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.850559022Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.850581956Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.850605111Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.850625795Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.850660086Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:03:47.851674 containerd[1467]: time="2025-01-29T12:03:47.850684856Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:03:47.851553 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850708491Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850730577Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850751900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850773535Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850808121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850838982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850861224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850895048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850918604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850950819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850972397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.850996397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.851019119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852413 containerd[1467]: time="2025-01-29T12:03:47.851045868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852991 containerd[1467]: time="2025-01-29T12:03:47.851068403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.852991 containerd[1467]: time="2025-01-29T12:03:47.851089696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.851112633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.854961077Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855006453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855029217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855049056Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855138893Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855169487Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855187435Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855207643Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855224698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855245022Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855261101Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:03:47.856157 containerd[1467]: time="2025-01-29T12:03:47.855277052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 12:03:47.856786 containerd[1467]: time="2025-01-29T12:03:47.855729534Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:03:47.856786 containerd[1467]: time="2025-01-29T12:03:47.855834345Z" level=info msg="Connect containerd service" Jan 29 12:03:47.856786 containerd[1467]: time="2025-01-29T12:03:47.855890832Z" level=info msg="using legacy CRI server" Jan 29 12:03:47.856786 containerd[1467]: time="2025-01-29T12:03:47.855903608Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:03:47.856786 containerd[1467]: time="2025-01-29T12:03:47.856066796Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:03:47.858047 containerd[1467]: time="2025-01-29T12:03:47.858010801Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:03:47.858389 
containerd[1467]: time="2025-01-29T12:03:47.858326784Z" level=info msg="Start subscribing containerd event" Jan 29 12:03:47.858971 containerd[1467]: time="2025-01-29T12:03:47.858493344Z" level=info msg="Start recovering state" Jan 29 12:03:47.858971 containerd[1467]: time="2025-01-29T12:03:47.858586242Z" level=info msg="Start event monitor" Jan 29 12:03:47.858971 containerd[1467]: time="2025-01-29T12:03:47.858609542Z" level=info msg="Start snapshots syncer" Jan 29 12:03:47.858971 containerd[1467]: time="2025-01-29T12:03:47.858623869Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:03:47.858971 containerd[1467]: time="2025-01-29T12:03:47.858637916Z" level=info msg="Start streaming server" Jan 29 12:03:47.859527 containerd[1467]: time="2025-01-29T12:03:47.859502303Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:03:47.859680 containerd[1467]: time="2025-01-29T12:03:47.859657538Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:03:47.859924 containerd[1467]: time="2025-01-29T12:03:47.859904583Z" level=info msg="containerd successfully booted in 0.128958s" Jan 29 12:03:47.860530 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:03:47.888882 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:03:47.889217 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:03:47.909259 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:03:47.942039 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:03:47.964296 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:03:47.982419 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:03:47.992674 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:03:48.026276 systemd-networkd[1382]: eth0: Gained IPv6LL Jan 29 12:03:48.032042 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:03:48.043935 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:03:48.063491 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:48.083985 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:03:48.101538 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Jan 29 12:03:48.109779 init.sh[1547]: + '[' -e /etc/default/instance_configs.cfg.template ']' Jan 29 12:03:48.113508 init.sh[1547]: + echo -e '[InstanceSetup]\nset_host_keys = false' Jan 29 12:03:48.113508 init.sh[1547]: + /usr/bin/google_instance_setup Jan 29 12:03:48.132035 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:03:48.211657 tar[1464]: linux-amd64/LICENSE Jan 29 12:03:48.211657 tar[1464]: linux-amd64/README.md Jan 29 12:03:48.227956 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:03:48.634504 instance-setup[1551]: INFO Running google_set_multiqueue. Jan 29 12:03:48.654593 instance-setup[1551]: INFO Set channels for eth0 to 2. Jan 29 12:03:48.658582 instance-setup[1551]: INFO Setting /proc/irq/27/smp_affinity_list to 0 for device virtio1. Jan 29 12:03:48.660268 instance-setup[1551]: INFO /proc/irq/27/smp_affinity_list: real affinity 0 Jan 29 12:03:48.661285 instance-setup[1551]: INFO Setting /proc/irq/28/smp_affinity_list to 0 for device virtio1. 
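[Editor's note] Two details from the containerd startup above are worth noting: the CRI config dump shows the runc runtime with SystemdCgroup:true, and CNI setup fails only because /etc/cni/net.d is still empty, which is expected before a network plugin is installed. Expressed as a config.toml fragment, the cgroup setting corresponds to something like the sketch below; this follows the documented containerd v2 config schema and is not a copy of the file on this host:

    # /etc/containerd/config.toml  (fragment)
    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        # Delegate container cgroups to systemd, matching SystemdCgroup:true in the dump above
        SystemdCgroup = true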
Jan 29 12:03:48.663076 instance-setup[1551]: INFO /proc/irq/28/smp_affinity_list: real affinity 0 Jan 29 12:03:48.663187 instance-setup[1551]: INFO Setting /proc/irq/29/smp_affinity_list to 1 for device virtio1. Jan 29 12:03:48.665271 instance-setup[1551]: INFO /proc/irq/29/smp_affinity_list: real affinity 1 Jan 29 12:03:48.665746 instance-setup[1551]: INFO Setting /proc/irq/30/smp_affinity_list to 1 for device virtio1. Jan 29 12:03:48.667360 instance-setup[1551]: INFO /proc/irq/30/smp_affinity_list: real affinity 1 Jan 29 12:03:48.677957 instance-setup[1551]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 29 12:03:48.682427 instance-setup[1551]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Jan 29 12:03:48.684380 instance-setup[1551]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Jan 29 12:03:48.684583 instance-setup[1551]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Jan 29 12:03:48.704046 init.sh[1547]: + /usr/bin/google_metadata_script_runner --script-type startup Jan 29 12:03:48.815424 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:03:48.832607 systemd[1]: Started sshd@0-10.128.0.18:22-147.75.109.163:60798.service - OpenSSH per-connection server daemon (147.75.109.163:60798). Jan 29 12:03:48.880272 startup-script[1590]: INFO Starting startup scripts. Jan 29 12:03:48.886908 startup-script[1590]: INFO No startup scripts found in metadata. Jan 29 12:03:48.886984 startup-script[1590]: INFO Finished running startup scripts. Jan 29 12:03:48.907243 init.sh[1547]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Jan 29 12:03:48.907243 init.sh[1547]: + daemon_pids=() Jan 29 12:03:48.907421 init.sh[1547]: + for d in accounts clock_skew network Jan 29 12:03:48.907648 init.sh[1547]: + daemon_pids+=($!) Jan 29 12:03:48.907717 init.sh[1547]: + for d in accounts clock_skew network Jan 29 12:03:48.908174 init.sh[1547]: + daemon_pids+=($!) Jan 29 12:03:48.908174 init.sh[1547]: + for d in accounts clock_skew network Jan 29 12:03:48.908292 init.sh[1596]: + /usr/bin/google_accounts_daemon Jan 29 12:03:48.908623 init.sh[1547]: + daemon_pids+=($!) Jan 29 12:03:48.908623 init.sh[1547]: + NOTIFY_SOCKET=/run/systemd/notify Jan 29 12:03:48.908623 init.sh[1547]: + /usr/bin/systemd-notify --ready Jan 29 12:03:48.909138 init.sh[1597]: + /usr/bin/google_clock_skew_daemon Jan 29 12:03:48.909874 init.sh[1598]: + /usr/bin/google_network_daemon Jan 29 12:03:48.928737 systemd[1]: Started oem-gce.service - GCE Linux Agent. Jan 29 12:03:48.943088 init.sh[1547]: + wait -n 1596 1597 1598 Jan 29 12:03:49.202911 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 60798 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:03:49.210641 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:49.236655 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:03:49.253247 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:03:49.276200 systemd-logind[1456]: New session 1 of user core. Jan 29 12:03:49.305449 google-clock-skew[1597]: INFO Starting Google Clock Skew daemon. Jan 29 12:03:49.313134 google-clock-skew[1597]: INFO Clock drift token has changed: 0. Jan 29 12:03:49.313207 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
Jan 29 12:03:49.337485 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:03:49.378582 google-networking[1598]: INFO Starting Google Networking daemon. Jan 29 12:03:49.386025 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:03:49.411468 groupadd[1608]: group added to /etc/group: name=google-sudoers, GID=1000 Jan 29 12:03:49.415948 groupadd[1608]: group added to /etc/gshadow: name=google-sudoers Jan 29 12:03:49.500516 groupadd[1608]: new group: name=google-sudoers, GID=1000 Jan 29 12:03:49.536372 google-accounts[1596]: INFO Starting Google Accounts daemon. Jan 29 12:03:49.555643 google-accounts[1596]: WARNING OS Login not installed. Jan 29 12:03:49.559814 google-accounts[1596]: INFO Creating a new user account for 0. Jan 29 12:03:49.568480 init.sh[1624]: useradd: invalid user name '0': use --badname to ignore Jan 29 12:03:49.568796 google-accounts[1596]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Jan 29 12:03:49.604216 systemd[1610]: Queued start job for default target default.target. Jan 29 12:03:49.612441 systemd[1610]: Created slice app.slice - User Application Slice. Jan 29 12:03:49.612489 systemd[1610]: Reached target paths.target - Paths. Jan 29 12:03:49.612516 systemd[1610]: Reached target timers.target - Timers. Jan 29 12:03:49.615199 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:03:49.636507 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:03:49.636685 systemd[1610]: Reached target sockets.target - Sockets. Jan 29 12:03:49.636711 systemd[1610]: Reached target basic.target - Basic System. Jan 29 12:03:49.636770 systemd[1610]: Reached target default.target - Main User Target. Jan 29 12:03:49.636821 systemd[1610]: Startup finished in 234ms. Jan 29 12:03:49.637767 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:03:49.657739 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:03:49.765716 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:49.778017 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:03:49.781627 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:03:49.789410 systemd[1]: Startup finished in 1.015s (kernel) + 12.181s (initrd) + 8.913s (userspace) = 22.110s. Jan 29 12:03:49.909426 systemd[1]: Started sshd@1-10.128.0.18:22-147.75.109.163:60812.service - OpenSSH per-connection server daemon (147.75.109.163:60812). Jan 29 12:03:50.000646 systemd-resolved[1321]: Clock change detected. Flushing caches. Jan 29 12:03:50.001638 google-clock-skew[1597]: INFO Synced system time with hardware clock. Jan 29 12:03:50.289335 sshd[1641]: Accepted publickey for core from 147.75.109.163 port 60812 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:03:50.289768 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:50.297020 systemd-logind[1456]: New session 2 of user core. Jan 29 12:03:50.303510 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:03:50.504490 sshd[1641]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:50.509088 systemd[1]: sshd@1-10.128.0.18:22-147.75.109.163:60812.service: Deactivated successfully. 
Jan 29 12:03:50.512106 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:03:50.513968 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:03:50.516201 systemd-logind[1456]: Removed session 2. Jan 29 12:03:50.560487 systemd[1]: Started sshd@2-10.128.0.18:22-147.75.109.163:60828.service - OpenSSH per-connection server daemon (147.75.109.163:60828). Jan 29 12:03:50.760211 ntpd[1435]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:12%2]:123 Jan 29 12:03:50.760908 ntpd[1435]: 29 Jan 12:03:50 ntpd[1435]: Listen normally on 7 eth0 [fe80::4001:aff:fe80:12%2]:123 Jan 29 12:03:50.828564 kubelet[1634]: E0129 12:03:50.828117 1634 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:03:50.831069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:03:50.831338 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:03:50.831774 systemd[1]: kubelet.service: Consumed 1.247s CPU time. Jan 29 12:03:50.855191 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 60828 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:03:50.856949 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:50.863384 systemd-logind[1456]: New session 3 of user core. Jan 29 12:03:50.870524 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:03:51.063776 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:51.068276 systemd[1]: sshd@2-10.128.0.18:22-147.75.109.163:60828.service: Deactivated successfully. Jan 29 12:03:51.070574 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:03:51.072600 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:03:51.074076 systemd-logind[1456]: Removed session 3. Jan 29 12:03:51.118700 systemd[1]: Started sshd@3-10.128.0.18:22-147.75.109.163:60842.service - OpenSSH per-connection server daemon (147.75.109.163:60842). Jan 29 12:03:51.401597 sshd[1662]: Accepted publickey for core from 147.75.109.163 port 60842 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:03:51.403462 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:51.408726 systemd-logind[1456]: New session 4 of user core. Jan 29 12:03:51.424511 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:03:51.614782 sshd[1662]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:51.619735 systemd[1]: sshd@3-10.128.0.18:22-147.75.109.163:60842.service: Deactivated successfully. Jan 29 12:03:51.621947 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:03:51.622854 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:03:51.624247 systemd-logind[1456]: Removed session 4. Jan 29 12:03:51.669701 systemd[1]: Started sshd@4-10.128.0.18:22-147.75.109.163:60852.service - OpenSSH per-connection server daemon (147.75.109.163:60852). 
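[Editor's note] The kubelet exit above (and its scheduled restarts later in the log) is the expected failure mode before cluster bootstrap: /var/lib/kubelet/config.yaml does not exist yet because it is normally written by kubeadm init/join. For reference, a minimal hand-written KubeletConfiguration of the kind kubeadm generates might look like the sketch below; the field values are assumptions, not recovered from this machine:

    # /var/lib/kubelet/config.yaml  (sketch; normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                    # matches containerd's SystemdCgroup=true above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    authentication:
      anonymous:
        enabled: false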
Jan 29 12:03:51.954909 sshd[1669]: Accepted publickey for core from 147.75.109.163 port 60852 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:03:51.956762 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:51.963148 systemd-logind[1456]: New session 5 of user core. Jan 29 12:03:51.970516 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:03:52.147538 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:03:52.148032 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:52.161096 sudo[1672]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:52.204199 sshd[1669]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:52.209399 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:03:52.210463 systemd[1]: sshd@4-10.128.0.18:22-147.75.109.163:60852.service: Deactivated successfully. Jan 29 12:03:52.212763 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:03:52.213981 systemd-logind[1456]: Removed session 5. Jan 29 12:03:52.262676 systemd[1]: Started sshd@5-10.128.0.18:22-147.75.109.163:60854.service - OpenSSH per-connection server daemon (147.75.109.163:60854). Jan 29 12:03:52.546531 sshd[1677]: Accepted publickey for core from 147.75.109.163 port 60854 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:03:52.548430 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:52.553936 systemd-logind[1456]: New session 6 of user core. Jan 29 12:03:52.564495 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:03:52.724654 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:03:52.725140 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:52.729718 sudo[1681]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:52.742239 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:03:52.742732 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:52.757710 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:52.761581 auditctl[1684]: No rules Jan 29 12:03:52.762070 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:03:52.762350 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:52.765660 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:03:52.803359 augenrules[1702]: No rules Jan 29 12:03:52.804567 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:03:52.805861 sudo[1680]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:52.848891 sshd[1677]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:52.853974 systemd[1]: sshd@5-10.128.0.18:22-147.75.109.163:60854.service: Deactivated successfully. Jan 29 12:03:52.856037 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:03:52.856915 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:03:52.858238 systemd-logind[1456]: Removed session 6. 
Jan 29 12:03:52.904684 systemd[1]: Started sshd@6-10.128.0.18:22-147.75.109.163:60860.service - OpenSSH per-connection server daemon (147.75.109.163:60860). Jan 29 12:03:53.192131 sshd[1710]: Accepted publickey for core from 147.75.109.163 port 60860 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:03:53.194176 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:03:53.200391 systemd-logind[1456]: New session 7 of user core. Jan 29 12:03:53.210538 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:03:53.371917 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:03:53.372416 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:03:53.804697 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:03:53.816907 (dockerd)[1729]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:03:54.243950 dockerd[1729]: time="2025-01-29T12:03:54.243790615Z" level=info msg="Starting up" Jan 29 12:03:54.359550 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3034167763-merged.mount: Deactivated successfully. Jan 29 12:03:54.445037 dockerd[1729]: time="2025-01-29T12:03:54.444791227Z" level=info msg="Loading containers: start." Jan 29 12:03:54.588336 kernel: Initializing XFRM netlink socket Jan 29 12:03:54.687604 systemd-networkd[1382]: docker0: Link UP Jan 29 12:03:54.702030 dockerd[1729]: time="2025-01-29T12:03:54.701980557Z" level=info msg="Loading containers: done." Jan 29 12:03:54.722988 dockerd[1729]: time="2025-01-29T12:03:54.722928266Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:03:54.723210 dockerd[1729]: time="2025-01-29T12:03:54.723041601Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:03:54.723210 dockerd[1729]: time="2025-01-29T12:03:54.723177514Z" level=info msg="Daemon has completed initialization" Jan 29 12:03:54.759862 dockerd[1729]: time="2025-01-29T12:03:54.759751131Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:03:54.760241 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:03:55.788345 containerd[1467]: time="2025-01-29T12:03:55.788271107Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:03:56.543336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066700810.mount: Deactivated successfully. 
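The docker.service warning about DOCKER_CGROUPS, DOCKER_OPTS and friends only means the unit references environment variables that nothing defines; the daemon starts anyway. If you wanted to silence it, one hedged option (not something done in this log) is a drop-in that sets the variables to empty strings:

    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nEnvironment=DOCKER_OPTS= DOCKER_CGROUPS= DOCKER_OPT_BIP= DOCKER_OPT_IPMASQ= DOCKER_OPT_MTU=\n' \
        | sudo tee /etc/systemd/system/docker.service.d/10-env.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker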
Jan 29 12:03:58.224017 containerd[1467]: time="2025-01-29T12:03:58.223947350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:58.225609 containerd[1467]: time="2025-01-29T12:03:58.225551681Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32683640" Jan 29 12:03:58.226824 containerd[1467]: time="2025-01-29T12:03:58.226748338Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:58.230296 containerd[1467]: time="2025-01-29T12:03:58.230229627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:58.231999 containerd[1467]: time="2025-01-29T12:03:58.231777576Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.443435647s" Jan 29 12:03:58.231999 containerd[1467]: time="2025-01-29T12:03:58.231829753Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:03:58.261323 containerd[1467]: time="2025-01-29T12:03:58.261253760Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:04:01.081663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:04:01.094076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:01.349853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:01.356409 (kubelet)[1938]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:01.412185 kubelet[1938]: E0129 12:04:01.412113 1938 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:01.416678 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:01.416927 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:11.667296 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:04:11.675033 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:12.230345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
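The kube-apiserver:v1.30.9 pull above completed in about 2.4 s through containerd's CRI image service. The same pull can be driven by hand over the CRI socket, which is handy for pre-pulling control-plane images; crictl and the default containerd socket path are assumptions here and do not appear in the log:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.30.9
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep kube-apiserver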
Jan 29 12:04:12.244840 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:12.296569 kubelet[1954]: E0129 12:04:12.296495 1954 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:12.298555 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:12.298789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:17.689650 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 29 12:04:22.549291 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:04:22.554993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:22.863804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:22.870012 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:22.921433 kubelet[1973]: E0129 12:04:22.921363 1973 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:22.923375 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:22.923609 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
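"Scheduled restart job, restart counter is at N" comes from the unit's Restart= policy: systemd keeps relaunching kubelet roughly every ten seconds until the config file appears. A sketch of inspecting the policy and the counter (NRestarts needs systemd 235 or newer):

    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
    journalctl -u kubelet.service --no-pager -n 20   # the tail of the failures shown above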
Jan 29 12:04:28.341820 containerd[1467]: time="2025-01-29T12:04:28.341741015Z" level=error msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" failed" error="failed to pull and unpack image \"registry.k8s.io/kube-controller-manager:v1.30.9\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://registry.k8s.io/v2/kube-controller-manager/manifests/sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\": dial tcp 34.96.108.209:443: i/o timeout" Jan 29 12:04:28.342768 containerd[1467]: time="2025-01-29T12:04:28.341874544Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=0" Jan 29 12:04:28.372733 containerd[1467]: time="2025-01-29T12:04:28.372657771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:04:29.929257 containerd[1467]: time="2025-01-29T12:04:29.929188502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:29.930867 containerd[1467]: time="2025-01-29T12:04:29.930794094Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29607679" Jan 29 12:04:29.932202 containerd[1467]: time="2025-01-29T12:04:29.932110931Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:29.935719 containerd[1467]: time="2025-01-29T12:04:29.935646830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:29.937244 containerd[1467]: time="2025-01-29T12:04:29.937060082Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.564352703s" Jan 29 12:04:29.937244 containerd[1467]: time="2025-01-29T12:04:29.937111887Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:04:29.967628 containerd[1467]: time="2025-01-29T12:04:29.967579559Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:04:31.031413 containerd[1467]: time="2025-01-29T12:04:31.031339816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:31.033075 containerd[1467]: time="2025-01-29T12:04:31.033007348Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17784980" Jan 29 12:04:31.034136 containerd[1467]: time="2025-01-29T12:04:31.034062921Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:31.037779 containerd[1467]: time="2025-01-29T12:04:31.037708939Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:31.039324 containerd[1467]: time="2025-01-29T12:04:31.039112326Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.071477971s" Jan 29 12:04:31.039324 containerd[1467]: time="2025-01-29T12:04:31.039160444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:04:31.069375 containerd[1467]: time="2025-01-29T12:04:31.069243202Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:04:31.767440 update_engine[1460]: I20250129 12:04:31.767359 1460 update_attempter.cc:509] Updating boot flags... Jan 29 12:04:31.870227 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2016) Jan 29 12:04:32.012021 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2016) Jan 29 12:04:32.400576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021275022.mount: Deactivated successfully. Jan 29 12:04:32.963432 containerd[1467]: time="2025-01-29T12:04:32.963363353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:32.964639 containerd[1467]: time="2025-01-29T12:04:32.964570189Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29060232" Jan 29 12:04:32.966024 containerd[1467]: time="2025-01-29T12:04:32.965951475Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:32.968723 containerd[1467]: time="2025-01-29T12:04:32.968658112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:32.969820 containerd[1467]: time="2025-01-29T12:04:32.969573005Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.900274635s" Jan 29 12:04:32.969820 containerd[1467]: time="2025-01-29T12:04:32.969620626Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:04:32.993172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 12:04:33.001238 containerd[1467]: time="2025-01-29T12:04:33.000938460Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:04:33.003077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 29 12:04:33.282763 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:33.294860 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:04:33.351005 kubelet[2041]: E0129 12:04:33.350932 2041 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:04:33.353929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:04:33.354177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:04:33.563589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount260902110.mount: Deactivated successfully. Jan 29 12:04:34.503446 systemd[1]: Started sshd@7-10.128.0.18:22-194.0.234.38:48320.service - OpenSSH per-connection server daemon (194.0.234.38:48320). Jan 29 12:04:34.599480 containerd[1467]: time="2025-01-29T12:04:34.599412462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.601276 containerd[1467]: time="2025-01-29T12:04:34.601231272Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Jan 29 12:04:34.602411 containerd[1467]: time="2025-01-29T12:04:34.602321484Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.607050 containerd[1467]: time="2025-01-29T12:04:34.606985586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.608996 containerd[1467]: time="2025-01-29T12:04:34.608945655Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.607956444s" Jan 29 12:04:34.609106 containerd[1467]: time="2025-01-29T12:04:34.609001664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:04:34.639261 containerd[1467]: time="2025-01-29T12:04:34.639207757Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:04:34.972986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271771831.mount: Deactivated successfully. 
Jan 29 12:04:34.979449 containerd[1467]: time="2025-01-29T12:04:34.979400899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.980542 containerd[1467]: time="2025-01-29T12:04:34.980473292Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Jan 29 12:04:34.981978 containerd[1467]: time="2025-01-29T12:04:34.981912188Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.984869 containerd[1467]: time="2025-01-29T12:04:34.984833003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:34.986368 containerd[1467]: time="2025-01-29T12:04:34.985890650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 346.627325ms" Jan 29 12:04:34.986368 containerd[1467]: time="2025-01-29T12:04:34.985935883Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 12:04:35.016340 containerd[1467]: time="2025-01-29T12:04:35.016281938Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:04:35.436567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount205579947.mount: Deactivated successfully. 
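The pause:3.9 image pulled above is the sandbox ("infra") image containerd uses for every pod; which version is used comes from containerd's CRI configuration. Assuming a stock containerd 1.7 setup, it can be checked like this (the command and key name are assumptions, not read from this host):

    sudo containerd config dump | grep sandbox_image
    # expected to print something like: sandbox_image = "registry.k8s.io/pause:3.9"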
Jan 29 12:04:36.847733 sshd[2094]: Invalid user vpn from 194.0.234.38 port 48320 Jan 29 12:04:37.592231 containerd[1467]: time="2025-01-29T12:04:37.592163906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:37.593943 containerd[1467]: time="2025-01-29T12:04:37.593881426Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Jan 29 12:04:37.595164 containerd[1467]: time="2025-01-29T12:04:37.595090596Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:37.598846 containerd[1467]: time="2025-01-29T12:04:37.598773884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:37.600802 containerd[1467]: time="2025-01-29T12:04:37.600344475Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.583994174s" Jan 29 12:04:37.600802 containerd[1467]: time="2025-01-29T12:04:37.600398437Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 12:04:38.147136 sshd[2094]: Connection closed by invalid user vpn 194.0.234.38 port 48320 [preauth] Jan 29 12:04:38.149979 systemd[1]: sshd@7-10.128.0.18:22-194.0.234.38:48320.service: Deactivated successfully. Jan 29 12:04:40.740419 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:40.746684 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:40.775861 systemd[1]: Reloading requested from client PID 2218 ('systemctl') (unit session-7.scope)... Jan 29 12:04:40.775878 systemd[1]: Reloading... Jan 29 12:04:40.919369 zram_generator::config[2253]: No configuration found. Jan 29 12:04:41.093904 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:41.195473 systemd[1]: Reloading finished in 418 ms. Jan 29 12:04:41.259841 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:04:41.259971 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:04:41.260343 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:41.268818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:41.478768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:41.487541 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:04:41.543153 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
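The deprecation warnings that follow say --container-runtime-endpoint and --volume-plugin-dir now belong in the KubeletConfiguration file rather than on the command line (--pod-infra-container-image has no config equivalent and will simply be removed). Moving them would look roughly like the snippet below; the field names are from the KubeletConfiguration v1beta1 API, and the values are illustrative placeholders (the flexvolume path matches the one logged further down):

    {
      echo '# KubeletConfiguration fields replacing the deprecated flags'
      echo 'containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"'
      echo 'volumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"'
    } | sudo tee -a /var/lib/kubelet/config.yaml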
Jan 29 12:04:41.543153 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:04:41.543153 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:41.543153 kubelet[2308]: I0129 12:04:41.543267 2308 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:41.965669 kubelet[2308]: I0129 12:04:41.965610 2308 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:04:41.965669 kubelet[2308]: I0129 12:04:41.965645 2308 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:04:41.966023 kubelet[2308]: I0129 12:04:41.965981 2308 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:04:41.993903 kubelet[2308]: I0129 12:04:41.993861 2308 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:41.994847 kubelet[2308]: E0129 12:04:41.994739 2308 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.010768 kubelet[2308]: I0129 12:04:42.010720 2308 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:04:42.011135 kubelet[2308]: I0129 12:04:42.011086 2308 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:04:42.011515 kubelet[2308]: I0129 12:04:42.011124 2308 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:04:42.011716 kubelet[2308]: I0129 12:04:42.011524 2308 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:04:42.011716 kubelet[2308]: I0129 12:04:42.011545 2308 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:04:42.011821 kubelet[2308]: I0129 12:04:42.011743 2308 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:42.013017 kubelet[2308]: I0129 12:04:42.012982 2308 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:04:42.013134 kubelet[2308]: I0129 12:04:42.013022 2308 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:04:42.013134 kubelet[2308]: I0129 12:04:42.013058 2308 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:04:42.013134 kubelet[2308]: I0129 12:04:42.013088 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:04:42.023587 kubelet[2308]: W0129 12:04:42.023169 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.023778 kubelet[2308]: E0129 12:04:42.023758 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.024342 kubelet[2308]: I0129 12:04:42.024062 2308 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.21" apiVersion="v1" Jan 29 12:04:42.024449 kubelet[2308]: W0129 12:04:42.024300 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.024449 kubelet[2308]: E0129 12:04:42.024391 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.026645 kubelet[2308]: I0129 12:04:42.026617 2308 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:04:42.027410 kubelet[2308]: W0129 12:04:42.026809 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:04:42.027941 kubelet[2308]: I0129 12:04:42.027922 2308 server.go:1264] "Started kubelet" Jan 29 12:04:42.029481 kubelet[2308]: I0129 12:04:42.029439 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:04:42.038073 kubelet[2308]: I0129 12:04:42.038019 2308 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:04:42.039788 kubelet[2308]: I0129 12:04:42.039369 2308 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:04:42.040606 kubelet[2308]: I0129 12:04:42.040538 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:04:42.041726 kubelet[2308]: I0129 12:04:42.040799 2308 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:04:42.042607 kubelet[2308]: E0129 12:04:42.042432 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.18:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.18:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal.181f284a97ce7d10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,UID:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,},FirstTimestamp:2025-01-29 12:04:42.027883792 +0000 UTC m=+0.534229282,LastTimestamp:2025-01-29 12:04:42.027883792 +0000 UTC m=+0.534229282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,}" Jan 29 12:04:42.043177 kubelet[2308]: I0129 12:04:42.043156 2308 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:04:42.044892 kubelet[2308]: I0129 12:04:42.044269 2308 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:04:42.044892 kubelet[2308]: I0129 12:04:42.044376 2308 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:04:42.049334 kubelet[2308]: W0129 12:04:42.048688 2308 
reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.049334 kubelet[2308]: E0129 12:04:42.048776 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.049507 kubelet[2308]: E0129 12:04:42.049458 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="200ms" Jan 29 12:04:42.054095 kubelet[2308]: I0129 12:04:42.054060 2308 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:04:42.054228 kubelet[2308]: I0129 12:04:42.054199 2308 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:04:42.058602 kubelet[2308]: I0129 12:04:42.058577 2308 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:04:42.062634 kubelet[2308]: E0129 12:04:42.058973 2308 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:04:42.083249 kubelet[2308]: I0129 12:04:42.083180 2308 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:04:42.085159 kubelet[2308]: I0129 12:04:42.084815 2308 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:04:42.085159 kubelet[2308]: I0129 12:04:42.084839 2308 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:04:42.085159 kubelet[2308]: I0129 12:04:42.084858 2308 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:04:42.085159 kubelet[2308]: E0129 12:04:42.084903 2308 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:04:42.088630 kubelet[2308]: W0129 12:04:42.088596 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.090231 kubelet[2308]: E0129 12:04:42.089043 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:42.092720 kubelet[2308]: I0129 12:04:42.092693 2308 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:04:42.092720 kubelet[2308]: I0129 12:04:42.092718 2308 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:04:42.092959 kubelet[2308]: I0129 12:04:42.092824 2308 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:42.096599 kubelet[2308]: I0129 12:04:42.096567 2308 policy_none.go:49] "None policy: Start" Jan 29 12:04:42.097275 kubelet[2308]: I0129 12:04:42.097206 2308 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:04:42.097275 kubelet[2308]: I0129 12:04:42.097238 2308 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:04:42.107678 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:04:42.123524 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:04:42.128057 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
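The kubepods slices created above are where kubelet, using the systemd cgroup driver ("CgroupDriver":"systemd" in the node config), places pod cgroups, split into burstable and besteffort QoS classes. Two ways to look at the hierarchy once pods are running (illustrative commands):

    systemctl status kubepods.slice --no-pager
    systemd-cgls --unit kubepods-burstable.slice --no-pager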
Jan 29 12:04:42.138359 kubelet[2308]: I0129 12:04:42.138083 2308 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:04:42.138489 kubelet[2308]: I0129 12:04:42.138374 2308 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:04:42.138559 kubelet[2308]: I0129 12:04:42.138544 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:04:42.141233 kubelet[2308]: E0129 12:04:42.141206 2308 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" not found" Jan 29 12:04:42.150049 kubelet[2308]: I0129 12:04:42.150017 2308 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.150487 kubelet[2308]: E0129 12:04:42.150442 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.185901 kubelet[2308]: I0129 12:04:42.185797 2308 topology_manager.go:215] "Topology Admit Handler" podUID="31281151e9459076137410ad5275da45" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.190809 kubelet[2308]: I0129 12:04:42.190767 2308 topology_manager.go:215] "Topology Admit Handler" podUID="ec93f3028ed5e01285be816c167f9d87" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.199093 kubelet[2308]: I0129 12:04:42.198757 2308 topology_manager.go:215] "Topology Admit Handler" podUID="f01e04d2b692b1905f32f9b17eca73fe" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.205277 systemd[1]: Created slice kubepods-burstable-pod31281151e9459076137410ad5275da45.slice - libcontainer container kubepods-burstable-pod31281151e9459076137410ad5275da45.slice. Jan 29 12:04:42.231058 systemd[1]: Created slice kubepods-burstable-podec93f3028ed5e01285be816c167f9d87.slice - libcontainer container kubepods-burstable-podec93f3028ed5e01285be816c167f9d87.slice. Jan 29 12:04:42.242390 systemd[1]: Created slice kubepods-burstable-podf01e04d2b692b1905f32f9b17eca73fe.slice - libcontainer container kubepods-burstable-podf01e04d2b692b1905f32f9b17eca73fe.slice. 
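The three "Topology Admit Handler" entries are the static control-plane pods picked up from the static pod path logged earlier ("Adding static pod path" path="/etc/kubernetes/manifests"). Listing that directory would show one manifest per pod; the file names below are the usual kubeadm ones and are an assumption, not output from this host:

    ls /etc/kubernetes/manifests/
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml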
Jan 29 12:04:42.245135 kubelet[2308]: I0129 12:04:42.245069 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31281151e9459076137410ad5275da45-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"31281151e9459076137410ad5275da45\") " pod="kube-system/kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.245135 kubelet[2308]: I0129 12:04:42.245123 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec93f3028ed5e01285be816c167f9d87-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"ec93f3028ed5e01285be816c167f9d87\") " pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.245424 kubelet[2308]: I0129 12:04:42.245166 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.245424 kubelet[2308]: I0129 12:04:42.245224 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.245424 kubelet[2308]: I0129 12:04:42.245253 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.245424 kubelet[2308]: I0129 12:04:42.245300 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec93f3028ed5e01285be816c167f9d87-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"ec93f3028ed5e01285be816c167f9d87\") " pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.245664 kubelet[2308]: I0129 12:04:42.245368 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec93f3028ed5e01285be816c167f9d87-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"ec93f3028ed5e01285be816c167f9d87\") " pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.245664 kubelet[2308]: I0129 12:04:42.245401 2308 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.245664 kubelet[2308]: I0129 12:04:42.245439 2308 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.250540 kubelet[2308]: E0129 12:04:42.250493 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="400ms" Jan 29 12:04:42.362204 kubelet[2308]: I0129 12:04:42.362142 2308 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.362606 kubelet[2308]: E0129 12:04:42.362555 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.526663 containerd[1467]: time="2025-01-29T12:04:42.526597420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,Uid:31281151e9459076137410ad5275da45,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:42.542172 containerd[1467]: time="2025-01-29T12:04:42.542095494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,Uid:ec93f3028ed5e01285be816c167f9d87,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:42.545816 containerd[1467]: time="2025-01-29T12:04:42.545764915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,Uid:f01e04d2b692b1905f32f9b17eca73fe,Namespace:kube-system,Attempt:0,}" Jan 29 12:04:42.651193 kubelet[2308]: E0129 12:04:42.651133 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="800ms" Jan 29 12:04:42.770479 kubelet[2308]: I0129 12:04:42.770430 2308 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:42.770889 kubelet[2308]: E0129 12:04:42.770834 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 
12:04:42.921482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193853680.mount: Deactivated successfully. Jan 29 12:04:42.932111 containerd[1467]: time="2025-01-29T12:04:42.932047517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:42.933443 containerd[1467]: time="2025-01-29T12:04:42.933387530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:42.934618 containerd[1467]: time="2025-01-29T12:04:42.934554111Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:04:42.935720 containerd[1467]: time="2025-01-29T12:04:42.935676887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:42.937103 containerd[1467]: time="2025-01-29T12:04:42.937043893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 29 12:04:42.938648 containerd[1467]: time="2025-01-29T12:04:42.938608298Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:42.940217 containerd[1467]: time="2025-01-29T12:04:42.940098552Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:04:42.948094 containerd[1467]: time="2025-01-29T12:04:42.948047144Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 401.25969ms" Jan 29 12:04:42.948520 containerd[1467]: time="2025-01-29T12:04:42.948343160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:04:42.953929 containerd[1467]: time="2025-01-29T12:04:42.953888185Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 411.705467ms" Jan 29 12:04:42.961564 containerd[1467]: time="2025-01-29T12:04:42.961514427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 434.824955ms" Jan 29 12:04:43.149446 containerd[1467]: time="2025-01-29T12:04:43.147893085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:43.149446 containerd[1467]: time="2025-01-29T12:04:43.147976772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:43.149446 containerd[1467]: time="2025-01-29T12:04:43.148018404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.149446 containerd[1467]: time="2025-01-29T12:04:43.148190735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.153462 containerd[1467]: time="2025-01-29T12:04:43.153123929Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:43.153462 containerd[1467]: time="2025-01-29T12:04:43.153194543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:43.153462 containerd[1467]: time="2025-01-29T12:04:43.153220383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.157207 containerd[1467]: time="2025-01-29T12:04:43.157043495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.159931 containerd[1467]: time="2025-01-29T12:04:43.159836940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:43.160522 containerd[1467]: time="2025-01-29T12:04:43.160101571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:43.160522 containerd[1467]: time="2025-01-29T12:04:43.160367444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.161248 containerd[1467]: time="2025-01-29T12:04:43.160536818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:43.173364 kubelet[2308]: W0129 12:04:43.172967 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:43.174147 kubelet[2308]: E0129 12:04:43.173993 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:43.201709 systemd[1]: Started cri-containerd-92e2d86f577e77f7c5422d6cb14ca9b10e774caa9711b49f8603c2471dc5f6cb.scope - libcontainer container 92e2d86f577e77f7c5422d6cb14ca9b10e774caa9711b49f8603c2471dc5f6cb. Jan 29 12:04:43.215914 systemd[1]: Started cri-containerd-55bfc3809cb6399eb2b883685e34d273f2202b29efe7896a2695cd3702d30dbf.scope - libcontainer container 55bfc3809cb6399eb2b883685e34d273f2202b29efe7896a2695cd3702d30dbf. 
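The cri-containerd-*.scope units started above are the pod sandboxes for the three static pods, each backed by a runc v2 containerd shim. Once the shims are up they are also visible through the CRI; crictl is assumed to be installed, it is not shown in the log:

    sudo crictl pods                          # lists the kube-apiserver/controller-manager/scheduler sandboxes
    sudo crictl ps -a --name kube-apiserver   # the container created in sandbox 92e2d86f...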
Jan 29 12:04:43.219489 systemd[1]: Started cri-containerd-f409283eca7a4f37c59b0db1a1aa082ba9b43fc3877eb1ba9670d085788b278b.scope - libcontainer container f409283eca7a4f37c59b0db1a1aa082ba9b43fc3877eb1ba9670d085788b278b. Jan 29 12:04:43.296979 containerd[1467]: time="2025-01-29T12:04:43.296741731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,Uid:ec93f3028ed5e01285be816c167f9d87,Namespace:kube-system,Attempt:0,} returns sandbox id \"92e2d86f577e77f7c5422d6cb14ca9b10e774caa9711b49f8603c2471dc5f6cb\"" Jan 29 12:04:43.303779 kubelet[2308]: E0129 12:04:43.303713 2308 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-21291" Jan 29 12:04:43.311270 containerd[1467]: time="2025-01-29T12:04:43.311219677Z" level=info msg="CreateContainer within sandbox \"92e2d86f577e77f7c5422d6cb14ca9b10e774caa9711b49f8603c2471dc5f6cb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:04:43.320526 containerd[1467]: time="2025-01-29T12:04:43.320438278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,Uid:31281151e9459076137410ad5275da45,Namespace:kube-system,Attempt:0,} returns sandbox id \"f409283eca7a4f37c59b0db1a1aa082ba9b43fc3877eb1ba9670d085788b278b\"" Jan 29 12:04:43.323728 kubelet[2308]: E0129 12:04:43.323595 2308 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-21291" Jan 29 12:04:43.326448 containerd[1467]: time="2025-01-29T12:04:43.326033848Z" level=info msg="CreateContainer within sandbox \"f409283eca7a4f37c59b0db1a1aa082ba9b43fc3877eb1ba9670d085788b278b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:04:43.339265 containerd[1467]: time="2025-01-29T12:04:43.339187717Z" level=info msg="CreateContainer within sandbox \"92e2d86f577e77f7c5422d6cb14ca9b10e774caa9711b49f8603c2471dc5f6cb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d2b415a81c29488e97cafef7e858619158906a9ef30d17aef2dd90f520155870\"" Jan 29 12:04:43.340658 containerd[1467]: time="2025-01-29T12:04:43.340338788Z" level=info msg="StartContainer for \"d2b415a81c29488e97cafef7e858619158906a9ef30d17aef2dd90f520155870\"" Jan 29 12:04:43.348016 containerd[1467]: time="2025-01-29T12:04:43.347697949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal,Uid:f01e04d2b692b1905f32f9b17eca73fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"55bfc3809cb6399eb2b883685e34d273f2202b29efe7896a2695cd3702d30dbf\"" Jan 29 12:04:43.352249 kubelet[2308]: E0129 12:04:43.351761 2308 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flat" Jan 29 12:04:43.355096 containerd[1467]: time="2025-01-29T12:04:43.355055012Z" level=info msg="CreateContainer within sandbox \"55bfc3809cb6399eb2b883685e34d273f2202b29efe7896a2695cd3702d30dbf\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:04:43.357490 containerd[1467]: time="2025-01-29T12:04:43.357391999Z" level=info msg="CreateContainer within sandbox \"f409283eca7a4f37c59b0db1a1aa082ba9b43fc3877eb1ba9670d085788b278b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb6e2494b7f85ba79cf29f533d1ea52ec1467493c0088ec06841ac77246317c3\"" Jan 29 12:04:43.358637 containerd[1467]: time="2025-01-29T12:04:43.358600380Z" level=info msg="StartContainer for \"bb6e2494b7f85ba79cf29f533d1ea52ec1467493c0088ec06841ac77246317c3\"" Jan 29 12:04:43.383256 containerd[1467]: time="2025-01-29T12:04:43.382827795Z" level=info msg="CreateContainer within sandbox \"55bfc3809cb6399eb2b883685e34d273f2202b29efe7896a2695cd3702d30dbf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"048a89c10cb31dd9de602ab29591102b3d33ebd498f351eaf5fa4c5d133e3224\"" Jan 29 12:04:43.385106 containerd[1467]: time="2025-01-29T12:04:43.383783362Z" level=info msg="StartContainer for \"048a89c10cb31dd9de602ab29591102b3d33ebd498f351eaf5fa4c5d133e3224\"" Jan 29 12:04:43.394982 systemd[1]: Started cri-containerd-d2b415a81c29488e97cafef7e858619158906a9ef30d17aef2dd90f520155870.scope - libcontainer container d2b415a81c29488e97cafef7e858619158906a9ef30d17aef2dd90f520155870. Jan 29 12:04:43.430091 systemd[1]: Started cri-containerd-bb6e2494b7f85ba79cf29f533d1ea52ec1467493c0088ec06841ac77246317c3.scope - libcontainer container bb6e2494b7f85ba79cf29f533d1ea52ec1467493c0088ec06841ac77246317c3. Jan 29 12:04:43.446277 systemd[1]: Started cri-containerd-048a89c10cb31dd9de602ab29591102b3d33ebd498f351eaf5fa4c5d133e3224.scope - libcontainer container 048a89c10cb31dd9de602ab29591102b3d33ebd498f351eaf5fa4c5d133e3224. 
Jan 29 12:04:43.452964 kubelet[2308]: E0129 12:04:43.452884 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.18:6443: connect: connection refused" interval="1.6s" Jan 29 12:04:43.496873 containerd[1467]: time="2025-01-29T12:04:43.496811388Z" level=info msg="StartContainer for \"d2b415a81c29488e97cafef7e858619158906a9ef30d17aef2dd90f520155870\" returns successfully" Jan 29 12:04:43.507368 kubelet[2308]: W0129 12:04:43.507135 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:43.507368 kubelet[2308]: E0129 12:04:43.507203 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:43.538368 kubelet[2308]: W0129 12:04:43.538039 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:43.538368 kubelet[2308]: E0129 12:04:43.538110 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:43.562078 containerd[1467]: time="2025-01-29T12:04:43.560941241Z" level=info msg="StartContainer for \"048a89c10cb31dd9de602ab29591102b3d33ebd498f351eaf5fa4c5d133e3224\" returns successfully" Jan 29 12:04:43.588604 kubelet[2308]: I0129 12:04:43.588074 2308 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:43.590738 kubelet[2308]: E0129 12:04:43.590681 2308 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.18:6443/api/v1/nodes\": dial tcp 10.128.0.18:6443: connect: connection refused" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:43.591660 containerd[1467]: time="2025-01-29T12:04:43.591580835Z" level=info msg="StartContainer for \"bb6e2494b7f85ba79cf29f533d1ea52ec1467493c0088ec06841ac77246317c3\" returns successfully" Jan 29 12:04:43.615730 kubelet[2308]: W0129 12:04:43.615385 2308 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:43.615730 kubelet[2308]: E0129 12:04:43.615663 2308 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.18:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.18:6443: connect: connection refused Jan 29 12:04:45.198594 kubelet[2308]: I0129 12:04:45.197956 2308 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:46.380133 kubelet[2308]: E0129 12:04:46.380032 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:46.534235 kubelet[2308]: I0129 12:04:46.534109 2308 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:47.026511 kubelet[2308]: I0129 12:04:47.025073 2308 apiserver.go:52] "Watching apiserver" Jan 29 12:04:47.044489 kubelet[2308]: I0129 12:04:47.044444 2308 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:04:47.272347 kubelet[2308]: E0129 12:04:47.270989 2308 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:48.030086 kubelet[2308]: W0129 12:04:48.030036 2308 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 12:04:48.449266 systemd[1]: Reloading requested from client PID 2579 ('systemctl') (unit session-7.scope)... Jan 29 12:04:48.449287 systemd[1]: Reloading... Jan 29 12:04:48.589377 zram_generator::config[2619]: No configuration found. Jan 29 12:04:48.729660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:04:48.856084 systemd[1]: Reloading finished in 406 ms. Jan 29 12:04:48.909708 kubelet[2308]: I0129 12:04:48.909610 2308 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:48.909878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:48.918937 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:04:48.919264 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:48.926675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:04:49.173160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:04:49.181462 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:04:49.268440 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:49.268440 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:04:49.268440 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:04:49.269129 kubelet[2667]: I0129 12:04:49.268555 2667 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:49.275274 kubelet[2667]: I0129 12:04:49.275226 2667 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:04:49.275274 kubelet[2667]: I0129 12:04:49.275252 2667 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:04:49.275581 kubelet[2667]: I0129 12:04:49.275544 2667 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:04:49.277202 kubelet[2667]: I0129 12:04:49.277164 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:04:49.279556 kubelet[2667]: I0129 12:04:49.278864 2667 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:04:49.291992 kubelet[2667]: I0129 12:04:49.291967 2667 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:04:49.292465 kubelet[2667]: I0129 12:04:49.292407 2667 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:04:49.292699 kubelet[2667]: I0129 12:04:49.292453 2667 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:04:49.292888 kubelet[2667]: I0129 12:04:49.292715 2667 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:04:49.292888 kubelet[2667]: I0129 12:04:49.292733 2667 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:04:49.292888 kubelet[2667]: I0129 12:04:49.292793 2667 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:49.293044 kubelet[2667]: I0129 12:04:49.292949 2667 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:04:49.293044 kubelet[2667]: I0129 
12:04:49.292968 2667 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:04:49.293044 kubelet[2667]: I0129 12:04:49.293000 2667 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:04:49.293044 kubelet[2667]: I0129 12:04:49.293027 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:04:49.295638 kubelet[2667]: I0129 12:04:49.294035 2667 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:04:49.295638 kubelet[2667]: I0129 12:04:49.294273 2667 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:04:49.295638 kubelet[2667]: I0129 12:04:49.294820 2667 server.go:1264] "Started kubelet" Jan 29 12:04:49.297061 kubelet[2667]: I0129 12:04:49.297036 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:04:49.307168 kubelet[2667]: I0129 12:04:49.307103 2667 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:04:49.308628 kubelet[2667]: I0129 12:04:49.308604 2667 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:04:49.312324 kubelet[2667]: I0129 12:04:49.309981 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:04:49.312324 kubelet[2667]: I0129 12:04:49.310242 2667 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:04:49.314175 kubelet[2667]: I0129 12:04:49.313058 2667 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:04:49.319354 kubelet[2667]: I0129 12:04:49.315423 2667 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:04:49.319354 kubelet[2667]: I0129 12:04:49.315624 2667 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:04:49.319354 kubelet[2667]: I0129 12:04:49.319153 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:04:49.322199 kubelet[2667]: I0129 12:04:49.320810 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:04:49.322199 kubelet[2667]: I0129 12:04:49.320852 2667 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:04:49.322199 kubelet[2667]: I0129 12:04:49.320873 2667 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:04:49.322199 kubelet[2667]: E0129 12:04:49.320959 2667 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:04:49.339349 kubelet[2667]: I0129 12:04:49.338846 2667 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:04:49.339349 kubelet[2667]: I0129 12:04:49.338946 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:04:49.352341 kubelet[2667]: I0129 12:04:49.350029 2667 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:04:49.362466 kubelet[2667]: E0129 12:04:49.362380 2667 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:04:49.421275 kubelet[2667]: E0129 12:04:49.421165 2667 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:04:49.427361 kubelet[2667]: I0129 12:04:49.424122 2667 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.433649 kubelet[2667]: I0129 12:04:49.433613 2667 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:04:49.433882 kubelet[2667]: I0129 12:04:49.433861 2667 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:04:49.435638 kubelet[2667]: I0129 12:04:49.435606 2667 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:04:49.436012 kubelet[2667]: I0129 12:04:49.435988 2667 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:04:49.436722 kubelet[2667]: I0129 12:04:49.436664 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:04:49.436722 kubelet[2667]: I0129 12:04:49.436721 2667 policy_none.go:49] "None policy: Start" Jan 29 12:04:49.438555 kubelet[2667]: I0129 12:04:49.438531 2667 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.439852 kubelet[2667]: I0129 12:04:49.439833 2667 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.440686 kubelet[2667]: I0129 12:04:49.439642 2667 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:04:49.440791 kubelet[2667]: I0129 12:04:49.440698 2667 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:04:49.441824 kubelet[2667]: I0129 12:04:49.441415 2667 state_mem.go:75] "Updated machine memory state" Jan 29 12:04:49.451147 kubelet[2667]: I0129 12:04:49.451118 2667 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:04:49.453817 kubelet[2667]: I0129 12:04:49.451382 2667 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:04:49.453817 kubelet[2667]: I0129 12:04:49.451512 2667 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:04:49.476563 sudo[2698]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 29 12:04:49.477163 sudo[2698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 29 12:04:49.622372 kubelet[2667]: I0129 12:04:49.622320 2667 topology_manager.go:215] "Topology Admit Handler" podUID="ec93f3028ed5e01285be816c167f9d87" podNamespace="kube-system" podName="kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.624028 kubelet[2667]: I0129 12:04:49.622672 2667 topology_manager.go:215] "Topology Admit Handler" podUID="f01e04d2b692b1905f32f9b17eca73fe" podNamespace="kube-system" podName="kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.624300 kubelet[2667]: I0129 12:04:49.624274 2667 topology_manager.go:215] "Topology Admit Handler" podUID="31281151e9459076137410ad5275da45" podNamespace="kube-system" podName="kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.640909 kubelet[2667]: W0129 12:04:49.640873 2667 warnings.go:70] metadata.name: this is 
used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 12:04:49.642017 kubelet[2667]: W0129 12:04:49.641993 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 12:04:49.646908 kubelet[2667]: E0129 12:04:49.646507 2667 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.646908 kubelet[2667]: W0129 12:04:49.646622 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 12:04:49.717280 kubelet[2667]: I0129 12:04:49.717151 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.717280 kubelet[2667]: I0129 12:04:49.717207 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.717280 kubelet[2667]: I0129 12:04:49.717244 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec93f3028ed5e01285be816c167f9d87-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"ec93f3028ed5e01285be816c167f9d87\") " pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.717280 kubelet[2667]: I0129 12:04:49.717277 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec93f3028ed5e01285be816c167f9d87-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"ec93f3028ed5e01285be816c167f9d87\") " pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.717625 kubelet[2667]: I0129 12:04:49.717329 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.717625 kubelet[2667]: I0129 12:04:49.717356 2667 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.717625 kubelet[2667]: I0129 12:04:49.717387 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f01e04d2b692b1905f32f9b17eca73fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"f01e04d2b692b1905f32f9b17eca73fe\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.717625 kubelet[2667]: I0129 12:04:49.717415 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/31281151e9459076137410ad5275da45-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"31281151e9459076137410ad5275da45\") " pod="kube-system/kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:49.717850 kubelet[2667]: I0129 12:04:49.717443 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec93f3028ed5e01285be816c167f9d87-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" (UID: \"ec93f3028ed5e01285be816c167f9d87\") " pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:50.213275 sudo[2698]: pam_unix(sudo:session): session closed for user root Jan 29 12:04:50.302723 kubelet[2667]: I0129 12:04:50.302675 2667 apiserver.go:52] "Watching apiserver" Jan 29 12:04:50.315653 kubelet[2667]: I0129 12:04:50.315616 2667 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:04:50.421148 kubelet[2667]: W0129 12:04:50.420829 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 29 12:04:50.421148 kubelet[2667]: E0129 12:04:50.420933 2667 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" Jan 29 12:04:50.477809 kubelet[2667]: I0129 12:04:50.477483 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" podStartSLOduration=2.477458463 podStartE2EDuration="2.477458463s" podCreationTimestamp="2025-01-29 12:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:50.449661399 +0000 UTC m=+1.260955684" watchObservedRunningTime="2025-01-29 12:04:50.477458463 +0000 UTC m=+1.288752736" Jan 29 12:04:50.509025 kubelet[2667]: I0129 12:04:50.508950 2667 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" podStartSLOduration=1.508924003 podStartE2EDuration="1.508924003s" podCreationTimestamp="2025-01-29 12:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:50.505953673 +0000 UTC m=+1.317247947" watchObservedRunningTime="2025-01-29 12:04:50.508924003 +0000 UTC m=+1.320218285" Jan 29 12:04:50.509269 kubelet[2667]: I0129 12:04:50.509059 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" podStartSLOduration=1.5090521300000002 podStartE2EDuration="1.50905213s" podCreationTimestamp="2025-01-29 12:04:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:50.478656925 +0000 UTC m=+1.289951210" watchObservedRunningTime="2025-01-29 12:04:50.50905213 +0000 UTC m=+1.320346413" Jan 29 12:04:52.091265 sudo[1713]: pam_unix(sudo:session): session closed for user root Jan 29 12:04:52.134991 sshd[1710]: pam_unix(sshd:session): session closed for user core Jan 29 12:04:52.141005 systemd[1]: sshd@6-10.128.0.18:22-147.75.109.163:60860.service: Deactivated successfully. Jan 29 12:04:52.144112 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:04:52.144364 systemd[1]: session-7.scope: Consumed 6.239s CPU time, 197.7M memory peak, 0B memory swap peak. Jan 29 12:04:52.145268 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:04:52.146856 systemd-logind[1456]: Removed session 7. Jan 29 12:05:03.713174 kubelet[2667]: I0129 12:05:03.713134 2667 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:05:03.713795 containerd[1467]: time="2025-01-29T12:05:03.713675730Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:05:03.714212 kubelet[2667]: I0129 12:05:03.713932 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:05:04.448985 kubelet[2667]: I0129 12:05:04.448886 2667 topology_manager.go:215] "Topology Admit Handler" podUID="cc301f95-e487-40b4-9a9c-835fdef7786b" podNamespace="kube-system" podName="kube-proxy-d877w" Jan 29 12:05:04.463771 systemd[1]: Created slice kubepods-besteffort-podcc301f95_e487_40b4_9a9c_835fdef7786b.slice - libcontainer container kubepods-besteffort-podcc301f95_e487_40b4_9a9c_835fdef7786b.slice. Jan 29 12:05:04.493336 kubelet[2667]: I0129 12:05:04.493110 2667 topology_manager.go:215] "Topology Admit Handler" podUID="fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" podNamespace="kube-system" podName="cilium-kgz5z" Jan 29 12:05:04.506841 systemd[1]: Created slice kubepods-burstable-podfbdccb3d_4a0c_4fbe_a4de_7fb0e056240f.slice - libcontainer container kubepods-burstable-podfbdccb3d_4a0c_4fbe_a4de_7fb0e056240f.slice. 
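The pod_startup_latency_tracker entries above are straightforward timestamp arithmetic: podStartSLOduration is the observed running time minus the pod creation time, with the image-pull interval excluded (here the pull timestamps are the zero value, so nothing is subtracted). A small Go check of the kube-apiserver number, using the timestamps exactly as logged:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2025-01-29 12:04:48 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	observed, err := time.Parse(layout, "2025-01-29 12:04:50.477458463 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	// With zero-valued pull timestamps, the SLO duration is simply
    	// observedRunningTime - podCreationTimestamp.
    	fmt.Println(observed.Sub(created)) // 2.477458463s
    }

The later cilium-operator entry is consistent with the pull interval being excluded: its podStartE2EDuration of 14.590517899s minus the roughly 3.147s between firstStartedPulling and lastFinishedPulling gives the reported podStartSLOduration of 11.44358632s.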
Jan 29 12:05:04.512147 kubelet[2667]: I0129 12:05:04.512105 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cni-path\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.512358 kubelet[2667]: I0129 12:05:04.512336 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-host-proc-sys-net\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.512539 kubelet[2667]: I0129 12:05:04.512517 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-host-proc-sys-kernel\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.512740 kubelet[2667]: I0129 12:05:04.512718 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn29x\" (UniqueName: \"kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-kube-api-access-kn29x\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.512968 kubelet[2667]: I0129 12:05:04.512921 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-lib-modules\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.514824 kubelet[2667]: I0129 12:05:04.514798 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-bpf-maps\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.514997 kubelet[2667]: I0129 12:05:04.514977 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-clustermesh-secrets\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.515241 kubelet[2667]: I0129 12:05:04.515169 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-run\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.515401 kubelet[2667]: I0129 12:05:04.515202 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-cgroup\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.515656 kubelet[2667]: I0129 12:05:04.515518 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-xtables-lock\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.515656 kubelet[2667]: I0129 12:05:04.515602 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-config-path\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.515656 kubelet[2667]: I0129 12:05:04.515632 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc301f95-e487-40b4-9a9c-835fdef7786b-xtables-lock\") pod \"kube-proxy-d877w\" (UID: \"cc301f95-e487-40b4-9a9c-835fdef7786b\") " pod="kube-system/kube-proxy-d877w" Jan 29 12:05:04.515982 kubelet[2667]: I0129 12:05:04.515934 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgtbv\" (UniqueName: \"kubernetes.io/projected/cc301f95-e487-40b4-9a9c-835fdef7786b-kube-api-access-qgtbv\") pod \"kube-proxy-d877w\" (UID: \"cc301f95-e487-40b4-9a9c-835fdef7786b\") " pod="kube-system/kube-proxy-d877w" Jan 29 12:05:04.516169 kubelet[2667]: I0129 12:05:04.516091 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-etc-cni-netd\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.516403 kubelet[2667]: I0129 12:05:04.516265 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hubble-tls\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.516403 kubelet[2667]: I0129 12:05:04.516344 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hostproc\") pod \"cilium-kgz5z\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") " pod="kube-system/cilium-kgz5z" Jan 29 12:05:04.518396 kubelet[2667]: I0129 12:05:04.516690 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cc301f95-e487-40b4-9a9c-835fdef7786b-kube-proxy\") pod \"kube-proxy-d877w\" (UID: \"cc301f95-e487-40b4-9a9c-835fdef7786b\") " pod="kube-system/kube-proxy-d877w" Jan 29 12:05:04.518396 kubelet[2667]: I0129 12:05:04.518352 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc301f95-e487-40b4-9a9c-835fdef7786b-lib-modules\") pod \"kube-proxy-d877w\" (UID: \"cc301f95-e487-40b4-9a9c-835fdef7786b\") " pod="kube-system/kube-proxy-d877w" Jan 29 12:05:04.536698 kubelet[2667]: W0129 12:05:04.536359 2667 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace 
"kube-system": no relationship found between node 'ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal' and this object Jan 29 12:05:04.536698 kubelet[2667]: E0129 12:05:04.536426 2667 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal' and this object Jan 29 12:05:04.536698 kubelet[2667]: W0129 12:05:04.536575 2667 reflector.go:547] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal' and this object Jan 29 12:05:04.536698 kubelet[2667]: E0129 12:05:04.536601 2667 reflector.go:150] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal' and this object Jan 29 12:05:04.537021 kubelet[2667]: W0129 12:05:04.536656 2667 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal' and this object Jan 29 12:05:04.537021 kubelet[2667]: E0129 12:05:04.536672 2667 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal' and this object Jan 29 12:05:04.777928 containerd[1467]: time="2025-01-29T12:05:04.777781881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d877w,Uid:cc301f95-e487-40b4-9a9c-835fdef7786b,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:04.841339 containerd[1467]: time="2025-01-29T12:05:04.839977454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:04.841339 containerd[1467]: time="2025-01-29T12:05:04.840504693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:04.841763 containerd[1467]: time="2025-01-29T12:05:04.841102086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:04.841763 containerd[1467]: time="2025-01-29T12:05:04.841255095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:04.844584 kubelet[2667]: I0129 12:05:04.844536 2667 topology_manager.go:215] "Topology Admit Handler" podUID="e43c240b-1052-40ac-919c-34be121e1e40" podNamespace="kube-system" podName="cilium-operator-599987898-dlfnw" Jan 29 12:05:04.872971 systemd[1]: Created slice kubepods-besteffort-pode43c240b_1052_40ac_919c_34be121e1e40.slice - libcontainer container kubepods-besteffort-pode43c240b_1052_40ac_919c_34be121e1e40.slice. Jan 29 12:05:04.911577 systemd[1]: Started cri-containerd-ee855288fbfc5440dd7f4ce2f49095c20a1a23ecdeab6c911f1875eaa4d3d4da.scope - libcontainer container ee855288fbfc5440dd7f4ce2f49095c20a1a23ecdeab6c911f1875eaa4d3d4da. Jan 29 12:05:04.922065 kubelet[2667]: I0129 12:05:04.921928 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md8ng\" (UniqueName: \"kubernetes.io/projected/e43c240b-1052-40ac-919c-34be121e1e40-kube-api-access-md8ng\") pod \"cilium-operator-599987898-dlfnw\" (UID: \"e43c240b-1052-40ac-919c-34be121e1e40\") " pod="kube-system/cilium-operator-599987898-dlfnw" Jan 29 12:05:04.922570 kubelet[2667]: I0129 12:05:04.922270 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e43c240b-1052-40ac-919c-34be121e1e40-cilium-config-path\") pod \"cilium-operator-599987898-dlfnw\" (UID: \"e43c240b-1052-40ac-919c-34be121e1e40\") " pod="kube-system/cilium-operator-599987898-dlfnw" Jan 29 12:05:04.942184 containerd[1467]: time="2025-01-29T12:05:04.942036366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d877w,Uid:cc301f95-e487-40b4-9a9c-835fdef7786b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee855288fbfc5440dd7f4ce2f49095c20a1a23ecdeab6c911f1875eaa4d3d4da\"" Jan 29 12:05:04.946247 containerd[1467]: time="2025-01-29T12:05:04.946189309Z" level=info msg="CreateContainer within sandbox \"ee855288fbfc5440dd7f4ce2f49095c20a1a23ecdeab6c911f1875eaa4d3d4da\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:05:04.964588 containerd[1467]: time="2025-01-29T12:05:04.964533533Z" level=info msg="CreateContainer within sandbox \"ee855288fbfc5440dd7f4ce2f49095c20a1a23ecdeab6c911f1875eaa4d3d4da\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0f0d0b547b2c63893c79361d91148463d021b4a9333e81ac9b02ba74c74b3df\"" Jan 29 12:05:04.965224 containerd[1467]: time="2025-01-29T12:05:04.965184665Z" level=info msg="StartContainer for \"a0f0d0b547b2c63893c79361d91148463d021b4a9333e81ac9b02ba74c74b3df\"" Jan 29 12:05:05.000498 systemd[1]: Started cri-containerd-a0f0d0b547b2c63893c79361d91148463d021b4a9333e81ac9b02ba74c74b3df.scope - libcontainer container a0f0d0b547b2c63893c79361d91148463d021b4a9333e81ac9b02ba74c74b3df. Jan 29 12:05:05.044984 containerd[1467]: time="2025-01-29T12:05:05.044796628Z" level=info msg="StartContainer for \"a0f0d0b547b2c63893c79361d91148463d021b4a9333e81ac9b02ba74c74b3df\" returns successfully" Jan 29 12:05:05.620899 kubelet[2667]: E0129 12:05:05.620844 2667 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Jan 29 12:05:05.621092 kubelet[2667]: E0129 12:05:05.620965 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-clustermesh-secrets podName:fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f nodeName:}" failed. 
No retries permitted until 2025-01-29 12:05:06.120935769 +0000 UTC m=+16.932230046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-clustermesh-secrets") pod "cilium-kgz5z" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:05:05.621381 kubelet[2667]: E0129 12:05:05.621280 2667 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:05:05.621381 kubelet[2667]: E0129 12:05:05.621293 2667 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Jan 29 12:05:05.621381 kubelet[2667]: E0129 12:05:05.621332 2667 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-kgz5z: failed to sync secret cache: timed out waiting for the condition Jan 29 12:05:05.621381 kubelet[2667]: E0129 12:05:05.621373 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-config-path podName:fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f nodeName:}" failed. No retries permitted until 2025-01-29 12:05:06.121355582 +0000 UTC m=+16.932649868 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-config-path") pod "cilium-kgz5z" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:05:05.621702 kubelet[2667]: E0129 12:05:05.621395 2667 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hubble-tls podName:fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f nodeName:}" failed. No retries permitted until 2025-01-29 12:05:06.121386051 +0000 UTC m=+16.932680309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hubble-tls") pod "cilium-kgz5z" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:05:05.666983 systemd[1]: run-containerd-runc-k8s.io-ee855288fbfc5440dd7f4ce2f49095c20a1a23ecdeab6c911f1875eaa4d3d4da-runc.hEfGUm.mount: Deactivated successfully. Jan 29 12:05:06.082451 containerd[1467]: time="2025-01-29T12:05:06.082196965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dlfnw,Uid:e43c240b-1052-40ac-919c-34be121e1e40,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:06.118506 containerd[1467]: time="2025-01-29T12:05:06.118389114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:06.118919 containerd[1467]: time="2025-01-29T12:05:06.118534985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:06.118919 containerd[1467]: time="2025-01-29T12:05:06.118565057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:06.118919 containerd[1467]: time="2025-01-29T12:05:06.118699732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:06.168524 systemd[1]: Started cri-containerd-e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b.scope - libcontainer container e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b. Jan 29 12:05:06.226386 containerd[1467]: time="2025-01-29T12:05:06.226288040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-dlfnw,Uid:e43c240b-1052-40ac-919c-34be121e1e40,Namespace:kube-system,Attempt:0,} returns sandbox id \"e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b\"" Jan 29 12:05:06.229257 containerd[1467]: time="2025-01-29T12:05:06.229217277Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 12:05:06.313301 containerd[1467]: time="2025-01-29T12:05:06.312812596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgz5z,Uid:fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f,Namespace:kube-system,Attempt:0,}" Jan 29 12:05:06.345230 containerd[1467]: time="2025-01-29T12:05:06.344735406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:05:06.345230 containerd[1467]: time="2025-01-29T12:05:06.344830059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:05:06.345230 containerd[1467]: time="2025-01-29T12:05:06.344874694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:06.345230 containerd[1467]: time="2025-01-29T12:05:06.345010732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:05:06.371540 systemd[1]: Started cri-containerd-d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494.scope - libcontainer container d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494. Jan 29 12:05:06.402598 containerd[1467]: time="2025-01-29T12:05:06.402509362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kgz5z,Uid:fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\"" Jan 29 12:05:07.148228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2024763723.mount: Deactivated successfully. 
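The MountVolume.SetUp failures a few entries back are paced by the kubelet's per-volume retry backoff: the first retry is deferred by the logged durationBeforeRetry of 500ms, and repeated failures wait progressively longer. A minimal sketch of that pacing; the 500ms comes from the log, while the doubling factor and the cap are illustrative assumptions rather than the kubelet's exact constants:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 500 * time.Millisecond // "durationBeforeRetry 500ms" from the log
    	maxDelay := 2 * time.Minute     // assumed cap, for illustration only
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

In the log the retries stop mattering almost immediately: once the secret and configmap caches sync, the cilium-kgz5z volumes mount and the sandbox is created a second later.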
Jan 29 12:05:09.368813 containerd[1467]: time="2025-01-29T12:05:09.368742152Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:09.370244 containerd[1467]: time="2025-01-29T12:05:09.370170594Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 29 12:05:09.371737 containerd[1467]: time="2025-01-29T12:05:09.371669640Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:09.374210 containerd[1467]: time="2025-01-29T12:05:09.373533330Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.14426524s" Jan 29 12:05:09.374210 containerd[1467]: time="2025-01-29T12:05:09.373580479Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 29 12:05:09.375996 containerd[1467]: time="2025-01-29T12:05:09.375609091Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 12:05:09.378152 containerd[1467]: time="2025-01-29T12:05:09.378100769Z" level=info msg="CreateContainer within sandbox \"e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 12:05:09.397209 containerd[1467]: time="2025-01-29T12:05:09.397169290Z" level=info msg="CreateContainer within sandbox \"e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\"" Jan 29 12:05:09.398949 containerd[1467]: time="2025-01-29T12:05:09.397823390Z" level=info msg="StartContainer for \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\"" Jan 29 12:05:09.448506 systemd[1]: Started cri-containerd-47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7.scope - libcontainer container 47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7. 
Jan 29 12:05:09.483393 containerd[1467]: time="2025-01-29T12:05:09.483288842Z" level=info msg="StartContainer for \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\" returns successfully" Jan 29 12:05:10.718176 kubelet[2667]: I0129 12:05:10.718093 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d877w" podStartSLOduration=6.71807114 podStartE2EDuration="6.71807114s" podCreationTimestamp="2025-01-29 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:05.439827151 +0000 UTC m=+16.251121434" watchObservedRunningTime="2025-01-29 12:05:10.71807114 +0000 UTC m=+21.529365457" Jan 29 12:05:14.964157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3916438245.mount: Deactivated successfully. Jan 29 12:05:17.576157 containerd[1467]: time="2025-01-29T12:05:17.576082494Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:17.577688 containerd[1467]: time="2025-01-29T12:05:17.577619698Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 29 12:05:17.578549 containerd[1467]: time="2025-01-29T12:05:17.578407321Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:05:17.580712 containerd[1467]: time="2025-01-29T12:05:17.580653017Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.20500102s" Jan 29 12:05:17.580712 containerd[1467]: time="2025-01-29T12:05:17.580701344Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 29 12:05:17.583911 containerd[1467]: time="2025-01-29T12:05:17.583861256Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:05:17.602190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993210392.mount: Deactivated successfully. Jan 29 12:05:17.603014 containerd[1467]: time="2025-01-29T12:05:17.602284059Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\"" Jan 29 12:05:17.603759 containerd[1467]: time="2025-01-29T12:05:17.603517910Z" level=info msg="StartContainer for \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\"" Jan 29 12:05:17.644589 systemd[1]: run-containerd-runc-k8s.io-5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220-runc.WqVMvB.mount: Deactivated successfully. 
Jan 29 12:05:17.654515 systemd[1]: Started cri-containerd-5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220.scope - libcontainer container 5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220. Jan 29 12:05:17.689339 containerd[1467]: time="2025-01-29T12:05:17.687357506Z" level=info msg="StartContainer for \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\" returns successfully" Jan 29 12:05:17.698100 systemd[1]: cri-containerd-5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220.scope: Deactivated successfully. Jan 29 12:05:18.590673 kubelet[2667]: I0129 12:05:18.590542 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-dlfnw" podStartSLOduration=11.44358632 podStartE2EDuration="14.590517899s" podCreationTimestamp="2025-01-29 12:05:04 +0000 UTC" firstStartedPulling="2025-01-29 12:05:06.22801806 +0000 UTC m=+17.039312335" lastFinishedPulling="2025-01-29 12:05:09.374949639 +0000 UTC m=+20.186243914" observedRunningTime="2025-01-29 12:05:10.718245829 +0000 UTC m=+21.529540095" watchObservedRunningTime="2025-01-29 12:05:18.590517899 +0000 UTC m=+29.401812182" Jan 29 12:05:18.595838 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220-rootfs.mount: Deactivated successfully. Jan 29 12:05:19.781191 containerd[1467]: time="2025-01-29T12:05:19.781094784Z" level=info msg="shim disconnected" id=5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220 namespace=k8s.io Jan 29 12:05:19.781191 containerd[1467]: time="2025-01-29T12:05:19.781189702Z" level=warning msg="cleaning up after shim disconnected" id=5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220 namespace=k8s.io Jan 29 12:05:19.781191 containerd[1467]: time="2025-01-29T12:05:19.781204520Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:05:20.581163 containerd[1467]: time="2025-01-29T12:05:20.580946314Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:05:20.603537 containerd[1467]: time="2025-01-29T12:05:20.603260614Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\"" Jan 29 12:05:20.606094 containerd[1467]: time="2025-01-29T12:05:20.604617031Z" level=info msg="StartContainer for \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\"" Jan 29 12:05:20.651557 systemd[1]: Started cri-containerd-c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca.scope - libcontainer container c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca. Jan 29 12:05:20.686269 containerd[1467]: time="2025-01-29T12:05:20.686207451Z" level=info msg="StartContainer for \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\" returns successfully" Jan 29 12:05:20.699279 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:05:20.700177 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:05:20.700340 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:05:20.708721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 29 12:05:20.709058 systemd[1]: cri-containerd-c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca.scope: Deactivated successfully. Jan 29 12:05:20.735166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca-rootfs.mount: Deactivated successfully. Jan 29 12:05:20.738539 containerd[1467]: time="2025-01-29T12:05:20.738232841Z" level=info msg="shim disconnected" id=c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca namespace=k8s.io Jan 29 12:05:20.738539 containerd[1467]: time="2025-01-29T12:05:20.738323455Z" level=warning msg="cleaning up after shim disconnected" id=c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca namespace=k8s.io Jan 29 12:05:20.738539 containerd[1467]: time="2025-01-29T12:05:20.738342890Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:05:20.745896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:05:20.758695 containerd[1467]: time="2025-01-29T12:05:20.758632901Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:05:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:05:21.585989 containerd[1467]: time="2025-01-29T12:05:21.585782084Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:05:21.615529 containerd[1467]: time="2025-01-29T12:05:21.615238183Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\"" Jan 29 12:05:21.617294 containerd[1467]: time="2025-01-29T12:05:21.615905635Z" level=info msg="StartContainer for \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\"" Jan 29 12:05:21.664539 systemd[1]: Started cri-containerd-4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99.scope - libcontainer container 4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99. Jan 29 12:05:21.704298 containerd[1467]: time="2025-01-29T12:05:21.703633760Z" level=info msg="StartContainer for \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\" returns successfully" Jan 29 12:05:21.706559 systemd[1]: cri-containerd-4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99.scope: Deactivated successfully. Jan 29 12:05:21.734634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99-rootfs.mount: Deactivated successfully. 
Jan 29 12:05:21.736604 containerd[1467]: time="2025-01-29T12:05:21.736533493Z" level=info msg="shim disconnected" id=4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99 namespace=k8s.io Jan 29 12:05:21.736764 containerd[1467]: time="2025-01-29T12:05:21.736605064Z" level=warning msg="cleaning up after shim disconnected" id=4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99 namespace=k8s.io Jan 29 12:05:21.736764 containerd[1467]: time="2025-01-29T12:05:21.736620187Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:05:22.590707 containerd[1467]: time="2025-01-29T12:05:22.590642554Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:05:22.617507 containerd[1467]: time="2025-01-29T12:05:22.617099876Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\"" Jan 29 12:05:22.619647 containerd[1467]: time="2025-01-29T12:05:22.618483365Z" level=info msg="StartContainer for \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\"" Jan 29 12:05:22.688511 systemd[1]: Started cri-containerd-6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24.scope - libcontainer container 6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24. Jan 29 12:05:22.720145 systemd[1]: cri-containerd-6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24.scope: Deactivated successfully. Jan 29 12:05:22.721873 containerd[1467]: time="2025-01-29T12:05:22.721823063Z" level=info msg="StartContainer for \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\" returns successfully" Jan 29 12:05:22.747995 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24-rootfs.mount: Deactivated successfully. 
Jan 29 12:05:22.749998 containerd[1467]: time="2025-01-29T12:05:22.749873464Z" level=info msg="shim disconnected" id=6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24 namespace=k8s.io Jan 29 12:05:22.750208 containerd[1467]: time="2025-01-29T12:05:22.750013365Z" level=warning msg="cleaning up after shim disconnected" id=6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24 namespace=k8s.io Jan 29 12:05:22.750208 containerd[1467]: time="2025-01-29T12:05:22.750034222Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:05:23.599953 containerd[1467]: time="2025-01-29T12:05:23.598893548Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:05:23.620622 containerd[1467]: time="2025-01-29T12:05:23.620246956Z" level=info msg="CreateContainer within sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\"" Jan 29 12:05:23.622183 containerd[1467]: time="2025-01-29T12:05:23.620814035Z" level=info msg="StartContainer for \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\"" Jan 29 12:05:23.670018 systemd[1]: run-containerd-runc-k8s.io-b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628-runc.TGGBPk.mount: Deactivated successfully. Jan 29 12:05:23.679480 systemd[1]: Started cri-containerd-b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628.scope - libcontainer container b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628. Jan 29 12:05:23.718634 containerd[1467]: time="2025-01-29T12:05:23.718590091Z" level=info msg="StartContainer for \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\" returns successfully" Jan 29 12:05:23.908271 kubelet[2667]: I0129 12:05:23.907627 2667 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:05:23.945337 kubelet[2667]: I0129 12:05:23.944631 2667 topology_manager.go:215] "Topology Admit Handler" podUID="e4865fa3-4658-4c99-b1da-6d6178e11ecd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fvz82" Jan 29 12:05:23.945337 kubelet[2667]: I0129 12:05:23.944843 2667 topology_manager.go:215] "Topology Admit Handler" podUID="c74d56fb-52d2-41fe-a2ee-4c9a03d0bb93" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9k7hl" Jan 29 12:05:23.963996 systemd[1]: Created slice kubepods-burstable-podc74d56fb_52d2_41fe_a2ee_4c9a03d0bb93.slice - libcontainer container kubepods-burstable-podc74d56fb_52d2_41fe_a2ee_4c9a03d0bb93.slice. Jan 29 12:05:23.971802 systemd[1]: Created slice kubepods-burstable-pode4865fa3_4658_4c99_b1da_6d6178e11ecd.slice - libcontainer container kubepods-burstable-pode4865fa3_4658_4c99_b1da_6d6178e11ecd.slice. 
Jan 29 12:05:24.049931 kubelet[2667]: I0129 12:05:24.049566 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c74d56fb-52d2-41fe-a2ee-4c9a03d0bb93-config-volume\") pod \"coredns-7db6d8ff4d-9k7hl\" (UID: \"c74d56fb-52d2-41fe-a2ee-4c9a03d0bb93\") " pod="kube-system/coredns-7db6d8ff4d-9k7hl"
Jan 29 12:05:24.049931 kubelet[2667]: I0129 12:05:24.049630 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvq7\" (UniqueName: \"kubernetes.io/projected/c74d56fb-52d2-41fe-a2ee-4c9a03d0bb93-kube-api-access-2rvq7\") pod \"coredns-7db6d8ff4d-9k7hl\" (UID: \"c74d56fb-52d2-41fe-a2ee-4c9a03d0bb93\") " pod="kube-system/coredns-7db6d8ff4d-9k7hl"
Jan 29 12:05:24.049931 kubelet[2667]: I0129 12:05:24.049666 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4865fa3-4658-4c99-b1da-6d6178e11ecd-config-volume\") pod \"coredns-7db6d8ff4d-fvz82\" (UID: \"e4865fa3-4658-4c99-b1da-6d6178e11ecd\") " pod="kube-system/coredns-7db6d8ff4d-fvz82"
Jan 29 12:05:24.049931 kubelet[2667]: I0129 12:05:24.049718 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tssx\" (UniqueName: \"kubernetes.io/projected/e4865fa3-4658-4c99-b1da-6d6178e11ecd-kube-api-access-8tssx\") pod \"coredns-7db6d8ff4d-fvz82\" (UID: \"e4865fa3-4658-4c99-b1da-6d6178e11ecd\") " pod="kube-system/coredns-7db6d8ff4d-fvz82"
Jan 29 12:05:24.280604 containerd[1467]: time="2025-01-29T12:05:24.279228346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9k7hl,Uid:c74d56fb-52d2-41fe-a2ee-4c9a03d0bb93,Namespace:kube-system,Attempt:0,}"
Jan 29 12:05:24.281156 containerd[1467]: time="2025-01-29T12:05:24.281111871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fvz82,Uid:e4865fa3-4658-4c99-b1da-6d6178e11ecd,Namespace:kube-system,Attempt:0,}"
Jan 29 12:05:25.973186 systemd-networkd[1382]: cilium_host: Link UP
Jan 29 12:05:25.973495 systemd-networkd[1382]: cilium_net: Link UP
Jan 29 12:05:25.973791 systemd-networkd[1382]: cilium_net: Gained carrier
Jan 29 12:05:25.974080 systemd-networkd[1382]: cilium_host: Gained carrier
Jan 29 12:05:26.114133 systemd-networkd[1382]: cilium_vxlan: Link UP
Jan 29 12:05:26.114157 systemd-networkd[1382]: cilium_vxlan: Gained carrier
Jan 29 12:05:26.147548 systemd-networkd[1382]: cilium_host: Gained IPv6LL
Jan 29 12:05:26.392513 kernel: NET: Registered PF_ALG protocol family
Jan 29 12:05:26.660017 systemd-networkd[1382]: cilium_net: Gained IPv6LL
Jan 29 12:05:27.171979 systemd-networkd[1382]: cilium_vxlan: Gained IPv6LL
Jan 29 12:05:27.209207 systemd-networkd[1382]: lxc_health: Link UP
Jan 29 12:05:27.227504 systemd-networkd[1382]: lxc_health: Gained carrier
Jan 29 12:05:27.849012 systemd-networkd[1382]: lxcd23f4d215bf5: Link UP
Jan 29 12:05:27.859359 kernel: eth0: renamed from tmp7b39d
Jan 29 12:05:27.876785 systemd-networkd[1382]: lxcd23f4d215bf5: Gained carrier
Jan 29 12:05:27.895021 systemd-networkd[1382]: lxc12c826a235b6: Link UP
Jan 29 12:05:27.904757 kernel: eth0: renamed from tmp77631
Jan 29 12:05:27.918227 systemd-networkd[1382]: lxc12c826a235b6: Gained carrier
Jan 29 12:05:28.348042 kubelet[2667]: I0129 12:05:28.346980 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kgz5z" podStartSLOduration=13.169775495 podStartE2EDuration="24.346957304s" podCreationTimestamp="2025-01-29 12:05:04 +0000 UTC" firstStartedPulling="2025-01-29 12:05:06.404717283 +0000 UTC m=+17.216011556" lastFinishedPulling="2025-01-29 12:05:17.581899089 +0000 UTC m=+28.393193365" observedRunningTime="2025-01-29 12:05:24.624127417 +0000 UTC m=+35.435421702" watchObservedRunningTime="2025-01-29 12:05:28.346957304 +0000 UTC m=+39.158251588"
Jan 29 12:05:28.964296 systemd-networkd[1382]: lxc_health: Gained IPv6LL
Jan 29 12:05:29.606166 systemd-networkd[1382]: lxcd23f4d215bf5: Gained IPv6LL
Jan 29 12:05:29.667963 systemd-networkd[1382]: lxc12c826a235b6: Gained IPv6LL
Jan 29 12:05:31.760050 ntpd[1435]: Listen normally on 8 cilium_host 192.168.0.252:123
Jan 29 12:05:31.760897 ntpd[1435]: 29 Jan 12:05:31 ntpd[1435]: Listen normally on 8 cilium_host 192.168.0.252:123
Jan 29 12:05:31.760897 ntpd[1435]: 29 Jan 12:05:31 ntpd[1435]: Listen normally on 9 cilium_net [fe80::6cbd:14ff:fe70:459b%4]:123
Jan 29 12:05:31.760220 ntpd[1435]: Listen normally on 9 cilium_net [fe80::6cbd:14ff:fe70:459b%4]:123
Jan 29 12:05:31.761107 ntpd[1435]: Listen normally on 10 cilium_host [fe80::a4f4:b0ff:fe21:2e32%5]:123
Jan 29 12:05:31.761656 ntpd[1435]: 29 Jan 12:05:31 ntpd[1435]: Listen normally on 10 cilium_host [fe80::a4f4:b0ff:fe21:2e32%5]:123
Jan 29 12:05:31.761656 ntpd[1435]: 29 Jan 12:05:31 ntpd[1435]: Listen normally on 11 cilium_vxlan [fe80::fc06:1eff:fef0:35e7%6]:123
Jan 29 12:05:31.761656 ntpd[1435]: 29 Jan 12:05:31 ntpd[1435]: Listen normally on 12 lxc_health [fe80::b05f:60ff:fe1a:273d%8]:123
Jan 29 12:05:31.761656 ntpd[1435]: 29 Jan 12:05:31 ntpd[1435]: Listen normally on 13 lxcd23f4d215bf5 [fe80::54db:29ff:fe74:f35%10]:123
Jan 29 12:05:31.761656 ntpd[1435]: 29 Jan 12:05:31 ntpd[1435]: Listen normally on 14 lxc12c826a235b6 [fe80::b417:10ff:fe0a:4f44%12]:123
Jan 29 12:05:31.761217 ntpd[1435]: Listen normally on 11 cilium_vxlan [fe80::fc06:1eff:fef0:35e7%6]:123
Jan 29 12:05:31.761276 ntpd[1435]: Listen normally on 12 lxc_health [fe80::b05f:60ff:fe1a:273d%8]:123
Jan 29 12:05:31.761356 ntpd[1435]: Listen normally on 13 lxcd23f4d215bf5 [fe80::54db:29ff:fe74:f35%10]:123
Jan 29 12:05:31.761414 ntpd[1435]: Listen normally on 14 lxc12c826a235b6 [fe80::b417:10ff:fe0a:4f44%12]:123
Jan 29 12:05:32.890367 containerd[1467]: time="2025-01-29T12:05:32.889551931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:05:32.893400 containerd[1467]: time="2025-01-29T12:05:32.889635711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:05:32.893400 containerd[1467]: time="2025-01-29T12:05:32.889660995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:05:32.893400 containerd[1467]: time="2025-01-29T12:05:32.889773556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:05:32.915509 containerd[1467]: time="2025-01-29T12:05:32.915335046Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:05:32.915509 containerd[1467]: time="2025-01-29T12:05:32.915414083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:05:32.915884 containerd[1467]: time="2025-01-29T12:05:32.915492578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:05:32.915884 containerd[1467]: time="2025-01-29T12:05:32.915789037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:05:32.969385 systemd[1]: Started cri-containerd-7b39d6f04f212969cd29ff493e45d0a5f742324859042a290b9042a51afbb558.scope - libcontainer container 7b39d6f04f212969cd29ff493e45d0a5f742324859042a290b9042a51afbb558.
Jan 29 12:05:32.981477 systemd[1]: Started cri-containerd-7763176b6b5fdf736d5a3ad74574f7655b02cf168384455a6125561202f8d716.scope - libcontainer container 7763176b6b5fdf736d5a3ad74574f7655b02cf168384455a6125561202f8d716.
Jan 29 12:05:33.112330 containerd[1467]: time="2025-01-29T12:05:33.110676536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9k7hl,Uid:c74d56fb-52d2-41fe-a2ee-4c9a03d0bb93,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b39d6f04f212969cd29ff493e45d0a5f742324859042a290b9042a51afbb558\""
Jan 29 12:05:33.121172 containerd[1467]: time="2025-01-29T12:05:33.121096232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fvz82,Uid:e4865fa3-4658-4c99-b1da-6d6178e11ecd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7763176b6b5fdf736d5a3ad74574f7655b02cf168384455a6125561202f8d716\""
Jan 29 12:05:33.122761 containerd[1467]: time="2025-01-29T12:05:33.122478578Z" level=info msg="CreateContainer within sandbox \"7b39d6f04f212969cd29ff493e45d0a5f742324859042a290b9042a51afbb558\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:05:33.131738 containerd[1467]: time="2025-01-29T12:05:33.131575707Z" level=info msg="CreateContainer within sandbox \"7763176b6b5fdf736d5a3ad74574f7655b02cf168384455a6125561202f8d716\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 29 12:05:33.163920 containerd[1467]: time="2025-01-29T12:05:33.163794903Z" level=info msg="CreateContainer within sandbox \"7b39d6f04f212969cd29ff493e45d0a5f742324859042a290b9042a51afbb558\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4b32a67d0f8136f8b4d5d0c941a311a27b5c6e28790a3a6f4b9fd605e1e3d08\""
Jan 29 12:05:33.166040 containerd[1467]: time="2025-01-29T12:05:33.166002556Z" level=info msg="StartContainer for \"d4b32a67d0f8136f8b4d5d0c941a311a27b5c6e28790a3a6f4b9fd605e1e3d08\""
Jan 29 12:05:33.171716 containerd[1467]: time="2025-01-29T12:05:33.171594298Z" level=info msg="CreateContainer within sandbox \"7763176b6b5fdf736d5a3ad74574f7655b02cf168384455a6125561202f8d716\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f7c5a49e73c58d41a88f78a3527913310fe313079150691f56d16dae54731c5a\""
Jan 29 12:05:33.174383 containerd[1467]: time="2025-01-29T12:05:33.173985878Z" level=info msg="StartContainer for \"f7c5a49e73c58d41a88f78a3527913310fe313079150691f56d16dae54731c5a\""
Jan 29 12:05:33.225511 systemd[1]: Started cri-containerd-d4b32a67d0f8136f8b4d5d0c941a311a27b5c6e28790a3a6f4b9fd605e1e3d08.scope - libcontainer container d4b32a67d0f8136f8b4d5d0c941a311a27b5c6e28790a3a6f4b9fd605e1e3d08.
Jan 29 12:05:33.235560 systemd[1]: Started cri-containerd-f7c5a49e73c58d41a88f78a3527913310fe313079150691f56d16dae54731c5a.scope - libcontainer container f7c5a49e73c58d41a88f78a3527913310fe313079150691f56d16dae54731c5a.
Jan 29 12:05:33.278149 containerd[1467]: time="2025-01-29T12:05:33.278042534Z" level=info msg="StartContainer for \"d4b32a67d0f8136f8b4d5d0c941a311a27b5c6e28790a3a6f4b9fd605e1e3d08\" returns successfully" Jan 29 12:05:33.288608 containerd[1467]: time="2025-01-29T12:05:33.288556512Z" level=info msg="StartContainer for \"f7c5a49e73c58d41a88f78a3527913310fe313079150691f56d16dae54731c5a\" returns successfully" Jan 29 12:05:33.645353 kubelet[2667]: I0129 12:05:33.645262 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fvz82" podStartSLOduration=29.645241373 podStartE2EDuration="29.645241373s" podCreationTimestamp="2025-01-29 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:33.644856343 +0000 UTC m=+44.456150625" watchObservedRunningTime="2025-01-29 12:05:33.645241373 +0000 UTC m=+44.456535657" Jan 29 12:05:33.662345 kubelet[2667]: I0129 12:05:33.662214 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9k7hl" podStartSLOduration=29.662193382 podStartE2EDuration="29.662193382s" podCreationTimestamp="2025-01-29 12:05:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:05:33.661502081 +0000 UTC m=+44.472796365" watchObservedRunningTime="2025-01-29 12:05:33.662193382 +0000 UTC m=+44.473487664" Jan 29 12:05:45.036722 systemd[1]: Started sshd@8-10.128.0.18:22-147.75.109.163:42980.service - OpenSSH per-connection server daemon (147.75.109.163:42980). Jan 29 12:05:45.331063 sshd[4034]: Accepted publickey for core from 147.75.109.163 port 42980 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:05:45.333086 sshd[4034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:45.339662 systemd-logind[1456]: New session 8 of user core. Jan 29 12:05:45.343555 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:05:45.649974 sshd[4034]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:45.655783 systemd[1]: sshd@8-10.128.0.18:22-147.75.109.163:42980.service: Deactivated successfully. Jan 29 12:05:45.659153 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:05:45.660524 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:05:45.662064 systemd-logind[1456]: Removed session 8. Jan 29 12:05:50.705700 systemd[1]: Started sshd@9-10.128.0.18:22-147.75.109.163:48546.service - OpenSSH per-connection server daemon (147.75.109.163:48546). Jan 29 12:05:51.004549 sshd[4054]: Accepted publickey for core from 147.75.109.163 port 48546 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:05:51.006482 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:51.012528 systemd-logind[1456]: New session 9 of user core. Jan 29 12:05:51.022521 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:05:51.292877 sshd[4054]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:51.298559 systemd[1]: sshd@9-10.128.0.18:22-147.75.109.163:48546.service: Deactivated successfully. Jan 29 12:05:51.300866 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:05:51.301880 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit. 
Jan 29 12:05:51.303444 systemd-logind[1456]: Removed session 9. Jan 29 12:05:56.344861 systemd[1]: Started sshd@10-10.128.0.18:22-147.75.109.163:48560.service - OpenSSH per-connection server daemon (147.75.109.163:48560). Jan 29 12:05:56.640228 sshd[4068]: Accepted publickey for core from 147.75.109.163 port 48560 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:05:56.642097 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:56.648475 systemd-logind[1456]: New session 10 of user core. Jan 29 12:05:56.658531 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:05:56.923209 sshd[4068]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:56.928218 systemd[1]: sshd@10-10.128.0.18:22-147.75.109.163:48560.service: Deactivated successfully. Jan 29 12:05:56.930720 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:05:56.932904 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:05:56.934824 systemd-logind[1456]: Removed session 10. Jan 29 12:06:01.979714 systemd[1]: Started sshd@11-10.128.0.18:22-147.75.109.163:55806.service - OpenSSH per-connection server daemon (147.75.109.163:55806). Jan 29 12:06:02.271921 sshd[4082]: Accepted publickey for core from 147.75.109.163 port 55806 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:02.273850 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:02.280522 systemd-logind[1456]: New session 11 of user core. Jan 29 12:06:02.284526 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:06:02.555892 sshd[4082]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:02.561836 systemd[1]: sshd@11-10.128.0.18:22-147.75.109.163:55806.service: Deactivated successfully. Jan 29 12:06:02.564944 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:06:02.567142 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:06:02.568855 systemd-logind[1456]: Removed session 11. Jan 29 12:06:07.614215 systemd[1]: Started sshd@12-10.128.0.18:22-147.75.109.163:41936.service - OpenSSH per-connection server daemon (147.75.109.163:41936). Jan 29 12:06:07.912662 sshd[4098]: Accepted publickey for core from 147.75.109.163 port 41936 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:07.914332 sshd[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:07.920977 systemd-logind[1456]: New session 12 of user core. Jan 29 12:06:07.929507 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:06:08.195421 sshd[4098]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:08.200214 systemd[1]: sshd@12-10.128.0.18:22-147.75.109.163:41936.service: Deactivated successfully. Jan 29 12:06:08.203025 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:06:08.205142 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:06:08.207091 systemd-logind[1456]: Removed session 12. Jan 29 12:06:08.254677 systemd[1]: Started sshd@13-10.128.0.18:22-147.75.109.163:41948.service - OpenSSH per-connection server daemon (147.75.109.163:41948). 
Jan 29 12:06:08.546434 sshd[4112]: Accepted publickey for core from 147.75.109.163 port 41948 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:08.548415 sshd[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:08.554947 systemd-logind[1456]: New session 13 of user core. Jan 29 12:06:08.559547 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:06:08.873958 sshd[4112]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:08.879002 systemd[1]: sshd@13-10.128.0.18:22-147.75.109.163:41948.service: Deactivated successfully. Jan 29 12:06:08.881880 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:06:08.884141 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:06:08.885976 systemd-logind[1456]: Removed session 13. Jan 29 12:06:08.930724 systemd[1]: Started sshd@14-10.128.0.18:22-147.75.109.163:41956.service - OpenSSH per-connection server daemon (147.75.109.163:41956). Jan 29 12:06:09.230755 sshd[4123]: Accepted publickey for core from 147.75.109.163 port 41956 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:09.232770 sshd[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:09.239629 systemd-logind[1456]: New session 14 of user core. Jan 29 12:06:09.243522 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:06:09.517933 sshd[4123]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:09.522159 systemd[1]: sshd@14-10.128.0.18:22-147.75.109.163:41956.service: Deactivated successfully. Jan 29 12:06:09.525049 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:06:09.527253 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:06:09.529028 systemd-logind[1456]: Removed session 14. Jan 29 12:06:14.576771 systemd[1]: Started sshd@15-10.128.0.18:22-147.75.109.163:41962.service - OpenSSH per-connection server daemon (147.75.109.163:41962). Jan 29 12:06:14.865389 sshd[4137]: Accepted publickey for core from 147.75.109.163 port 41962 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:14.867415 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:14.874091 systemd-logind[1456]: New session 15 of user core. Jan 29 12:06:14.881502 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:06:15.150600 sshd[4137]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:15.155419 systemd[1]: sshd@15-10.128.0.18:22-147.75.109.163:41962.service: Deactivated successfully. Jan 29 12:06:15.158439 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:06:15.160684 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:06:15.162167 systemd-logind[1456]: Removed session 15. Jan 29 12:06:20.211763 systemd[1]: Started sshd@16-10.128.0.18:22-147.75.109.163:37102.service - OpenSSH per-connection server daemon (147.75.109.163:37102). Jan 29 12:06:20.505610 sshd[4150]: Accepted publickey for core from 147.75.109.163 port 37102 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:20.507564 sshd[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:20.513420 systemd-logind[1456]: New session 16 of user core. Jan 29 12:06:20.519554 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 29 12:06:20.789274 sshd[4150]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:20.794365 systemd[1]: sshd@16-10.128.0.18:22-147.75.109.163:37102.service: Deactivated successfully. Jan 29 12:06:20.797219 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:06:20.799396 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:06:20.800752 systemd-logind[1456]: Removed session 16. Jan 29 12:06:20.847674 systemd[1]: Started sshd@17-10.128.0.18:22-147.75.109.163:37114.service - OpenSSH per-connection server daemon (147.75.109.163:37114). Jan 29 12:06:21.145201 sshd[4162]: Accepted publickey for core from 147.75.109.163 port 37114 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:21.147169 sshd[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:21.153545 systemd-logind[1456]: New session 17 of user core. Jan 29 12:06:21.160528 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:06:21.524513 sshd[4162]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:21.529032 systemd[1]: sshd@17-10.128.0.18:22-147.75.109.163:37114.service: Deactivated successfully. Jan 29 12:06:21.531614 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:06:21.533855 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:06:21.535494 systemd-logind[1456]: Removed session 17. Jan 29 12:06:21.578901 systemd[1]: Started sshd@18-10.128.0.18:22-147.75.109.163:37118.service - OpenSSH per-connection server daemon (147.75.109.163:37118). Jan 29 12:06:21.872032 sshd[4173]: Accepted publickey for core from 147.75.109.163 port 37118 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:21.873530 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:21.880690 systemd-logind[1456]: New session 18 of user core. Jan 29 12:06:21.886511 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:06:23.669004 sshd[4173]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:23.674574 systemd[1]: sshd@18-10.128.0.18:22-147.75.109.163:37118.service: Deactivated successfully. Jan 29 12:06:23.679383 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:06:23.680597 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:06:23.683849 systemd-logind[1456]: Removed session 18. Jan 29 12:06:23.722654 systemd[1]: Started sshd@19-10.128.0.18:22-147.75.109.163:37122.service - OpenSSH per-connection server daemon (147.75.109.163:37122). Jan 29 12:06:24.018727 sshd[4191]: Accepted publickey for core from 147.75.109.163 port 37122 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:24.020762 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:24.028214 systemd-logind[1456]: New session 19 of user core. Jan 29 12:06:24.034537 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:06:24.444215 sshd[4191]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:24.450673 systemd[1]: sshd@19-10.128.0.18:22-147.75.109.163:37122.service: Deactivated successfully. Jan 29 12:06:24.453949 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:06:24.455063 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:06:24.456772 systemd-logind[1456]: Removed session 19. 
Jan 29 12:06:24.504058 systemd[1]: Started sshd@20-10.128.0.18:22-147.75.109.163:37124.service - OpenSSH per-connection server daemon (147.75.109.163:37124). Jan 29 12:06:24.795740 sshd[4202]: Accepted publickey for core from 147.75.109.163 port 37124 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:24.797619 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:24.804016 systemd-logind[1456]: New session 20 of user core. Jan 29 12:06:24.810534 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:06:25.074996 sshd[4202]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:25.079782 systemd[1]: sshd@20-10.128.0.18:22-147.75.109.163:37124.service: Deactivated successfully. Jan 29 12:06:25.082884 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:06:25.085071 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:06:25.086710 systemd-logind[1456]: Removed session 20. Jan 29 12:06:30.133958 systemd[1]: Started sshd@21-10.128.0.18:22-147.75.109.163:49290.service - OpenSSH per-connection server daemon (147.75.109.163:49290). Jan 29 12:06:30.428447 sshd[4216]: Accepted publickey for core from 147.75.109.163 port 49290 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:30.430346 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:30.436875 systemd-logind[1456]: New session 21 of user core. Jan 29 12:06:30.440564 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:06:30.715138 sshd[4216]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:30.719608 systemd[1]: sshd@21-10.128.0.18:22-147.75.109.163:49290.service: Deactivated successfully. Jan 29 12:06:30.722223 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:06:30.724340 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:06:30.725999 systemd-logind[1456]: Removed session 21. Jan 29 12:06:35.764958 systemd[1]: Started sshd@22-10.128.0.18:22-147.75.109.163:49296.service - OpenSSH per-connection server daemon (147.75.109.163:49296). Jan 29 12:06:36.058647 sshd[4236]: Accepted publickey for core from 147.75.109.163 port 49296 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:36.061000 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:36.067134 systemd-logind[1456]: New session 22 of user core. Jan 29 12:06:36.078526 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:06:36.342395 sshd[4236]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:36.347615 systemd[1]: sshd@22-10.128.0.18:22-147.75.109.163:49296.service: Deactivated successfully. Jan 29 12:06:36.350832 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:06:36.354229 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit. Jan 29 12:06:36.356149 systemd-logind[1456]: Removed session 22. Jan 29 12:06:41.401721 systemd[1]: Started sshd@23-10.128.0.18:22-147.75.109.163:38860.service - OpenSSH per-connection server daemon (147.75.109.163:38860). 
Jan 29 12:06:41.698169 sshd[4248]: Accepted publickey for core from 147.75.109.163 port 38860 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:41.700186 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:41.706796 systemd-logind[1456]: New session 23 of user core. Jan 29 12:06:41.711511 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 12:06:41.981841 sshd[4248]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:41.986912 systemd[1]: sshd@23-10.128.0.18:22-147.75.109.163:38860.service: Deactivated successfully. Jan 29 12:06:41.989925 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 12:06:41.992015 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit. Jan 29 12:06:41.993816 systemd-logind[1456]: Removed session 23. Jan 29 12:06:47.037705 systemd[1]: Started sshd@24-10.128.0.18:22-147.75.109.163:38874.service - OpenSSH per-connection server daemon (147.75.109.163:38874). Jan 29 12:06:47.335500 sshd[4260]: Accepted publickey for core from 147.75.109.163 port 38874 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:47.337523 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:47.344401 systemd-logind[1456]: New session 24 of user core. Jan 29 12:06:47.349514 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 29 12:06:47.619221 sshd[4260]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:47.624101 systemd[1]: sshd@24-10.128.0.18:22-147.75.109.163:38874.service: Deactivated successfully. Jan 29 12:06:47.626812 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 12:06:47.628907 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit. Jan 29 12:06:47.630626 systemd-logind[1456]: Removed session 24. Jan 29 12:06:47.671753 systemd[1]: Started sshd@25-10.128.0.18:22-147.75.109.163:55214.service - OpenSSH per-connection server daemon (147.75.109.163:55214). Jan 29 12:06:47.964964 sshd[4273]: Accepted publickey for core from 147.75.109.163 port 55214 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:47.967039 sshd[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:47.974559 systemd-logind[1456]: New session 25 of user core. Jan 29 12:06:47.980527 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 29 12:06:50.370420 containerd[1467]: time="2025-01-29T12:06:50.370108371Z" level=info msg="StopContainer for \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\" with timeout 30 (s)" Jan 29 12:06:50.372450 containerd[1467]: time="2025-01-29T12:06:50.371835256Z" level=info msg="Stop container \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\" with signal terminated" Jan 29 12:06:50.390202 systemd[1]: run-containerd-runc-k8s.io-b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628-runc.ZX07iG.mount: Deactivated successfully. Jan 29 12:06:50.403860 systemd[1]: cri-containerd-47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7.scope: Deactivated successfully. 
Jan 29 12:06:50.415328 containerd[1467]: time="2025-01-29T12:06:50.414913365Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:06:50.425012 containerd[1467]: time="2025-01-29T12:06:50.424973664Z" level=info msg="StopContainer for \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\" with timeout 2 (s)" Jan 29 12:06:50.426049 containerd[1467]: time="2025-01-29T12:06:50.425250743Z" level=info msg="Stop container \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\" with signal terminated" Jan 29 12:06:50.437362 systemd-networkd[1382]: lxc_health: Link DOWN Jan 29 12:06:50.437375 systemd-networkd[1382]: lxc_health: Lost carrier Jan 29 12:06:50.455748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7-rootfs.mount: Deactivated successfully. Jan 29 12:06:50.463730 systemd[1]: cri-containerd-b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628.scope: Deactivated successfully. Jan 29 12:06:50.464552 systemd[1]: cri-containerd-b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628.scope: Consumed 9.219s CPU time. Jan 29 12:06:50.476881 containerd[1467]: time="2025-01-29T12:06:50.476655021Z" level=info msg="shim disconnected" id=47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7 namespace=k8s.io Jan 29 12:06:50.476881 containerd[1467]: time="2025-01-29T12:06:50.476723209Z" level=warning msg="cleaning up after shim disconnected" id=47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7 namespace=k8s.io Jan 29 12:06:50.476881 containerd[1467]: time="2025-01-29T12:06:50.476747661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:06:50.512361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628-rootfs.mount: Deactivated successfully. 
Jan 29 12:06:50.514934 containerd[1467]: time="2025-01-29T12:06:50.514849524Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:06:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:06:50.517720 containerd[1467]: time="2025-01-29T12:06:50.517635533Z" level=info msg="shim disconnected" id=b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628 namespace=k8s.io Jan 29 12:06:50.517720 containerd[1467]: time="2025-01-29T12:06:50.517702286Z" level=warning msg="cleaning up after shim disconnected" id=b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628 namespace=k8s.io Jan 29 12:06:50.517720 containerd[1467]: time="2025-01-29T12:06:50.517717614Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:06:50.518640 containerd[1467]: time="2025-01-29T12:06:50.518360310Z" level=info msg="StopContainer for \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\" returns successfully" Jan 29 12:06:50.519569 containerd[1467]: time="2025-01-29T12:06:50.519394049Z" level=info msg="StopPodSandbox for \"e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b\"" Jan 29 12:06:50.519569 containerd[1467]: time="2025-01-29T12:06:50.519456319Z" level=info msg="Container to stop \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:06:50.526497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b-shm.mount: Deactivated successfully. Jan 29 12:06:50.535814 systemd[1]: cri-containerd-e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b.scope: Deactivated successfully. 
Jan 29 12:06:50.551372 containerd[1467]: time="2025-01-29T12:06:50.551097251Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:06:50Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:06:50.557375 containerd[1467]: time="2025-01-29T12:06:50.557341141Z" level=info msg="StopContainer for \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\" returns successfully" Jan 29 12:06:50.558574 containerd[1467]: time="2025-01-29T12:06:50.558132722Z" level=info msg="StopPodSandbox for \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\"" Jan 29 12:06:50.558574 containerd[1467]: time="2025-01-29T12:06:50.558196361Z" level=info msg="Container to stop \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:06:50.558574 containerd[1467]: time="2025-01-29T12:06:50.558219923Z" level=info msg="Container to stop \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:06:50.558574 containerd[1467]: time="2025-01-29T12:06:50.558239559Z" level=info msg="Container to stop \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:06:50.558574 containerd[1467]: time="2025-01-29T12:06:50.558260053Z" level=info msg="Container to stop \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:06:50.558574 containerd[1467]: time="2025-01-29T12:06:50.558281892Z" level=info msg="Container to stop \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:06:50.569363 systemd[1]: cri-containerd-d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494.scope: Deactivated successfully. 
Jan 29 12:06:50.581581 containerd[1467]: time="2025-01-29T12:06:50.581518263Z" level=info msg="shim disconnected" id=e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b namespace=k8s.io
Jan 29 12:06:50.581933 containerd[1467]: time="2025-01-29T12:06:50.581582001Z" level=warning msg="cleaning up after shim disconnected" id=e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b namespace=k8s.io
Jan 29 12:06:50.581933 containerd[1467]: time="2025-01-29T12:06:50.581597485Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:06:50.607698 containerd[1467]: time="2025-01-29T12:06:50.607500672Z" level=info msg="TearDown network for sandbox \"e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b\" successfully"
Jan 29 12:06:50.607698 containerd[1467]: time="2025-01-29T12:06:50.607551584Z" level=info msg="StopPodSandbox for \"e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b\" returns successfully"
Jan 29 12:06:50.608788 containerd[1467]: time="2025-01-29T12:06:50.608528514Z" level=info msg="shim disconnected" id=d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494 namespace=k8s.io
Jan 29 12:06:50.608788 containerd[1467]: time="2025-01-29T12:06:50.608593042Z" level=warning msg="cleaning up after shim disconnected" id=d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494 namespace=k8s.io
Jan 29 12:06:50.608788 containerd[1467]: time="2025-01-29T12:06:50.608607575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:06:50.634839 containerd[1467]: time="2025-01-29T12:06:50.634229079Z" level=info msg="TearDown network for sandbox \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" successfully"
Jan 29 12:06:50.634839 containerd[1467]: time="2025-01-29T12:06:50.634411649Z" level=info msg="StopPodSandbox for \"d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494\" returns successfully"
Jan 29 12:06:50.697953 kubelet[2667]: I0129 12:06:50.697376 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cni-path\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.697953 kubelet[2667]: I0129 12:06:50.697449 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-clustermesh-secrets\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.697953 kubelet[2667]: I0129 12:06:50.697476 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-run\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.697953 kubelet[2667]: I0129 12:06:50.697479 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cni-path" (OuterVolumeSpecName: "cni-path") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.697953 kubelet[2667]: I0129 12:06:50.697525 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.697953 kubelet[2667]: I0129 12:06:50.697502 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-cgroup\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.698906 kubelet[2667]: I0129 12:06:50.697569 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hubble-tls\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.698906 kubelet[2667]: I0129 12:06:50.697596 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-lib-modules\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.698906 kubelet[2667]: I0129 12:06:50.697625 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e43c240b-1052-40ac-919c-34be121e1e40-cilium-config-path\") pod \"e43c240b-1052-40ac-919c-34be121e1e40\" (UID: \"e43c240b-1052-40ac-919c-34be121e1e40\") "
Jan 29 12:06:50.698906 kubelet[2667]: I0129 12:06:50.697654 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn29x\" (UniqueName: \"kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-kube-api-access-kn29x\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.698906 kubelet[2667]: I0129 12:06:50.697679 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-xtables-lock\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.698906 kubelet[2667]: I0129 12:06:50.697704 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-etc-cni-netd\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.699227 kubelet[2667]: I0129 12:06:50.697731 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md8ng\" (UniqueName: \"kubernetes.io/projected/e43c240b-1052-40ac-919c-34be121e1e40-kube-api-access-md8ng\") pod \"e43c240b-1052-40ac-919c-34be121e1e40\" (UID: \"e43c240b-1052-40ac-919c-34be121e1e40\") "
Jan 29 12:06:50.699227 kubelet[2667]: I0129 12:06:50.697758 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-bpf-maps\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.699227 kubelet[2667]: I0129 12:06:50.697783 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-host-proc-sys-net\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.699227 kubelet[2667]: I0129 12:06:50.697807 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-host-proc-sys-kernel\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.699227 kubelet[2667]: I0129 12:06:50.697836 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-config-path\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.699227 kubelet[2667]: I0129 12:06:50.697861 2667 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hostproc\") pod \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\" (UID: \"fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f\") "
Jan 29 12:06:50.699584 kubelet[2667]: I0129 12:06:50.697915 2667 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-cgroup\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.699584 kubelet[2667]: I0129 12:06:50.697936 2667 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cni-path\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.699584 kubelet[2667]: I0129 12:06:50.697966 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hostproc" (OuterVolumeSpecName: "hostproc") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.699584 kubelet[2667]: I0129 12:06:50.697995 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.702857 kubelet[2667]: I0129 12:06:50.701488 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.702857 kubelet[2667]: I0129 12:06:50.701571 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.702857 kubelet[2667]: I0129 12:06:50.701597 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.702857 kubelet[2667]: I0129 12:06:50.701649 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.706422 kubelet[2667]: I0129 12:06:50.705869 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.706422 kubelet[2667]: I0129 12:06:50.705926 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:06:50.707508 kubelet[2667]: I0129 12:06:50.707476 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:06:50.709541 kubelet[2667]: I0129 12:06:50.709507 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:06:50.710289 kubelet[2667]: I0129 12:06:50.710257 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e43c240b-1052-40ac-919c-34be121e1e40-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e43c240b-1052-40ac-919c-34be121e1e40" (UID: "e43c240b-1052-40ac-919c-34be121e1e40"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:06:50.712164 kubelet[2667]: I0129 12:06:50.712129 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-kube-api-access-kn29x" (OuterVolumeSpecName: "kube-api-access-kn29x") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "kube-api-access-kn29x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:06:50.712369 kubelet[2667]: I0129 12:06:50.712286 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43c240b-1052-40ac-919c-34be121e1e40-kube-api-access-md8ng" (OuterVolumeSpecName: "kube-api-access-md8ng") pod "e43c240b-1052-40ac-919c-34be121e1e40" (UID: "e43c240b-1052-40ac-919c-34be121e1e40"). InnerVolumeSpecName "kube-api-access-md8ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:06:50.712636 kubelet[2667]: I0129 12:06:50.712602 2667 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" (UID: "fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:06:50.798883 kubelet[2667]: I0129 12:06:50.798805 2667 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-lib-modules\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.798883 kubelet[2667]: I0129 12:06:50.798864 2667 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kn29x\" (UniqueName: \"kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-kube-api-access-kn29x\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.798883 kubelet[2667]: I0129 12:06:50.798885 2667 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-xtables-lock\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799245 kubelet[2667]: I0129 12:06:50.798904 2667 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-etc-cni-netd\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799245 kubelet[2667]: I0129 12:06:50.798922 2667 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-md8ng\" (UniqueName: \"kubernetes.io/projected/e43c240b-1052-40ac-919c-34be121e1e40-kube-api-access-md8ng\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799245 kubelet[2667]: I0129 12:06:50.798940 2667 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e43c240b-1052-40ac-919c-34be121e1e40-cilium-config-path\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799245 kubelet[2667]: I0129 12:06:50.798956 2667 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-bpf-maps\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799245 kubelet[2667]: I0129 12:06:50.798972 2667 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-host-proc-sys-kernel\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799245 kubelet[2667]: I0129 12:06:50.798992 2667 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-config-path\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799245 kubelet[2667]: I0129 12:06:50.799010 2667 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hostproc\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799586 kubelet[2667]: I0129 12:06:50.799026 2667 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-host-proc-sys-net\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799586 kubelet[2667]: I0129 12:06:50.799043 2667 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-hubble-tls\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799586 kubelet[2667]: I0129 12:06:50.799061 2667 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-clustermesh-secrets\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.799586 kubelet[2667]: I0129 12:06:50.799077 2667 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f-cilium-run\") on node \"ci-4081-3-0-8aea487ea02063b715a0.c.flatcar-212911.internal\" DevicePath \"\""
Jan 29 12:06:50.805957 kubelet[2667]: I0129 12:06:50.805827 2667 scope.go:117] "RemoveContainer" containerID="b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628"
Jan 29 12:06:50.811670 containerd[1467]: time="2025-01-29T12:06:50.811613694Z" level=info msg="RemoveContainer for \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\""
Jan 29 12:06:50.820275 systemd[1]: Removed slice kubepods-burstable-podfbdccb3d_4a0c_4fbe_a4de_7fb0e056240f.slice - libcontainer container kubepods-burstable-podfbdccb3d_4a0c_4fbe_a4de_7fb0e056240f.slice.
Jan 29 12:06:50.820832 systemd[1]: kubepods-burstable-podfbdccb3d_4a0c_4fbe_a4de_7fb0e056240f.slice: Consumed 9.326s CPU time.
Jan 29 12:06:50.824160 containerd[1467]: time="2025-01-29T12:06:50.824086679Z" level=info msg="RemoveContainer for \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\" returns successfully" Jan 29 12:06:50.826974 kubelet[2667]: I0129 12:06:50.826439 2667 scope.go:117] "RemoveContainer" containerID="6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24" Jan 29 12:06:50.829726 containerd[1467]: time="2025-01-29T12:06:50.829354491Z" level=info msg="RemoveContainer for \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\"" Jan 29 12:06:50.834950 containerd[1467]: time="2025-01-29T12:06:50.834843088Z" level=info msg="RemoveContainer for \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\" returns successfully" Jan 29 12:06:50.838848 kubelet[2667]: I0129 12:06:50.838819 2667 scope.go:117] "RemoveContainer" containerID="4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99" Jan 29 12:06:50.843039 containerd[1467]: time="2025-01-29T12:06:50.842998414Z" level=info msg="RemoveContainer for \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\"" Jan 29 12:06:50.844754 systemd[1]: Removed slice kubepods-besteffort-pode43c240b_1052_40ac_919c_34be121e1e40.slice - libcontainer container kubepods-besteffort-pode43c240b_1052_40ac_919c_34be121e1e40.slice. Jan 29 12:06:50.852627 containerd[1467]: time="2025-01-29T12:06:50.852458996Z" level=info msg="RemoveContainer for \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\" returns successfully" Jan 29 12:06:50.853516 kubelet[2667]: I0129 12:06:50.853409 2667 scope.go:117] "RemoveContainer" containerID="c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca" Jan 29 12:06:50.855561 containerd[1467]: time="2025-01-29T12:06:50.855476377Z" level=info msg="RemoveContainer for \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\"" Jan 29 12:06:50.860099 containerd[1467]: time="2025-01-29T12:06:50.860063926Z" level=info msg="RemoveContainer for \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\" returns successfully" Jan 29 12:06:50.860372 kubelet[2667]: I0129 12:06:50.860328 2667 scope.go:117] "RemoveContainer" containerID="5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220" Jan 29 12:06:50.862523 containerd[1467]: time="2025-01-29T12:06:50.862176574Z" level=info msg="RemoveContainer for \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\"" Jan 29 12:06:50.866997 containerd[1467]: time="2025-01-29T12:06:50.866953235Z" level=info msg="RemoveContainer for \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\" returns successfully" Jan 29 12:06:50.867455 kubelet[2667]: I0129 12:06:50.867297 2667 scope.go:117] "RemoveContainer" containerID="b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628" Jan 29 12:06:50.867703 containerd[1467]: time="2025-01-29T12:06:50.867656436Z" level=error msg="ContainerStatus for \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\": not found" Jan 29 12:06:50.867879 kubelet[2667]: E0129 12:06:50.867843 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\": not found" 
containerID="b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628" Jan 29 12:06:50.868009 kubelet[2667]: I0129 12:06:50.867889 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628"} err="failed to get container status \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\": rpc error: code = NotFound desc = an error occurred when try to find container \"b996d1cc041622a8c5094820d301d76226c923a72b1a0ee07272b81344060628\": not found" Jan 29 12:06:50.868009 kubelet[2667]: I0129 12:06:50.868001 2667 scope.go:117] "RemoveContainer" containerID="6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24" Jan 29 12:06:50.868393 containerd[1467]: time="2025-01-29T12:06:50.868343465Z" level=error msg="ContainerStatus for \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\": not found" Jan 29 12:06:50.868605 kubelet[2667]: E0129 12:06:50.868573 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\": not found" containerID="6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24" Jan 29 12:06:50.868705 kubelet[2667]: I0129 12:06:50.868611 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24"} err="failed to get container status \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\": rpc error: code = NotFound desc = an error occurred when try to find container \"6a6c1fe5b61df61a72b7c14d23ddb0eb16df0a93bf8cf18e2fd7723be9c3cd24\": not found" Jan 29 12:06:50.868705 kubelet[2667]: I0129 12:06:50.868647 2667 scope.go:117] "RemoveContainer" containerID="4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99" Jan 29 12:06:50.869231 containerd[1467]: time="2025-01-29T12:06:50.869172056Z" level=error msg="ContainerStatus for \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\": not found" Jan 29 12:06:50.869716 kubelet[2667]: E0129 12:06:50.869565 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\": not found" containerID="4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99" Jan 29 12:06:50.869716 kubelet[2667]: I0129 12:06:50.869616 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99"} err="failed to get container status \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e820178082a69a1e293990835c1a3c5504ddcb521e2b79f2199024af9f74f99\": not found" Jan 29 12:06:50.870554 kubelet[2667]: I0129 12:06:50.870410 2667 scope.go:117] "RemoveContainer" 
containerID="c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca" Jan 29 12:06:50.870807 containerd[1467]: time="2025-01-29T12:06:50.870702326Z" level=error msg="ContainerStatus for \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\": not found" Jan 29 12:06:50.871505 kubelet[2667]: E0129 12:06:50.871244 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\": not found" containerID="c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca" Jan 29 12:06:50.871505 kubelet[2667]: I0129 12:06:50.871280 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca"} err="failed to get container status \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\": rpc error: code = NotFound desc = an error occurred when try to find container \"c1b8caf03a14ad845cf8377884596f27925ad410ba5a162f2bf7cdf8d35382ca\": not found" Jan 29 12:06:50.871505 kubelet[2667]: I0129 12:06:50.871320 2667 scope.go:117] "RemoveContainer" containerID="5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220" Jan 29 12:06:50.872770 kubelet[2667]: E0129 12:06:50.871887 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\": not found" containerID="5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220" Jan 29 12:06:50.872770 kubelet[2667]: I0129 12:06:50.871938 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220"} err="failed to get container status \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\": rpc error: code = NotFound desc = an error occurred when try to find container \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\": not found" Jan 29 12:06:50.872770 kubelet[2667]: I0129 12:06:50.871964 2667 scope.go:117] "RemoveContainer" containerID="47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7" Jan 29 12:06:50.872958 containerd[1467]: time="2025-01-29T12:06:50.871578109Z" level=error msg="ContainerStatus for \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5727503964734a0253add2e6a6cd1926dd8be03794e59b8738e8a29d6c0cb220\": not found" Jan 29 12:06:50.873565 containerd[1467]: time="2025-01-29T12:06:50.873520619Z" level=info msg="RemoveContainer for \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\"" Jan 29 12:06:50.877664 containerd[1467]: time="2025-01-29T12:06:50.877619021Z" level=info msg="RemoveContainer for \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\" returns successfully" Jan 29 12:06:50.878039 kubelet[2667]: I0129 12:06:50.877912 2667 scope.go:117] "RemoveContainer" containerID="47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7" Jan 29 12:06:50.878363 containerd[1467]: time="2025-01-29T12:06:50.878244700Z" 
level=error msg="ContainerStatus for \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\": not found" Jan 29 12:06:50.878613 kubelet[2667]: E0129 12:06:50.878444 2667 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\": not found" containerID="47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7" Jan 29 12:06:50.878613 kubelet[2667]: I0129 12:06:50.878476 2667 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7"} err="failed to get container status \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\": rpc error: code = NotFound desc = an error occurred when try to find container \"47d6ba67cc36eb4c142d2f32b6f7d0795949e2121acc3137f315084b6af3b7c7\": not found" Jan 29 12:06:51.325264 kubelet[2667]: I0129 12:06:51.325200 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43c240b-1052-40ac-919c-34be121e1e40" path="/var/lib/kubelet/pods/e43c240b-1052-40ac-919c-34be121e1e40/volumes" Jan 29 12:06:51.326033 kubelet[2667]: I0129 12:06:51.325979 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" path="/var/lib/kubelet/pods/fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f/volumes" Jan 29 12:06:51.375029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494-rootfs.mount: Deactivated successfully. Jan 29 12:06:51.375190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d147079fa313902fa99899c18dca8babe91c2abc328649b51e139b38b954b494-shm.mount: Deactivated successfully. Jan 29 12:06:51.375332 systemd[1]: var-lib-kubelet-pods-fbdccb3d\x2d4a0c\x2d4fbe\x2da4de\x2d7fb0e056240f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 12:06:51.375537 systemd[1]: var-lib-kubelet-pods-fbdccb3d\x2d4a0c\x2d4fbe\x2da4de\x2d7fb0e056240f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 12:06:51.375740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e57caa1501781861476f077e589206a7505f9a600aa5193e32e3d539bcaad18b-rootfs.mount: Deactivated successfully. Jan 29 12:06:51.375893 systemd[1]: var-lib-kubelet-pods-e43c240b\x2d1052\x2d40ac\x2d919c\x2d34be121e1e40-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmd8ng.mount: Deactivated successfully. Jan 29 12:06:51.376031 systemd[1]: var-lib-kubelet-pods-fbdccb3d\x2d4a0c\x2d4fbe\x2da4de\x2d7fb0e056240f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkn29x.mount: Deactivated successfully. Jan 29 12:06:52.317176 sshd[4273]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:52.323207 systemd[1]: sshd@25-10.128.0.18:22-147.75.109.163:55214.service: Deactivated successfully. Jan 29 12:06:52.325632 systemd[1]: session-25.scope: Deactivated successfully. Jan 29 12:06:52.325907 systemd[1]: session-25.scope: Consumed 1.610s CPU time. Jan 29 12:06:52.326901 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit. Jan 29 12:06:52.328812 systemd-logind[1456]: Removed session 25. 
Jan 29 12:06:52.378871 systemd[1]: Started sshd@26-10.128.0.18:22-147.75.109.163:55228.service - OpenSSH per-connection server daemon (147.75.109.163:55228). Jan 29 12:06:52.662365 sshd[4437]: Accepted publickey for core from 147.75.109.163 port 55228 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:52.664266 sshd[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:52.671440 systemd-logind[1456]: New session 26 of user core. Jan 29 12:06:52.676504 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 29 12:06:52.759923 ntpd[1435]: Deleting interface #12 lxc_health, fe80::b05f:60ff:fe1a:273d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Jan 29 12:06:52.760411 ntpd[1435]: 29 Jan 12:06:52 ntpd[1435]: Deleting interface #12 lxc_health, fe80::b05f:60ff:fe1a:273d%8#123, interface stats: received=0, sent=0, dropped=0, active_time=81 secs Jan 29 12:06:53.740137 kubelet[2667]: I0129 12:06:53.740040 2667 topology_manager.go:215] "Topology Admit Handler" podUID="fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4" podNamespace="kube-system" podName="cilium-t64xn" Jan 29 12:06:53.740791 kubelet[2667]: E0129 12:06:53.740185 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" containerName="mount-cgroup" Jan 29 12:06:53.740791 kubelet[2667]: E0129 12:06:53.740202 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" containerName="apply-sysctl-overwrites" Jan 29 12:06:53.740791 kubelet[2667]: E0129 12:06:53.740213 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" containerName="cilium-agent" Jan 29 12:06:53.740791 kubelet[2667]: E0129 12:06:53.740223 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e43c240b-1052-40ac-919c-34be121e1e40" containerName="cilium-operator" Jan 29 12:06:53.740791 kubelet[2667]: E0129 12:06:53.740234 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" containerName="mount-bpf-fs" Jan 29 12:06:53.740791 kubelet[2667]: E0129 12:06:53.740243 2667 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" containerName="clean-cilium-state" Jan 29 12:06:53.740791 kubelet[2667]: I0129 12:06:53.740282 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43c240b-1052-40ac-919c-34be121e1e40" containerName="cilium-operator" Jan 29 12:06:53.740791 kubelet[2667]: I0129 12:06:53.740293 2667 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbdccb3d-4a0c-4fbe-a4de-7fb0e056240f" containerName="cilium-agent" Jan 29 12:06:53.748169 sshd[4437]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:53.765189 systemd[1]: sshd@26-10.128.0.18:22-147.75.109.163:55228.service: Deactivated successfully. Jan 29 12:06:53.771695 systemd[1]: session-26.scope: Deactivated successfully. Jan 29 12:06:53.777510 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit. Jan 29 12:06:53.781509 systemd[1]: Created slice kubepods-burstable-podfb0aaa11_0a28_4dc7_8ce7_ffea3f218dc4.slice - libcontainer container kubepods-burstable-podfb0aaa11_0a28_4dc7_8ce7_ffea3f218dc4.slice. Jan 29 12:06:53.802579 systemd-logind[1456]: Removed session 26. 
Jan 29 12:06:53.813212 systemd[1]: Started sshd@27-10.128.0.18:22-147.75.109.163:55234.service - OpenSSH per-connection server daemon (147.75.109.163:55234). Jan 29 12:06:53.818333 kubelet[2667]: I0129 12:06:53.817988 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-cilium-run\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818333 kubelet[2667]: I0129 12:06:53.818039 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-clustermesh-secrets\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818333 kubelet[2667]: I0129 12:06:53.818074 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnxhk\" (UniqueName: \"kubernetes.io/projected/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-kube-api-access-cnxhk\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818333 kubelet[2667]: I0129 12:06:53.818100 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-cilium-cgroup\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818333 kubelet[2667]: I0129 12:06:53.818127 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-lib-modules\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818333 kubelet[2667]: I0129 12:06:53.818153 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-etc-cni-netd\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818703 kubelet[2667]: I0129 12:06:53.818181 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-bpf-maps\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818703 kubelet[2667]: I0129 12:06:53.818213 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-cilium-ipsec-secrets\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818703 kubelet[2667]: I0129 12:06:53.818241 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-host-proc-sys-kernel\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 
12:06:53.818703 kubelet[2667]: I0129 12:06:53.818266 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-cilium-config-path\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818703 kubelet[2667]: I0129 12:06:53.818295 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-hostproc\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.818703 kubelet[2667]: I0129 12:06:53.818338 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-cni-path\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.819015 kubelet[2667]: I0129 12:06:53.818368 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-host-proc-sys-net\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.819015 kubelet[2667]: I0129 12:06:53.818393 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-hubble-tls\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:53.819015 kubelet[2667]: I0129 12:06:53.818418 2667 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4-xtables-lock\") pod \"cilium-t64xn\" (UID: \"fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4\") " pod="kube-system/cilium-t64xn" Jan 29 12:06:54.104153 containerd[1467]: time="2025-01-29T12:06:54.104102083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t64xn,Uid:fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4,Namespace:kube-system,Attempt:0,}" Jan 29 12:06:54.121241 sshd[4449]: Accepted publickey for core from 147.75.109.163 port 55234 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:54.126854 sshd[4449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:54.135286 systemd-logind[1456]: New session 27 of user core. Jan 29 12:06:54.140526 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 29 12:06:54.148682 containerd[1467]: time="2025-01-29T12:06:54.148572323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:06:54.149977 containerd[1467]: time="2025-01-29T12:06:54.149768146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:06:54.150429 containerd[1467]: time="2025-01-29T12:06:54.150366583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:54.151217 containerd[1467]: time="2025-01-29T12:06:54.150556913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:06:54.175572 systemd[1]: Started cri-containerd-ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852.scope - libcontainer container ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852. Jan 29 12:06:54.207644 containerd[1467]: time="2025-01-29T12:06:54.207589856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t64xn,Uid:fb0aaa11-0a28-4dc7-8ce7-ffea3f218dc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\"" Jan 29 12:06:54.213265 containerd[1467]: time="2025-01-29T12:06:54.213220624Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:06:54.229674 containerd[1467]: time="2025-01-29T12:06:54.229616057Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19575aca5652e98373d12d79b60582364ae3f9a83d47e3ac8c44712a24d1bf96\"" Jan 29 12:06:54.230333 containerd[1467]: time="2025-01-29T12:06:54.230128822Z" level=info msg="StartContainer for \"19575aca5652e98373d12d79b60582364ae3f9a83d47e3ac8c44712a24d1bf96\"" Jan 29 12:06:54.263521 systemd[1]: Started cri-containerd-19575aca5652e98373d12d79b60582364ae3f9a83d47e3ac8c44712a24d1bf96.scope - libcontainer container 19575aca5652e98373d12d79b60582364ae3f9a83d47e3ac8c44712a24d1bf96. Jan 29 12:06:54.301060 containerd[1467]: time="2025-01-29T12:06:54.300964003Z" level=info msg="StartContainer for \"19575aca5652e98373d12d79b60582364ae3f9a83d47e3ac8c44712a24d1bf96\" returns successfully" Jan 29 12:06:54.311139 systemd[1]: cri-containerd-19575aca5652e98373d12d79b60582364ae3f9a83d47e3ac8c44712a24d1bf96.scope: Deactivated successfully. Jan 29 12:06:54.333864 sshd[4449]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:54.342999 systemd[1]: sshd@27-10.128.0.18:22-147.75.109.163:55234.service: Deactivated successfully. Jan 29 12:06:54.345738 systemd[1]: session-27.scope: Deactivated successfully. Jan 29 12:06:54.347169 systemd-logind[1456]: Session 27 logged out. Waiting for processes to exit. Jan 29 12:06:54.348956 systemd-logind[1456]: Removed session 27. Jan 29 12:06:54.353219 containerd[1467]: time="2025-01-29T12:06:54.352948822Z" level=info msg="shim disconnected" id=19575aca5652e98373d12d79b60582364ae3f9a83d47e3ac8c44712a24d1bf96 namespace=k8s.io Jan 29 12:06:54.353219 containerd[1467]: time="2025-01-29T12:06:54.353013778Z" level=warning msg="cleaning up after shim disconnected" id=19575aca5652e98373d12d79b60582364ae3f9a83d47e3ac8c44712a24d1bf96 namespace=k8s.io Jan 29 12:06:54.353219 containerd[1467]: time="2025-01-29T12:06:54.353031077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:06:54.392710 systemd[1]: Started sshd@28-10.128.0.18:22-147.75.109.163:55244.service - OpenSSH per-connection server daemon (147.75.109.163:55244). 
Jan 29 12:06:54.487364 kubelet[2667]: E0129 12:06:54.487283 2667 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 12:06:54.688929 sshd[4565]: Accepted publickey for core from 147.75.109.163 port 55244 ssh2: RSA SHA256:o3wzruhQrnnLrK/WKthtucnIRobYJRiusEDRL06Fd88 Jan 29 12:06:54.690644 sshd[4565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:54.697412 systemd-logind[1456]: New session 28 of user core. Jan 29 12:06:54.702574 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 29 12:06:54.831809 containerd[1467]: time="2025-01-29T12:06:54.831759457Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:06:54.847596 containerd[1467]: time="2025-01-29T12:06:54.847531482Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c\"" Jan 29 12:06:54.848290 containerd[1467]: time="2025-01-29T12:06:54.848098658Z" level=info msg="StartContainer for \"43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c\"" Jan 29 12:06:54.898547 systemd[1]: Started cri-containerd-43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c.scope - libcontainer container 43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c. Jan 29 12:06:54.953103 containerd[1467]: time="2025-01-29T12:06:54.951905778Z" level=info msg="StartContainer for \"43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c\" returns successfully" Jan 29 12:06:54.963073 systemd[1]: cri-containerd-43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c.scope: Deactivated successfully. Jan 29 12:06:55.003384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c-rootfs.mount: Deactivated successfully. 
Jan 29 12:06:55.007864 containerd[1467]: time="2025-01-29T12:06:55.007482752Z" level=info msg="shim disconnected" id=43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c namespace=k8s.io Jan 29 12:06:55.007864 containerd[1467]: time="2025-01-29T12:06:55.007704032Z" level=warning msg="cleaning up after shim disconnected" id=43491f4bf13e965007e23afe3bd035cfa74aab8bd52685ecfb40dbc5b9b6be7c namespace=k8s.io Jan 29 12:06:55.007864 containerd[1467]: time="2025-01-29T12:06:55.007721934Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:06:55.835842 containerd[1467]: time="2025-01-29T12:06:55.835672675Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:06:55.859335 containerd[1467]: time="2025-01-29T12:06:55.857865455Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769\"" Jan 29 12:06:55.859335 containerd[1467]: time="2025-01-29T12:06:55.858566872Z" level=info msg="StartContainer for \"be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769\"" Jan 29 12:06:55.912523 systemd[1]: Started cri-containerd-be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769.scope - libcontainer container be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769. Jan 29 12:06:55.930047 systemd[1]: run-containerd-runc-k8s.io-be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769-runc.c3Rpgq.mount: Deactivated successfully. Jan 29 12:06:55.954557 containerd[1467]: time="2025-01-29T12:06:55.954393487Z" level=info msg="StartContainer for \"be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769\" returns successfully" Jan 29 12:06:55.958436 systemd[1]: cri-containerd-be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769.scope: Deactivated successfully. Jan 29 12:06:55.991177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769-rootfs.mount: Deactivated successfully. 
Jan 29 12:06:55.994426 containerd[1467]: time="2025-01-29T12:06:55.994336663Z" level=info msg="shim disconnected" id=be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769 namespace=k8s.io Jan 29 12:06:55.994426 containerd[1467]: time="2025-01-29T12:06:55.994408521Z" level=warning msg="cleaning up after shim disconnected" id=be60b2177c51af0b8ba2e62f3a2f1cb487f8e81ee57f16dd3365bb19d15d3769 namespace=k8s.io Jan 29 12:06:55.994426 containerd[1467]: time="2025-01-29T12:06:55.994422930Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:06:56.842839 containerd[1467]: time="2025-01-29T12:06:56.842791376Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:06:56.863785 containerd[1467]: time="2025-01-29T12:06:56.862922687Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570\"" Jan 29 12:06:56.866786 containerd[1467]: time="2025-01-29T12:06:56.864513546Z" level=info msg="StartContainer for \"73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570\"" Jan 29 12:06:56.906526 systemd[1]: Started cri-containerd-73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570.scope - libcontainer container 73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570. Jan 29 12:06:56.940584 systemd[1]: cri-containerd-73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570.scope: Deactivated successfully. Jan 29 12:06:56.948579 containerd[1467]: time="2025-01-29T12:06:56.948445362Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb0aaa11_0a28_4dc7_8ce7_ffea3f218dc4.slice/cri-containerd-73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570.scope/memory.events\": no such file or directory" Jan 29 12:06:56.949496 containerd[1467]: time="2025-01-29T12:06:56.949421365Z" level=info msg="StartContainer for \"73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570\" returns successfully" Jan 29 12:06:56.979552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570-rootfs.mount: Deactivated successfully. Jan 29 12:06:56.981755 containerd[1467]: time="2025-01-29T12:06:56.981677868Z" level=info msg="shim disconnected" id=73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570 namespace=k8s.io Jan 29 12:06:56.981755 containerd[1467]: time="2025-01-29T12:06:56.981747584Z" level=warning msg="cleaning up after shim disconnected" id=73b97f241991e20d507bc7fb8a4480a3a39e50077448895637ed78c5014a3570 namespace=k8s.io Jan 29 12:06:56.982424 containerd[1467]: time="2025-01-29T12:06:56.981763847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:06:57.848418 containerd[1467]: time="2025-01-29T12:06:57.847826949Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:06:57.885224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463531693.mount: Deactivated successfully. 
Jan 29 12:06:57.888781 containerd[1467]: time="2025-01-29T12:06:57.888558155Z" level=info msg="CreateContainer within sandbox \"ba3d6f83110b3c1f868cfd4a3b11f2734a15c507c041aef8027f26cde2b38852\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ab532125d4f6d68e900205e5664ab5e7097c52f86cc02303eeceb87b6042a073\"" Jan 29 12:06:57.890916 containerd[1467]: time="2025-01-29T12:06:57.890129004Z" level=info msg="StartContainer for \"ab532125d4f6d68e900205e5664ab5e7097c52f86cc02303eeceb87b6042a073\"" Jan 29 12:06:57.949543 systemd[1]: Started cri-containerd-ab532125d4f6d68e900205e5664ab5e7097c52f86cc02303eeceb87b6042a073.scope - libcontainer container ab532125d4f6d68e900205e5664ab5e7097c52f86cc02303eeceb87b6042a073. Jan 29 12:06:57.993878 containerd[1467]: time="2025-01-29T12:06:57.993826358Z" level=info msg="StartContainer for \"ab532125d4f6d68e900205e5664ab5e7097c52f86cc02303eeceb87b6042a073\" returns successfully" Jan 29 12:06:58.603475 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 29 12:07:01.348729 systemd[1]: run-containerd-runc-k8s.io-ab532125d4f6d68e900205e5664ab5e7097c52f86cc02303eeceb87b6042a073-runc.LoKYU2.mount: Deactivated successfully. Jan 29 12:07:01.789850 systemd-networkd[1382]: lxc_health: Link UP Jan 29 12:07:01.799486 systemd-networkd[1382]: lxc_health: Gained carrier Jan 29 12:07:02.141193 kubelet[2667]: I0129 12:07:02.140521 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t64xn" podStartSLOduration=9.140499031 podStartE2EDuration="9.140499031s" podCreationTimestamp="2025-01-29 12:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:06:58.870821424 +0000 UTC m=+129.682115709" watchObservedRunningTime="2025-01-29 12:07:02.140499031 +0000 UTC m=+132.951793314" Jan 29 12:07:03.748250 systemd-networkd[1382]: lxc_health: Gained IPv6LL Jan 29 12:07:05.759995 ntpd[1435]: Listen normally on 15 lxc_health [fe80::eceb:b1ff:fe00:110a%14]:123 Jan 29 12:07:05.760669 ntpd[1435]: 29 Jan 12:07:05 ntpd[1435]: Listen normally on 15 lxc_health [fe80::eceb:b1ff:fe00:110a%14]:123 Jan 29 12:07:08.025121 systemd[1]: run-containerd-runc-k8s.io-ab532125d4f6d68e900205e5664ab5e7097c52f86cc02303eeceb87b6042a073-runc.6zGNBV.mount: Deactivated successfully. Jan 29 12:07:08.153080 sshd[4565]: pam_unix(sshd:session): session closed for user core Jan 29 12:07:08.158910 systemd[1]: sshd@28-10.128.0.18:22-147.75.109.163:55244.service: Deactivated successfully. Jan 29 12:07:08.161261 systemd[1]: session-28.scope: Deactivated successfully. Jan 29 12:07:08.162240 systemd-logind[1456]: Session 28 logged out. Waiting for processes to exit. Jan 29 12:07:08.163886 systemd-logind[1456]: Removed session 28.