Feb 13 20:20:39.136600 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:20:39.136651 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:20:39.136673 kernel: BIOS-provided physical RAM map:
Feb 13 20:20:39.136686 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Feb 13 20:20:39.136699 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Feb 13 20:20:39.136712 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Feb 13 20:20:39.136730 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Feb 13 20:20:39.136750 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Feb 13 20:20:39.136764 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Feb 13 20:20:39.136778 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Feb 13 20:20:39.136792 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Feb 13 20:20:39.136806 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Feb 13 20:20:39.136820 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Feb 13 20:20:39.136835 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Feb 13 20:20:39.136856 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Feb 13 20:20:39.136872 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Feb 13 20:20:39.136887 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Feb 13 20:20:39.136904 kernel: NX (Execute Disable) protection: active
Feb 13 20:20:39.136920 kernel: APIC: Static calls initialized
Feb 13 20:20:39.136935 kernel: efi: EFI v2.7 by EDK II
Feb 13 20:20:39.136951 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Feb 13 20:20:39.136967 kernel: SMBIOS 2.4 present.
Feb 13 20:20:39.136983 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Feb 13 20:20:39.136999 kernel: Hypervisor detected: KVM
Feb 13 20:20:39.137019 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:20:39.137034 kernel: kvm-clock: using sched offset of 12292197165 cycles
Feb 13 20:20:39.137053 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:20:39.137070 kernel: tsc: Detected 2299.998 MHz processor
Feb 13 20:20:39.137086 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:20:39.137103 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:20:39.137119 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Feb 13 20:20:39.137147 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Feb 13 20:20:39.137164 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:20:39.137184 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Feb 13 20:20:39.137201 kernel: Using GB pages for direct mapping
Feb 13 20:20:39.137217 kernel: Secure boot disabled
Feb 13 20:20:39.137233 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:20:39.137249 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Feb 13 20:20:39.137266 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Feb 13 20:20:39.137285 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Feb 13 20:20:39.137316 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Feb 13 20:20:39.137344 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Feb 13 20:20:39.137393 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Feb 13 20:20:39.137412 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Feb 13 20:20:39.137431 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Feb 13 20:20:39.137450 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Feb 13 20:20:39.137467 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Feb 13 20:20:39.137491 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Feb 13 20:20:39.137508 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Feb 13 20:20:39.137526 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Feb 13 20:20:39.137544 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Feb 13 20:20:39.137564 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Feb 13 20:20:39.137581 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Feb 13 20:20:39.137599 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Feb 13 20:20:39.137617 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Feb 13 20:20:39.137635 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Feb 13 20:20:39.137657 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Feb 13 20:20:39.137674 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:20:39.137692 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:20:39.137710 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:20:39.137728 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Feb 13 20:20:39.137747 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Feb 13 20:20:39.137766 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Feb 13 20:20:39.137784 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Feb 13 20:20:39.137802 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Feb 13 20:20:39.137824 kernel: Zone ranges:
Feb 13 20:20:39.137843 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:20:39.137861 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 13 20:20:39.137879 kernel:   Normal   [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 20:20:39.137897 kernel: Movable zone start for each node
Feb 13 20:20:39.137915 kernel: Early memory node ranges
Feb 13 20:20:39.137933 kernel:   node   0: [mem 0x0000000000001000-0x0000000000054fff]
Feb 13 20:20:39.137952 kernel:   node   0: [mem 0x0000000000060000-0x0000000000097fff]
Feb 13 20:20:39.137970 kernel:   node   0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Feb 13 20:20:39.137994 kernel:   node   0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Feb 13 20:20:39.138014 kernel:   node   0: [mem 0x0000000100000000-0x000000021fffffff]
Feb 13 20:20:39.138031 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Feb 13 20:20:39.138049 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:20:39.138067 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Feb 13 20:20:39.138084 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Feb 13 20:20:39.138102 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 20:20:39.138121 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Feb 13 20:20:39.138149 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 20:20:39.138172 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:20:39.138190 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:20:39.138208 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:20:39.138226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:20:39.138245 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:20:39.138264 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:20:39.138281 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:20:39.138299 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:20:39.138317 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Feb 13 20:20:39.138339 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:20:39.138380 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:20:39.138399 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:20:39.138417 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:20:39.138435 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:20:39.138452 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:20:39.138469 kernel: kvm-guest: PV spinlocks enabled
Feb 13 20:20:39.138487 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:20:39.138507 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:20:39.138531 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:20:39.138549 kernel: random: crng init done
Feb 13 20:20:39.138566 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 13 20:20:39.138584 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 20:20:39.138603 kernel: Fallback order for Node 0: 0
Feb 13 20:20:39.138621 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Feb 13 20:20:39.138640 kernel: Policy zone: Normal
Feb 13 20:20:39.138658 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:20:39.138680 kernel: software IO TLB: area num 2.
Feb 13 20:20:39.138698 kernel: Memory: 7513396K/7860584K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 346928K reserved, 0K cma-reserved)
Feb 13 20:20:39.138735 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:20:39.138754 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:20:39.138772 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:20:39.138790 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:20:39.138809 kernel: Dynamic Preempt: voluntary
Feb 13 20:20:39.138828 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:20:39.138848 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:20:39.138886 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:20:39.138906 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:20:39.138925 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:20:39.138949 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:20:39.138970 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:20:39.138990 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:20:39.139009 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:20:39.139030 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:20:39.139052 kernel: Console: colour dummy device 80x25
Feb 13 20:20:39.139078 kernel: printk: console [ttyS0] enabled
Feb 13 20:20:39.139099 kernel: ACPI: Core revision 20230628
Feb 13 20:20:39.139119 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:20:39.139144 kernel: x2apic enabled
Feb 13 20:20:39.139165 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:20:39.139186 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Feb 13 20:20:39.139206 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 20:20:39.139226 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Feb 13 20:20:39.139250 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Feb 13 20:20:39.139270 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Feb 13 20:20:39.139290 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:20:39.139313 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 20:20:39.139332 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 20:20:39.139352 kernel: Spectre V2 : Mitigation: IBRS
Feb 13 20:20:39.139388 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:20:39.139407 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:20:39.139437 kernel: RETBleed: Mitigation: IBRS
Feb 13 20:20:39.139460 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:20:39.139480 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Feb 13 20:20:39.139504 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:20:39.139524 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:20:39.139547 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:20:39.139568 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:20:39.139590 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:20:39.139614 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:20:39.139638 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:20:39.139669 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:20:39.139697 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:20:39.139720 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:20:39.139741 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:20:39.139763 kernel: landlock: Up and running.
Feb 13 20:20:39.139786 kernel: SELinux:  Initializing.
Feb 13 20:20:39.139808 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:20:39.139832 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:20:39.139856 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Feb 13 20:20:39.139888 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:20:39.139916 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:20:39.139942 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:20:39.139965 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Feb 13 20:20:39.139987 kernel: signal: max sigframe size: 1776
Feb 13 20:20:39.140008 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:20:39.140030 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:20:39.140053 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:20:39.140074 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:20:39.140107 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:20:39.140134 kernel: .... node  #0, CPUs:      #1
Feb 13 20:20:39.140171 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 20:20:39.140196 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 20:20:39.140218 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:20:39.140240 kernel: smpboot: Max logical packages: 1
Feb 13 20:20:39.140263 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Feb 13 20:20:39.140286 kernel: devtmpfs: initialized
Feb 13 20:20:39.140315 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:20:39.140343 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Feb 13 20:20:39.140402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:20:39.140428 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:20:39.140451 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:20:39.140470 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:20:39.140490 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:20:39.140510 kernel: audit: type=2000 audit(1739478037.231:1): state=initialized audit_enabled=0 res=1
Feb 13 20:20:39.140529 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:20:39.140557 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:20:39.140579 kernel: cpuidle: using governor menu
Feb 13 20:20:39.140600 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:20:39.140624 kernel: dca service started, version 1.12.1
Feb 13 20:20:39.140646 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:20:39.140671 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:20:39.140690 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:20:39.140709 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:20:39.140728 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:20:39.140753 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:20:39.140776 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:20:39.140798 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:20:39.140819 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:20:39.140842 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:20:39.140864 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 20:20:39.140888 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:20:39.140906 kernel: ACPI: Interpreter enabled
Feb 13 20:20:39.140924 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 20:20:39.140947 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:20:39.140966 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:20:39.140985 kernel: PCI: Ignoring E820 reservations for host bridge windows
Feb 13 20:20:39.141004 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 20:20:39.141023 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:20:39.141314 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:20:39.141585 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:20:39.141813 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:20:39.141848 kernel: PCI host bridge to bus 0000:00
Feb 13 20:20:39.142078 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:20:39.142290 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:20:39.142515 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:20:39.142705 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Feb 13 20:20:39.142897 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:20:39.143133 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:20:39.143442 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Feb 13 20:20:39.143712 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:20:39.143932 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 20:20:39.144168 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Feb 13 20:20:39.144408 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Feb 13 20:20:39.144655 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Feb 13 20:20:39.144936 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:20:39.145207 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Feb 13 20:20:39.145464 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Feb 13 20:20:39.145706 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 20:20:39.145935 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Feb 13 20:20:39.146173 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Feb 13 20:20:39.146211 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:20:39.146245 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:20:39.146267 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:20:39.146289 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:20:39.146310 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:20:39.146330 kernel: iommu: Default domain type: Translated
Feb 13 20:20:39.146350 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:20:39.146433 kernel: efivars: Registered efivars operations
Feb 13 20:20:39.146455 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:20:39.146483 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:20:39.146502 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Feb 13 20:20:39.146523 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Feb 13 20:20:39.146543 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Feb 13 20:20:39.146564 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Feb 13 20:20:39.146584 kernel: vgaarb: loaded
Feb 13 20:20:39.146603 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:20:39.146624 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:20:39.146644 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:20:39.146670 kernel: pnp: PnP ACPI init
Feb 13 20:20:39.146692 kernel: pnp: PnP ACPI: found 7 devices
Feb 13 20:20:39.146713 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:20:39.146733 kernel: NET: Registered PF_INET protocol family
Feb 13 20:20:39.146755 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:20:39.146775 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 13 20:20:39.146795 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:20:39.146817 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 20:20:39.146839 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Feb 13 20:20:39.146863 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 13 20:20:39.146884 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:20:39.146904 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 13 20:20:39.146924 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:20:39.146945 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:20:39.147193 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:20:39.147417 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:20:39.147620 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:20:39.147832 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Feb 13 20:20:39.148047 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:20:39.148075 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:20:39.148096 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 13 20:20:39.148117 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Feb 13 20:20:39.148151 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:20:39.148172 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Feb 13 20:20:39.148192 kernel: clocksource: Switched to clocksource tsc
Feb 13 20:20:39.148218 kernel: Initialise system trusted keyrings
Feb 13 20:20:39.148239 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 13 20:20:39.148260 kernel: Key type asymmetric registered
Feb 13 20:20:39.148280 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:20:39.148300 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:20:39.148321 kernel: io scheduler mq-deadline registered
Feb 13 20:20:39.148342 kernel: io scheduler kyber registered
Feb 13 20:20:39.148429 kernel: io scheduler bfq registered
Feb 13 20:20:39.148460 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:20:39.148495 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:20:39.148746 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Feb 13 20:20:39.148776 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Feb 13 20:20:39.148999 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Feb 13 20:20:39.149028 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:20:39.149262 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Feb 13 20:20:39.149295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:20:39.149320 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:20:39.149344 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 13 20:20:39.149409 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Feb 13 20:20:39.149430 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Feb 13 20:20:39.149683 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Feb 13 20:20:39.149716 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:20:39.149736 kernel: i8042: Warning: Keylock active
Feb 13 20:20:39.149759 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:20:39.149781 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:20:39.150026 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 20:20:39.150269 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 20:20:39.150531 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:20:38 UTC (1739478038)
Feb 13 20:20:39.150744 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 20:20:39.150773 kernel: intel_pstate: CPU model not supported
Feb 13 20:20:39.150796 kernel: pstore: Using crash dump compression: deflate
Feb 13 20:20:39.150817 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 20:20:39.150839 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:20:39.150862 kernel: Segment Routing with IPv6
Feb 13 20:20:39.150892 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:20:39.150913 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:20:39.150934 kernel: Key type dns_resolver registered
Feb 13 20:20:39.150955 kernel: IPI shorthand broadcast: enabled
Feb 13 20:20:39.150976 kernel: sched_clock: Marking stable (931004913, 159507364)->(1125398473, -34886196)
Feb 13 20:20:39.150997 kernel: registered taskstats version 1
Feb 13 20:20:39.151019 kernel: Loading compiled-in X.509 certificates
Feb 13 20:20:39.151041 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:20:39.151064 kernel: Key type .fscrypt registered
Feb 13 20:20:39.151091 kernel: Key type fscrypt-provisioning registered
Feb 13 20:20:39.151114 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:20:39.151143 kernel: ima: No architecture policies found
Feb 13 20:20:39.151163 kernel: clk: Disabling unused clocks
Feb 13 20:20:39.151185 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:20:39.151205 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:20:39.151227 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:20:39.151249 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:20:39.151274 kernel: Run /init as init process
Feb 13 20:20:39.151295 kernel:   with arguments:
Feb 13 20:20:39.151314 kernel:     /init
Feb 13 20:20:39.151334 kernel:   with environment:
Feb 13 20:20:39.151368 kernel:     HOME=/
Feb 13 20:20:39.151389 kernel:     TERM=linux
Feb 13 20:20:39.151410 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:20:39.151435 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:20:39.151464 systemd[1]: Detected virtualization google.
Feb 13 20:20:39.151487 systemd[1]: Detected architecture x86-64. Feb 13 20:20:39.151513 systemd[1]: Running in initrd. Feb 13 20:20:39.151538 systemd[1]: No hostname configured, using default hostname. Feb 13 20:20:39.151558 systemd[1]: Hostname set to . Feb 13 20:20:39.151581 systemd[1]: Initializing machine ID from random generator. Feb 13 20:20:39.151603 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:20:39.151624 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:20:39.151649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:20:39.151674 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:20:39.151697 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:20:39.151719 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:20:39.151740 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:20:39.151761 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:20:39.151790 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:20:39.151823 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:20:39.151858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:20:39.151927 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:20:39.151965 systemd[1]: Reached target slices.target - Slice Units. Feb 13 20:20:39.151999 systemd[1]: Reached target swap.target - Swaps. Feb 13 20:20:39.152033 systemd[1]: Reached target timers.target - Timer Units. 
Feb 13 20:20:39.152072 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:20:39.152126 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:20:39.152170 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:20:39.152204 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:20:39.152238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:20:39.152272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:20:39.152306 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:20:39.152339 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:20:39.152397 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:20:39.152438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:20:39.152472 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:20:39.152506 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:20:39.152541 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:20:39.152575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:20:39.152610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:20:39.152683 systemd-journald[183]: Collecting audit messages is disabled.
Feb 13 20:20:39.152760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:20:39.152794 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:20:39.152827 systemd-journald[183]: Journal started
Feb 13 20:20:39.152897 systemd-journald[183]: Runtime Journal (/run/log/journal/dd15d6348aac4cc5b6fba6a6a1644903) is 8.0M, max 148.7M, 140.7M free.
Feb 13 20:20:39.162951 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:20:39.163025 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:20:39.164299 systemd-modules-load[184]: Inserted module 'overlay'
Feb 13 20:20:39.185455 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:20:39.196595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:20:39.200831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:39.210136 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:20:39.224527 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:20:39.224587 kernel: Bridge firewalling registered
Feb 13 20:20:39.223254 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 13 20:20:39.230985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:20:39.235943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:20:39.240979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:20:39.254226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:20:39.266647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:20:39.283513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:20:39.286663 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:20:39.296706 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:20:39.301109 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:20:39.313164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:20:39.349261 dracut-cmdline[218]: dracut-dracut-053
Feb 13 20:20:39.354304 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:20:39.359008 systemd-resolved[211]: Positive Trust Anchors:
Feb 13 20:20:39.359024 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:20:39.359101 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:20:39.364860 systemd-resolved[211]: Defaulting to hostname 'linux'.
Feb 13 20:20:39.366944 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:20:39.376602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:20:39.468404 kernel: SCSI subsystem initialized
Feb 13 20:20:39.479405 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:20:39.492400 kernel: iscsi: registered transport (tcp)
Feb 13 20:20:39.518538 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:20:39.518633 kernel: QLogic iSCSI HBA Driver
Feb 13 20:20:39.575347 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:20:39.581562 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:20:39.624951 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:20:39.625044 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:20:39.625087 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:20:39.672402 kernel: raid6: avx2x4 gen() 18108 MB/s
Feb 13 20:20:39.689394 kernel: raid6: avx2x2 gen() 18169 MB/s
Feb 13 20:20:39.706832 kernel: raid6: avx2x1 gen() 14097 MB/s
Feb 13 20:20:39.706895 kernel: raid6: using algorithm avx2x2 gen() 18169 MB/s
Feb 13 20:20:39.724968 kernel: raid6: .... xor() 17285 MB/s, rmw enabled
Feb 13 20:20:39.725027 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:20:39.749392 kernel: xor: automatically using best checksumming function avx
Feb 13 20:20:39.943401 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:20:39.958079 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:20:39.965583 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:20:40.000256 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Feb 13 20:20:40.008056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:20:40.019309 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:20:40.049728 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 20:20:40.090506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:20:40.097571 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:20:40.202116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:20:40.211697 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:20:40.253708 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:20:40.260725 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:20:40.264487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:20:40.266469 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:20:40.277568 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:20:40.319961 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:20:40.351391 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:20:40.389407 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:20:40.403731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:20:40.415576 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Feb 13 20:20:40.415683 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:20:40.403992 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:20:40.432900 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:20:40.433583 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:20:40.437091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:20:40.437568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:40.451278 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:20:40.466169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:20:40.507657 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 20:20:40.522724 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 20:20:40.523052 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 20:20:40.523344 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 20:20:40.523665 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 20:20:40.523963 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:20:40.524012 kernel: GPT:17805311 != 25165823
Feb 13 20:20:40.524050 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:20:40.524079 kernel: GPT:17805311 != 25165823
Feb 13 20:20:40.524114 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:20:40.524149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:40.524184 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 20:20:40.514683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:40.526131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:20:40.575392 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444)
Feb 13 20:20:40.596673 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (448)
Feb 13 20:20:40.602784 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:20:40.617938 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 20:20:40.636249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 20:20:40.644189 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 20:20:40.651055 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 20:20:40.651330 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 20:20:40.663732 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:20:40.692330 disk-uuid[550]: Primary Header is updated.
Feb 13 20:20:40.692330 disk-uuid[550]: Secondary Entries is updated.
Feb 13 20:20:40.692330 disk-uuid[550]: Secondary Header is updated.
Feb 13 20:20:40.717484 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:40.738427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:40.762404 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:41.760861 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:41.760948 disk-uuid[551]: The operation has completed successfully.
Feb 13 20:20:41.834189 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:20:41.834395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:20:41.870561 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:20:41.901132 sh[568]: Success
Feb 13 20:20:41.924393 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:20:42.009763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:20:42.017170 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:20:42.057937 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:20:42.093616 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:20:42.093724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:20:42.093755 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:20:42.110085 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:20:42.110144 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:20:42.140423 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 20:20:42.146335 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:20:42.147414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:20:42.156579 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:20:42.230569 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:42.230620 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:20:42.230655 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:20:42.230689 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:20:42.230721 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:20:42.227694 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:20:42.252655 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:42.265028 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:20:42.281632 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:20:42.459953 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:20:42.477985 ignition[645]: Ignition 2.19.0
Feb 13 20:20:42.477999 ignition[645]: Stage: fetch-offline
Feb 13 20:20:42.481913 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:20:42.478068 ignition[645]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:42.494636 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:20:42.478086 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:42.478437 ignition[645]: parsed url from cmdline: ""
Feb 13 20:20:42.478445 ignition[645]: no config URL provided
Feb 13 20:20:42.478452 ignition[645]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:20:42.478464 ignition[645]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:20:42.563663 systemd-networkd[756]: lo: Link UP
Feb 13 20:20:42.478471 ignition[645]: failed to fetch config: resource requires networking
Feb 13 20:20:42.563669 systemd-networkd[756]: lo: Gained carrier
Feb 13 20:20:42.478774 ignition[645]: Ignition finished successfully
Feb 13 20:20:42.565789 systemd-networkd[756]: Enumeration completed
Feb 13 20:20:42.622174 ignition[759]: Ignition 2.19.0
Feb 13 20:20:42.565940 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:20:42.622184 ignition[759]: Stage: fetch
Feb 13 20:20:42.566674 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:20:42.622457 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:42.566683 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:20:42.622473 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:42.569203 systemd-networkd[756]: eth0: Link UP
Feb 13 20:20:42.622611 ignition[759]: parsed url from cmdline: ""
Feb 13 20:20:42.569209 systemd-networkd[756]: eth0: Gained carrier
Feb 13 20:20:42.622619 ignition[759]: no config URL provided
Feb 13 20:20:42.569220 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:20:42.622628 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:20:42.569814 systemd[1]: Reached target network.target - Network.
Feb 13 20:20:42.622640 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:20:42.581467 systemd-networkd[756]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 20:20:42.622662 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 20:20:42.600577 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:20:42.626706 ignition[759]: GET result: OK
Feb 13 20:20:42.633555 unknown[759]: fetched base config from "system"
Feb 13 20:20:42.626808 ignition[759]: parsing config with SHA512: 29a01ea1bdd472149bb5766acf582fd7179332ccbf14d5b1faf0f1810aebc5e149c522c2698e2d7e8c4fd9b94149759677e0a766ea73bc66ecf08f7dde99e816
Feb 13 20:20:42.633571 unknown[759]: fetched base config from "system"
Feb 13 20:20:42.634284 ignition[759]: fetch: fetch complete
Feb 13 20:20:42.633582 unknown[759]: fetched user config from "gcp"
Feb 13 20:20:42.634291 ignition[759]: fetch: fetch passed
Feb 13 20:20:42.636682 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:20:42.634348 ignition[759]: Ignition finished successfully
Feb 13 20:20:42.658603 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:20:42.709272 ignition[765]: Ignition 2.19.0
Feb 13 20:20:42.712318 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:20:42.709284 ignition[765]: Stage: kargs
Feb 13 20:20:42.726620 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:20:42.709592 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:42.778144 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:20:42.709611 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:42.787491 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:20:42.710752 ignition[765]: kargs: kargs passed
Feb 13 20:20:42.803774 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:20:42.710812 ignition[765]: Ignition finished successfully
Feb 13 20:20:42.831667 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:20:42.775170 ignition[770]: Ignition 2.19.0
Feb 13 20:20:42.839756 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:20:42.775181 ignition[770]: Stage: disks
Feb 13 20:20:42.860889 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:20:42.775427 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:42.896651 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:20:42.775444 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:42.776599 ignition[770]: disks: disks passed
Feb 13 20:20:42.776657 ignition[770]: Ignition finished successfully
Feb 13 20:20:42.943997 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 20:20:43.106596 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:20:43.136509 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:20:43.265402 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:20:43.266141 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:20:43.267121 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:20:43.300508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:20:43.317518 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:20:43.341467 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787)
Feb 13 20:20:43.326993 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:20:43.379729 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:43.379795 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:20:43.379844 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:20:43.379873 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:20:43.327053 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:20:43.421712 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:20:43.327084 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:20:43.405716 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:20:43.430876 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:20:43.453651 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:20:43.592636 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:20:43.602558 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:20:43.612820 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:20:43.622550 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:20:43.764494 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:20:43.772506 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:20:43.807396 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:43.815615 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:20:43.825725 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:20:43.869223 ignition[903]: INFO : Ignition 2.19.0
Feb 13 20:20:43.876574 ignition[903]: INFO : Stage: mount
Feb 13 20:20:43.876574 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:43.876574 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:43.876574 ignition[903]: INFO : mount: mount passed
Feb 13 20:20:43.876574 ignition[903]: INFO : Ignition finished successfully
Feb 13 20:20:43.874758 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:20:43.896103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:20:43.917552 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:20:43.964626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:20:43.989613 systemd-networkd[756]: eth0: Gained IPv6LL
Feb 13 20:20:44.015414 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (916)
Feb 13 20:20:44.035192 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:44.035293 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:20:44.035325 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:20:44.057102 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:20:44.057171 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:20:44.060208 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:20:44.104409 ignition[933]: INFO : Ignition 2.19.0
Feb 13 20:20:44.104409 ignition[933]: INFO : Stage: files
Feb 13 20:20:44.118545 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:44.118545 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:44.118545 ignition[933]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:20:44.118545 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:20:44.118545 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:20:44.115910 unknown[933]: wrote ssh authorized keys file for user: core
Feb 13 20:20:44.254478 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:20:44.378275 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:20:44.378275 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 20:20:44.410515 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 20:20:44.737871 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 20:20:44.924074 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 20:20:45.177530 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 20:20:45.678490 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: files passed
Feb 13 20:20:45.697571 ignition[933]: INFO : Ignition finished successfully
Feb 13 20:20:45.684011 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:20:45.703601 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:20:45.735914 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:20:45.746158 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:20:45.928568 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:45.928568 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:45.746291 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:20:45.977542 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:45.823081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:20:45.831879 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:20:45.861600 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:20:45.950169 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:20:45.950301 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:20:45.968479 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:20:45.987724 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:20:46.011808 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:20:46.018717 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:20:46.075890 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:20:46.101713 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:20:46.154862 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:20:46.166727 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:20:46.187966 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:20:46.206817 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:20:46.207044 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:20:46.233832 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:20:46.254830 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:20:46.272807 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:20:46.291814 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:20:46.312835 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:20:46.333778 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 20:20:46.353820 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 20:20:46.374816 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 20:20:46.395798 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 20:20:46.416796 systemd[1]: Stopped target swap.target - Swaps. Feb 13 20:20:46.434791 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 20:20:46.435016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 20:20:46.460860 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:20:46.480902 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:20:46.501725 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 20:20:46.501925 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:20:46.523799 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 20:20:46.524039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 20:20:46.555840 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 20:20:46.556114 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 20:20:46.575885 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 20:20:46.576089 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 20:20:46.602712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 20:20:46.610663 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Feb 13 20:20:46.664563 ignition[986]: INFO : Ignition 2.19.0 Feb 13 20:20:46.664563 ignition[986]: INFO : Stage: umount Feb 13 20:20:46.664563 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:20:46.664563 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:20:46.664563 ignition[986]: INFO : umount: umount passed Feb 13 20:20:46.664563 ignition[986]: INFO : Ignition finished successfully Feb 13 20:20:46.623981 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 20:20:46.624329 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 20:20:46.701821 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 20:20:46.702049 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 20:20:46.734025 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 20:20:46.735104 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 20:20:46.735233 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 20:20:46.739464 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 20:20:46.739598 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 20:20:46.769750 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 20:20:46.769895 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 20:20:46.789097 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 20:20:46.789204 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 20:20:46.807644 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 20:20:46.807747 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 20:20:46.827624 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 20:20:46.827719 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Feb 13 20:20:46.845622 systemd[1]: Stopped target network.target - Network. Feb 13 20:20:46.860519 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 20:20:46.860665 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 20:20:46.878658 systemd[1]: Stopped target paths.target - Path Units. Feb 13 20:20:46.893532 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 20:20:46.897473 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:20:46.912535 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 20:20:46.927575 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 20:20:46.945635 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 20:20:46.945735 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 20:20:46.963600 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 20:20:46.963702 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 20:20:46.982597 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 20:20:46.982720 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 20:20:47.000637 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 20:20:47.000753 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 20:20:47.018614 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 20:20:47.018734 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 20:20:47.036941 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 20:20:47.042458 systemd-networkd[756]: eth0: DHCPv6 lease lost Feb 13 20:20:47.060851 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 20:20:47.091107 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Feb 13 20:20:47.091260 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 20:20:47.110242 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 20:20:47.110699 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 20:20:47.130331 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 20:20:47.130467 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 20:20:47.158517 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 20:20:47.160692 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 20:20:47.160769 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 20:20:47.184803 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:20:47.184887 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:20:47.202798 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 20:20:47.202880 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 20:20:47.219787 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 20:20:47.219874 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:20:47.248918 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:20:47.277041 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 20:20:47.277231 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:20:47.305755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 20:20:47.305824 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 20:20:47.702497 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). 
Feb 13 20:20:47.328613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 20:20:47.328695 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 20:20:47.348551 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 20:20:47.348686 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 20:20:47.375511 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 20:20:47.375637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 20:20:47.402524 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 20:20:47.402664 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 20:20:47.438566 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 20:20:47.470492 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 20:20:47.470628 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:20:47.491650 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 20:20:47.491782 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:20:47.512615 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 20:20:47.512729 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 20:20:47.535598 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:20:47.535713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:20:47.556116 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 20:20:47.556250 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 20:20:47.575945 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 13 20:20:47.576074 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 20:20:47.597924 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 20:20:47.612647 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 20:20:47.655722 systemd[1]: Switching root. Feb 13 20:20:47.939493 systemd-journald[183]: Journal stopped
registered Feb 13 20:20:39.148429 kernel: io scheduler bfq registered Feb 13 20:20:39.148460 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:20:39.148495 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Feb 13 20:20:39.148746 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Feb 13 20:20:39.148776 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Feb 13 20:20:39.148999 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Feb 13 20:20:39.149028 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Feb 13 20:20:39.149262 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Feb 13 20:20:39.149295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:20:39.149320 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:20:39.149344 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 13 20:20:39.149409 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Feb 13 20:20:39.149430 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Feb 13 20:20:39.149683 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Feb 13 20:20:39.149716 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:20:39.149736 kernel: i8042: Warning: Keylock active Feb 13 20:20:39.149759 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:20:39.149781 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:20:39.150026 kernel: rtc_cmos 00:00: RTC can wake from S4 Feb 13 20:20:39.150269 kernel: rtc_cmos 00:00: registered as rtc0 Feb 13 20:20:39.150531 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T20:20:38 UTC (1739478038) Feb 13 20:20:39.150744 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Feb 13 20:20:39.150773 kernel: intel_pstate: CPU model not supported Feb 13 20:20:39.150796 kernel: pstore: Using crash dump compression: deflate Feb 
13 20:20:39.150817 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 20:20:39.150839 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:20:39.150862 kernel: Segment Routing with IPv6 Feb 13 20:20:39.150892 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:20:39.150913 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:20:39.150934 kernel: Key type dns_resolver registered Feb 13 20:20:39.150955 kernel: IPI shorthand broadcast: enabled Feb 13 20:20:39.150976 kernel: sched_clock: Marking stable (931004913, 159507364)->(1125398473, -34886196) Feb 13 20:20:39.150997 kernel: registered taskstats version 1 Feb 13 20:20:39.151019 kernel: Loading compiled-in X.509 certificates Feb 13 20:20:39.151041 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93' Feb 13 20:20:39.151064 kernel: Key type .fscrypt registered Feb 13 20:20:39.151091 kernel: Key type fscrypt-provisioning registered Feb 13 20:20:39.151114 kernel: ima: Allocated hash algorithm: sha1 Feb 13 20:20:39.151143 kernel: ima: No architecture policies found Feb 13 20:20:39.151163 kernel: clk: Disabling unused clocks Feb 13 20:20:39.151185 kernel: Freeing unused kernel image (initmem) memory: 42840K Feb 13 20:20:39.151205 kernel: Write protecting the kernel read-only data: 36864k Feb 13 20:20:39.151227 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:20:39.151249 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Feb 13 20:20:39.151274 kernel: Run /init as init process Feb 13 20:20:39.151295 kernel: with arguments: Feb 13 20:20:39.151314 kernel: /init Feb 13 20:20:39.151334 kernel: with environment: Feb 13 20:20:39.151368 kernel: HOME=/ Feb 13 20:20:39.151389 kernel: TERM=linux Feb 13 20:20:39.151410 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 20:20:39.151435 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA 
+SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 20:20:39.151464 systemd[1]: Detected virtualization google. Feb 13 20:20:39.151487 systemd[1]: Detected architecture x86-64. Feb 13 20:20:39.151513 systemd[1]: Running in initrd. Feb 13 20:20:39.151538 systemd[1]: No hostname configured, using default hostname. Feb 13 20:20:39.151558 systemd[1]: Hostname set to . Feb 13 20:20:39.151581 systemd[1]: Initializing machine ID from random generator. Feb 13 20:20:39.151603 systemd[1]: Queued start job for default target initrd.target. Feb 13 20:20:39.151624 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 20:20:39.151649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 20:20:39.151674 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 20:20:39.151697 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 20:20:39.151719 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 20:20:39.151740 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 20:20:39.151761 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 20:20:39.151790 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 20:20:39.151823 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 20:20:39.151858 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 20:20:39.151927 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:20:39.151965 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:20:39.151999 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:20:39.152033 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:20:39.152072 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:20:39.152126 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:20:39.152170 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:20:39.152204 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:20:39.152238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:20:39.152272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:20:39.152306 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:20:39.152339 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:20:39.152397 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:20:39.152438 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:20:39.152472 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:20:39.152506 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:20:39.152541 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:20:39.152575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:20:39.152610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:20:39.152683 systemd-journald[183]: Collecting audit messages is disabled.
Feb 13 20:20:39.152760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:20:39.152794 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:20:39.152827 systemd-journald[183]: Journal started
Feb 13 20:20:39.152897 systemd-journald[183]: Runtime Journal (/run/log/journal/dd15d6348aac4cc5b6fba6a6a1644903) is 8.0M, max 148.7M, 140.7M free.
Feb 13 20:20:39.162951 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:20:39.163025 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:20:39.164299 systemd-modules-load[184]: Inserted module 'overlay'
Feb 13 20:20:39.185455 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:20:39.196595 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:20:39.200831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:39.210136 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:20:39.224527 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:20:39.224587 kernel: Bridge firewalling registered
Feb 13 20:20:39.223254 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 13 20:20:39.230985 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:20:39.235943 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:20:39.240979 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:20:39.254226 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:20:39.266647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:20:39.283513 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:20:39.286663 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:20:39.296706 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:20:39.301109 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:20:39.313164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:20:39.349261 dracut-cmdline[218]: dracut-dracut-053
Feb 13 20:20:39.354304 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:20:39.359008 systemd-resolved[211]: Positive Trust Anchors:
Feb 13 20:20:39.359024 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:20:39.359101 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:20:39.364860 systemd-resolved[211]: Defaulting to hostname 'linux'.
Feb 13 20:20:39.366944 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:20:39.376602 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:20:39.468404 kernel: SCSI subsystem initialized
Feb 13 20:20:39.479405 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:20:39.492400 kernel: iscsi: registered transport (tcp)
Feb 13 20:20:39.518538 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:20:39.518633 kernel: QLogic iSCSI HBA Driver
Feb 13 20:20:39.575347 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:20:39.581562 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:20:39.624951 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:20:39.625044 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:20:39.625087 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:20:39.672402 kernel: raid6: avx2x4 gen() 18108 MB/s
Feb 13 20:20:39.689394 kernel: raid6: avx2x2 gen() 18169 MB/s
Feb 13 20:20:39.706832 kernel: raid6: avx2x1 gen() 14097 MB/s
Feb 13 20:20:39.706895 kernel: raid6: using algorithm avx2x2 gen() 18169 MB/s
Feb 13 20:20:39.724968 kernel: raid6: .... xor() 17285 MB/s, rmw enabled
Feb 13 20:20:39.725027 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:20:39.749392 kernel: xor: automatically using best checksumming function avx
Feb 13 20:20:39.943401 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:20:39.958079 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:20:39.965583 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:20:40.000256 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Feb 13 20:20:40.008056 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:20:40.019309 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:20:40.049728 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 20:20:40.090506 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:20:40.097571 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:20:40.202116 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:20:40.211697 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:20:40.253708 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:20:40.260725 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:20:40.264487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:20:40.266469 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:20:40.277568 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:20:40.319961 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:20:40.351391 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:20:40.389407 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:20:40.403731 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:20:40.415576 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Feb 13 20:20:40.415683 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:20:40.403992 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:20:40.432900 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:20:40.433583 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:20:40.437091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:20:40.437568 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:40.451278 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:20:40.466169 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:20:40.507657 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Feb 13 20:20:40.522724 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Feb 13 20:20:40.523052 kernel: sd 0:0:1:0: [sda] Write Protect is off
Feb 13 20:20:40.523344 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Feb 13 20:20:40.523665 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Feb 13 20:20:40.523963 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:20:40.524012 kernel: GPT:17805311 != 25165823
Feb 13 20:20:40.524050 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:20:40.524079 kernel: GPT:17805311 != 25165823
Feb 13 20:20:40.524114 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:20:40.524149 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:40.524184 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Feb 13 20:20:40.514683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:40.526131 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:20:40.575392 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444)
Feb 13 20:20:40.596673 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (448)
Feb 13 20:20:40.602784 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:20:40.617938 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Feb 13 20:20:40.636249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Feb 13 20:20:40.644189 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Feb 13 20:20:40.651055 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Feb 13 20:20:40.651330 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Feb 13 20:20:40.663732 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:20:40.692330 disk-uuid[550]: Primary Header is updated.
Feb 13 20:20:40.692330 disk-uuid[550]: Secondary Entries is updated.
Feb 13 20:20:40.692330 disk-uuid[550]: Secondary Header is updated.
Feb 13 20:20:40.717484 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:40.738427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:40.762404 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:41.760861 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 20:20:41.760948 disk-uuid[551]: The operation has completed successfully.
Feb 13 20:20:41.834189 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:20:41.834395 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:20:41.870561 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:20:41.901132 sh[568]: Success
Feb 13 20:20:41.924393 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:20:42.009763 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:20:42.017170 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:20:42.057937 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:20:42.093616 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:20:42.093724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:20:42.093755 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:20:42.110085 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:20:42.110144 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:20:42.140423 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 20:20:42.146335 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:20:42.147414 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:20:42.156579 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:20:42.230569 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:42.230620 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:20:42.230655 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:20:42.230689 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:20:42.230721 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:20:42.227694 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:20:42.252655 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:42.265028 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:20:42.281632 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:20:42.459953 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:20:42.477985 ignition[645]: Ignition 2.19.0
Feb 13 20:20:42.477999 ignition[645]: Stage: fetch-offline
Feb 13 20:20:42.481913 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:20:42.478068 ignition[645]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:42.494636 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:20:42.478086 ignition[645]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:42.478437 ignition[645]: parsed url from cmdline: ""
Feb 13 20:20:42.478445 ignition[645]: no config URL provided
Feb 13 20:20:42.478452 ignition[645]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:20:42.478464 ignition[645]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:20:42.563663 systemd-networkd[756]: lo: Link UP
Feb 13 20:20:42.478471 ignition[645]: failed to fetch config: resource requires networking
Feb 13 20:20:42.563669 systemd-networkd[756]: lo: Gained carrier
Feb 13 20:20:42.478774 ignition[645]: Ignition finished successfully
Feb 13 20:20:42.565789 systemd-networkd[756]: Enumeration completed
Feb 13 20:20:42.622174 ignition[759]: Ignition 2.19.0
Feb 13 20:20:42.565940 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:20:42.622184 ignition[759]: Stage: fetch
Feb 13 20:20:42.566674 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:20:42.622457 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:42.566683 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:20:42.622473 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:42.569203 systemd-networkd[756]: eth0: Link UP
Feb 13 20:20:42.622611 ignition[759]: parsed url from cmdline: ""
Feb 13 20:20:42.569209 systemd-networkd[756]: eth0: Gained carrier
Feb 13 20:20:42.622619 ignition[759]: no config URL provided
Feb 13 20:20:42.569220 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:20:42.622628 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:20:42.569814 systemd[1]: Reached target network.target - Network.
Feb 13 20:20:42.622640 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:20:42.581467 systemd-networkd[756]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254
Feb 13 20:20:42.622662 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Feb 13 20:20:42.600577 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:20:42.626706 ignition[759]: GET result: OK
Feb 13 20:20:42.633555 unknown[759]: fetched base config from "system"
Feb 13 20:20:42.626808 ignition[759]: parsing config with SHA512: 29a01ea1bdd472149bb5766acf582fd7179332ccbf14d5b1faf0f1810aebc5e149c522c2698e2d7e8c4fd9b94149759677e0a766ea73bc66ecf08f7dde99e816
Feb 13 20:20:42.633571 unknown[759]: fetched base config from "system"
Feb 13 20:20:42.634284 ignition[759]: fetch: fetch complete
Feb 13 20:20:42.633582 unknown[759]: fetched user config from "gcp"
Feb 13 20:20:42.634291 ignition[759]: fetch: fetch passed
Feb 13 20:20:42.636682 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:20:42.634348 ignition[759]: Ignition finished successfully
Feb 13 20:20:42.658603 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:20:42.709272 ignition[765]: Ignition 2.19.0
Feb 13 20:20:42.712318 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:20:42.709284 ignition[765]: Stage: kargs
Feb 13 20:20:42.726620 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:20:42.709592 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:42.778144 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:20:42.709611 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:42.787491 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:20:42.710752 ignition[765]: kargs: kargs passed
Feb 13 20:20:42.803774 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:20:42.710812 ignition[765]: Ignition finished successfully
Feb 13 20:20:42.831667 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:20:42.775170 ignition[770]: Ignition 2.19.0
Feb 13 20:20:42.839756 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:20:42.775181 ignition[770]: Stage: disks
Feb 13 20:20:42.860889 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:20:42.775427 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:42.896651 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:20:42.775444 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:42.776599 ignition[770]: disks: disks passed
Feb 13 20:20:42.776657 ignition[770]: Ignition finished successfully
Feb 13 20:20:42.943997 systemd-fsck[779]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 20:20:43.106596 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:20:43.136509 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:20:43.265402 kernel: EXT4-fs (sda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none. Feb 13 20:20:43.266141 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 20:20:43.267121 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 20:20:43.300508 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 20:20:43.317518 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 20:20:43.341467 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (787) Feb 13 20:20:43.326993 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 20:20:43.379729 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:20:43.379795 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 20:20:43.379844 kernel: BTRFS info (device sda6): using free space tree Feb 13 20:20:43.379873 kernel: BTRFS info (device sda6): enabling ssd optimizations Feb 13 20:20:43.327053 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 20:20:43.421712 kernel: BTRFS info (device sda6): auto enabling async discard Feb 13 20:20:43.327084 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 20:20:43.405716 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 20:20:43.430876 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 20:20:43.453651 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 20:20:43.592636 initrd-setup-root[811]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 20:20:43.602558 initrd-setup-root[818]: cut: /sysroot/etc/group: No such file or directory Feb 13 20:20:43.612820 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 20:20:43.622550 initrd-setup-root[832]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 20:20:43.764494 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 20:20:43.772506 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 20:20:43.807396 kernel: BTRFS info (device sda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790 Feb 13 20:20:43.815615 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 20:20:43.825725 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 20:20:43.869223 ignition[903]: INFO : Ignition 2.19.0 Feb 13 20:20:43.876574 ignition[903]: INFO : Stage: mount Feb 13 20:20:43.876574 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 20:20:43.876574 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Feb 13 20:20:43.876574 ignition[903]: INFO : mount: mount passed Feb 13 20:20:43.876574 ignition[903]: INFO : Ignition finished successfully Feb 13 20:20:43.874758 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 20:20:43.896103 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 20:20:43.917552 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 20:20:43.964626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 20:20:43.989613 systemd-networkd[756]: eth0: Gained IPv6LL
Feb 13 20:20:44.015414 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (916)
Feb 13 20:20:44.035192 kernel: BTRFS info (device sda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:20:44.035293 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:20:44.035325 kernel: BTRFS info (device sda6): using free space tree
Feb 13 20:20:44.057102 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 20:20:44.057171 kernel: BTRFS info (device sda6): auto enabling async discard
Feb 13 20:20:44.060208 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:20:44.104409 ignition[933]: INFO : Ignition 2.19.0
Feb 13 20:20:44.104409 ignition[933]: INFO : Stage: files
Feb 13 20:20:44.118545 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:44.118545 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:44.118545 ignition[933]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:20:44.118545 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:20:44.118545 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:20:44.118545 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 13 20:20:44.115910 unknown[933]: wrote ssh authorized keys file for user: core
Feb 13 20:20:44.254478 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 20:20:44.378275 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 13 20:20:44.378275 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 20:20:44.410515 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 13 20:20:44.737871 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 13 20:20:44.924074 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:20:44.939505 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 20:20:45.177530 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Feb 13 20:20:45.678490 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:20:45.697571 ignition[933]: INFO : files: files passed
Feb 13 20:20:45.697571 ignition[933]: INFO : Ignition finished successfully
Feb 13 20:20:45.684011 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:20:45.703601 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:20:45.735914 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:20:45.746158 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:20:45.928568 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:45.928568 initrd-setup-root-after-ignition[961]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:45.746291 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:20:45.977542 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:20:45.823081 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:20:45.831879 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:20:45.861600 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:20:45.950169 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:20:45.950301 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:20:45.968479 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:20:45.987724 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:20:46.011808 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:20:46.018717 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:20:46.075890 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:20:46.101713 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:20:46.154862 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:20:46.166727 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:20:46.187966 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:20:46.206817 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:20:46.207044 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:20:46.233832 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:20:46.254830 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:20:46.272807 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:20:46.291814 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:20:46.312835 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:20:46.333778 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:20:46.353820 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:20:46.374816 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:20:46.395798 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:20:46.416796 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:20:46.434791 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:20:46.435016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:20:46.460860 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:20:46.480902 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:20:46.501725 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:20:46.501925 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:20:46.523799 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:20:46.524039 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:20:46.555840 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:20:46.556114 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:20:46.575885 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:20:46.576089 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:20:46.602712 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:20:46.610663 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:20:46.664563 ignition[986]: INFO : Ignition 2.19.0
Feb 13 20:20:46.664563 ignition[986]: INFO : Stage: umount
Feb 13 20:20:46.664563 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:20:46.664563 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Feb 13 20:20:46.664563 ignition[986]: INFO : umount: umount passed
Feb 13 20:20:46.664563 ignition[986]: INFO : Ignition finished successfully
Feb 13 20:20:46.623981 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:20:46.624329 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:20:46.701821 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:20:46.702049 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:20:46.734025 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:20:46.735104 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:20:46.735233 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:20:46.739464 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:20:46.739598 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:20:46.769750 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:20:46.769895 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:20:46.789097 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:20:46.789204 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:20:46.807644 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:20:46.807747 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:20:46.827624 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:20:46.827719 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:20:46.845622 systemd[1]: Stopped target network.target - Network.
Feb 13 20:20:46.860519 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:20:46.860665 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:20:46.878658 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:20:46.893532 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:20:46.897473 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:20:46.912535 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:20:46.927575 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:20:46.945635 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:20:46.945735 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:20:46.963600 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:20:46.963702 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:20:46.982597 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:20:46.982720 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:20:47.000637 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:20:47.000753 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:20:47.018614 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:20:47.018734 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:20:47.036941 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:20:47.042458 systemd-networkd[756]: eth0: DHCPv6 lease lost
Feb 13 20:20:47.060851 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:20:47.091107 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:20:47.091260 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:20:47.110242 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:20:47.110699 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:20:47.130331 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:20:47.130467 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:20:47.158517 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:20:47.160692 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:20:47.160769 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:20:47.184803 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:20:47.184887 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:20:47.202798 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:20:47.202880 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:20:47.219787 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:20:47.219874 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:20:47.248918 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:20:47.277041 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:20:47.277231 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:20:47.305755 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:20:47.305824 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:20:47.702497 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:20:47.328613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:20:47.328695 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:20:47.348551 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:20:47.348686 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:20:47.375511 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:20:47.375637 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:20:47.402524 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:20:47.402664 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:20:47.438566 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:20:47.470492 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:20:47.470628 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:20:47.491650 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:20:47.491782 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:20:47.512615 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:20:47.512729 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:20:47.535598 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:20:47.535713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:20:47.556116 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:20:47.556250 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:20:47.575945 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:20:47.576074 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:20:47.597924 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:20:47.612647 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:20:47.655722 systemd[1]: Switching root.
Feb 13 20:20:47.939493 systemd-journald[183]: Journal stopped
Feb 13 20:20:50.570386 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:20:50.570434 kernel: SELinux: policy capability open_perms=1
Feb 13 20:20:50.570450 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:20:50.570462 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:20:50.570474 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:20:50.570486 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:20:50.570499 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:20:50.570515 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:20:50.570527 kernel: audit: type=1403 audit(1739478048.389:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:20:50.570543 systemd[1]: Successfully loaded SELinux policy in 95.624ms.
Feb 13 20:20:50.570558 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.852ms.
Feb 13 20:20:50.570572 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:20:50.570588 systemd[1]: Detected virtualization google.
Feb 13 20:20:50.570601 systemd[1]: Detected architecture x86-64.
Feb 13 20:20:50.570619 systemd[1]: Detected first boot.
Feb 13 20:20:50.570634 systemd[1]: Initializing machine ID from random generator.
Feb 13 20:20:50.570648 zram_generator::config[1027]: No configuration found.
Feb 13 20:20:50.570662 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:20:50.570676 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:20:50.570694 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:20:50.570707 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:20:50.570722 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:20:50.570736 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:20:50.570750 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:20:50.570764 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:20:50.570780 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:20:50.570798 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:20:50.570812 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:20:50.570826 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:20:50.570840 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:20:50.570854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:20:50.570868 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:20:50.570882 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:20:50.570896 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:20:50.570914 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:20:50.570928 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:20:50.570942 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:20:50.570955 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:20:50.570969 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:20:50.570983 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:20:50.571002 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:20:50.571016 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:20:50.571030 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:20:50.571048 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:20:50.571062 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:20:50.571077 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:20:50.571092 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:20:50.571107 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:20:50.571121 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:20:50.571136 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:20:50.571154 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:20:50.571169 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:20:50.571183 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:20:50.571198 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:20:50.571213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:50.571231 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:20:50.571245 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:20:50.571279 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:20:50.571295 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:20:50.571310 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:20:50.571329 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:20:50.571344 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:20:50.571383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:20:50.571420 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:20:50.571436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:20:50.571451 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:20:50.571465 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:20:50.571481 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:20:50.571496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:20:50.571511 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:20:50.571525 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:20:50.571544 kernel: ACPI: bus type drm_connector registered
Feb 13 20:20:50.571557 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:20:50.571572 kernel: loop: module loaded
Feb 13 20:20:50.571587 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:20:50.571602 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:20:50.571616 kernel: fuse: init (API version 7.39)
Feb 13 20:20:50.571629 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:20:50.571644 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:20:50.571689 systemd-journald[1114]: Collecting audit messages is disabled.
Feb 13 20:20:50.571721 systemd-journald[1114]: Journal started
Feb 13 20:20:50.571747 systemd-journald[1114]: Runtime Journal (/run/log/journal/9123b0ac844e4f8d9230a46a5383a484) is 8.0M, max 148.7M, 140.7M free.
Feb 13 20:20:49.346581 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:20:49.375640 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 20:20:49.376304 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:20:50.593416 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:20:50.625407 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:20:50.650389 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:20:50.673746 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:20:50.673858 systemd[1]: Stopped verity-setup.service.
Feb 13 20:20:50.699516 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:20:50.710430 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:20:50.721189 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:20:50.731833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:20:50.742778 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:20:50.752791 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:20:50.762784 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:20:50.772733 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:20:50.782984 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:20:50.794976 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:20:50.806901 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:20:50.807167 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:20:50.818945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:20:50.819204 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:20:50.830893 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:20:50.831144 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:20:50.841931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:20:50.842215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:20:50.853938 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:20:50.854212 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:20:50.864924 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:20:50.865191 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:20:50.875893 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:20:50.885874 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:20:50.897911 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:20:50.910871 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:20:50.937910 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:20:50.963568 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:20:50.982562 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:20:50.992519 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:20:50.992616 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:20:51.003858 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:20:51.027645 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:20:51.050604 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:20:51.060711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:20:51.067835 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:20:51.085078 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:20:51.098561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:20:51.106687 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:20:51.117900 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:20:51.130858 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:20:51.137621 systemd-journald[1114]: Time spent on flushing to /var/log/journal/9123b0ac844e4f8d9230a46a5383a484 is 73.776ms for 933 entries. Feb 13 20:20:51.137621 systemd-journald[1114]: System Journal (/var/log/journal/9123b0ac844e4f8d9230a46a5383a484) is 8.0M, max 584.8M, 576.8M free. Feb 13 20:20:51.245659 systemd-journald[1114]: Received client request to flush runtime journal. Feb 13 20:20:51.245730 kernel: loop0: detected capacity change from 0 to 140768 Feb 13 20:20:51.157620 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 20:20:51.180672 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 20:20:51.201016 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 20:20:51.218852 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 20:20:51.230729 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 20:20:51.248138 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 20:20:51.261512 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 20:20:51.273043 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 20:20:51.285551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 20:20:51.303559 systemd-tmpfiles[1146]: ACLs are not supported, ignoring. Feb 13 20:20:51.304135 systemd-tmpfiles[1146]: ACLs are not supported, ignoring. Feb 13 20:20:51.309962 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Feb 13 20:20:51.331842 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 20:20:51.344667 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 20:20:51.361414 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 20:20:51.386292 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 20:20:51.397555 udevadm[1148]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 20:20:51.416784 kernel: loop1: detected capacity change from 0 to 142488 Feb 13 20:20:51.422690 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 20:20:51.425869 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 20:20:51.476642 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 20:20:51.500225 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 20:20:51.539418 kernel: loop2: detected capacity change from 0 to 205544 Feb 13 20:20:51.577682 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Feb 13 20:20:51.577744 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Feb 13 20:20:51.590933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 20:20:51.625392 kernel: loop3: detected capacity change from 0 to 54824 Feb 13 20:20:51.717416 kernel: loop4: detected capacity change from 0 to 140768 Feb 13 20:20:51.767403 kernel: loop5: detected capacity change from 0 to 142488 Feb 13 20:20:51.828424 kernel: loop6: detected capacity change from 0 to 205544 Feb 13 20:20:51.886540 kernel: loop7: detected capacity change from 0 to 54824 Feb 13 20:20:51.919522 (sd-merge)[1172]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. 
Feb 13 20:20:51.920801 (sd-merge)[1172]: Merged extensions into '/usr'. Feb 13 20:20:51.935545 systemd[1]: Reloading requested from client PID 1145 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 20:20:51.935576 systemd[1]: Reloading... Feb 13 20:20:52.105396 zram_generator::config[1196]: No configuration found. Feb 13 20:20:52.315222 ldconfig[1140]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 20:20:52.397477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:20:52.519790 systemd[1]: Reloading finished in 582 ms. Feb 13 20:20:52.552769 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 20:20:52.563197 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 20:20:52.588597 systemd[1]: Starting ensure-sysext.service... Feb 13 20:20:52.608571 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 20:20:52.628860 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)... Feb 13 20:20:52.629056 systemd[1]: Reloading... Feb 13 20:20:52.667189 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 20:20:52.669439 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 20:20:52.671145 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 20:20:52.671783 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. Feb 13 20:20:52.671933 systemd-tmpfiles[1240]: ACLs are not supported, ignoring. 
Feb 13 20:20:52.681448 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:20:52.681467 systemd-tmpfiles[1240]: Skipping /boot Feb 13 20:20:52.725918 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 20:20:52.728571 systemd-tmpfiles[1240]: Skipping /boot Feb 13 20:20:52.818390 zram_generator::config[1267]: No configuration found. Feb 13 20:20:52.992184 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 20:20:53.059198 systemd[1]: Reloading finished in 429 ms. Feb 13 20:20:53.081296 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 20:20:53.098143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 20:20:53.124637 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Feb 13 20:20:53.146612 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 20:20:53.165323 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 20:20:53.187636 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 20:20:53.204644 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 20:20:53.224715 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 20:20:53.244830 augenrules[1330]: No rules Feb 13 20:20:53.246472 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 20:20:53.258105 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:20:53.292152 systemd-udevd[1326]: Using default interface naming scheme 'v255'. 
Feb 13 20:20:53.294660 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:20:53.295216 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:20:53.308707 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:20:53.325032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:20:53.348586 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:20:53.358891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:20:53.359560 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:20:53.365710 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:20:53.378726 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:20:53.380462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:20:53.392502 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:20:53.407226 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:20:53.419073 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 20:20:53.431769 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:20:53.432037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:20:53.444761 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:20:53.445154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 20:20:53.466751 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:20:53.509144 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:20:53.509658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:20:53.518751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:20:53.542545 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:20:53.563761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:20:53.584602 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:20:53.605614 systemd[1]: Starting setup-oem.service - Setup OEM... Feb 13 20:20:53.613659 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:20:53.624563 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:20:53.635564 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:20:53.656642 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:20:53.667502 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:20:53.667566 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:20:53.670038 systemd[1]: Finished ensure-sysext.service. Feb 13 20:20:53.679083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:20:53.680228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Feb 13 20:20:53.692084 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:20:53.692413 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:20:53.703436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:20:53.703714 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:20:53.716094 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:20:53.716624 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:20:53.729998 systemd-resolved[1324]: Positive Trust Anchors: Feb 13 20:20:53.730038 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:20:53.730117 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:20:53.746846 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:20:53.765265 systemd-resolved[1324]: Defaulting to hostname 'linux'. Feb 13 20:20:53.788391 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:20:53.791498 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:20:53.815587 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Feb 13 20:20:53.816094 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:20:53.816314 systemd[1]: Finished setup-oem.service - Setup OEM. 
Feb 13 20:20:53.847188 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:20:53.857817 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:20:53.869386 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Feb 13 20:20:53.890240 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Feb 13 20:20:53.901513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:20:53.901638 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:20:53.996597 systemd-networkd[1378]: lo: Link UP Feb 13 20:20:53.999851 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1352) Feb 13 20:20:53.996615 systemd-networkd[1378]: lo: Gained carrier Feb 13 20:20:54.014108 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Feb 13 20:20:54.014490 kernel: ACPI: button: Sleep Button [SLPF] Feb 13 20:20:54.019380 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:20:54.026023 systemd-networkd[1378]: Enumeration completed Feb 13 20:20:54.027022 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 20:20:54.028202 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 20:20:54.029722 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:20:54.030374 systemd-networkd[1378]: eth0: Link UP Feb 13 20:20:54.032673 systemd-networkd[1378]: eth0: Gained carrier Feb 13 20:20:54.032831 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 20:20:54.041510 systemd[1]: Reached target network.target - Network. Feb 13 20:20:54.043459 systemd-networkd[1378]: eth0: DHCPv4 address 10.128.0.9/32, gateway 10.128.0.1 acquired from 169.254.169.254 Feb 13 20:20:54.056654 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 20:20:54.078404 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 20:20:54.121002 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:20:54.121881 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:20:54.142915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Feb 13 20:20:54.155286 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:20:54.175624 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:20:54.183656 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:20:54.208588 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:20:54.208859 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:20:54.245224 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:20:54.246731 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:20:54.253603 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:20:54.274945 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:20:54.282208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:20:54.294698 systemd[1]: Reached target sysinit.target - System Initialization. 
Feb 13 20:20:54.305647 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:20:54.316580 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:20:54.327745 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:20:54.337707 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:20:54.348528 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:20:54.359513 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:20:54.359579 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:20:54.368504 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:20:54.378082 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:20:54.389416 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:20:54.401112 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:20:54.412607 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:20:54.423940 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:20:54.434408 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:20:54.444494 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:20:54.452591 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:20:54.452647 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:20:54.457515 systemd[1]: Starting containerd.service - containerd container runtime... 
Feb 13 20:20:54.481235 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:20:54.501675 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:20:54.521571 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:20:54.549608 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:20:54.559552 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:20:54.567053 jq[1431]: false Feb 13 20:20:54.570696 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:20:54.587660 systemd[1]: Started ntpd.service - Network Time Service. Feb 13 20:20:54.608722 extend-filesystems[1432]: Found loop4 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found loop5 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found loop6 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found loop7 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found sda Feb 13 20:20:54.623744 extend-filesystems[1432]: Found sda1 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found sda2 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found sda3 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found usr Feb 13 20:20:54.623744 extend-filesystems[1432]: Found sda4 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found sda6 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found sda7 Feb 13 20:20:54.623744 extend-filesystems[1432]: Found sda9 Feb 13 20:20:54.623744 extend-filesystems[1432]: Checking size of /dev/sda9 Feb 13 20:20:54.852579 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Feb 13 20:20:54.852672 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Feb 13 20:20:54.852713 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1360) Feb 13 20:20:54.609467 systemd[1]: 
Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 20:20:54.853274 extend-filesystems[1432]: Resized partition /dev/sda9 Feb 13 20:20:54.888098 coreos-metadata[1429]: Feb 13 20:20:54.644 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Feb 13 20:20:54.888098 coreos-metadata[1429]: Feb 13 20:20:54.648 INFO Fetch successful Feb 13 20:20:54.888098 coreos-metadata[1429]: Feb 13 20:20:54.649 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Feb 13 20:20:54.888098 coreos-metadata[1429]: Feb 13 20:20:54.654 INFO Fetch successful Feb 13 20:20:54.888098 coreos-metadata[1429]: Feb 13 20:20:54.654 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Feb 13 20:20:54.888098 coreos-metadata[1429]: Feb 13 20:20:54.659 INFO Fetch successful Feb 13 20:20:54.888098 coreos-metadata[1429]: Feb 13 20:20:54.659 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Feb 13 20:20:54.888098 coreos-metadata[1429]: Feb 13 20:20:54.664 INFO Fetch successful Feb 13 20:20:54.703157 dbus-daemon[1430]: [system] SELinux support is enabled Feb 13 20:20:54.617630 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: ---------------------------------------------------- Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: corporation. Support and training for ntp-4 are Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: available at https://www.nwtime.org/support Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: ---------------------------------------------------- Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: proto: precision = 0.089 usec (-23) Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: basedate set to 2025-02-01 Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: gps base set to 2025-02-02 (week 2352) Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: Listen normally on 3 eth0 10.128.0.9:123 Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: Listen normally on 4 lo [::1]:123 Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: bind(21) AF_INET6 fe80::4001:aff:fe80:9%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:9%2#123 Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: failed to init interface for address fe80::4001:aff:fe80:9%2 Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: Listening on routing socket on fd #21 for interface updates Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:20:54.889271 ntpd[1437]: 13 Feb 20:20:54 ntpd[1437]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:20:54.895890 
extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:20:54.895890 extend-filesystems[1456]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Feb 13 20:20:54.895890 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 2 Feb 13 20:20:54.895890 extend-filesystems[1456]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Feb 13 20:20:54.723560 dbus-daemon[1430]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1378 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 20:20:54.643620 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:20:54.948933 extend-filesystems[1432]: Resized filesystem in /dev/sda9 Feb 13 20:20:54.729867 ntpd[1437]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:30:53 UTC 2025 (1): Starting Feb 13 20:20:54.677669 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 20:20:54.729904 ntpd[1437]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Feb 13 20:20:54.685617 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Feb 13 20:20:54.729920 ntpd[1437]: ---------------------------------------------------- Feb 13 20:20:54.687051 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:20:54.962979 update_engine[1454]: I20250213 20:20:54.863952 1454 main.cc:92] Flatcar Update Engine starting Feb 13 20:20:54.962979 update_engine[1454]: I20250213 20:20:54.871075 1454 update_check_scheduler.cc:74] Next update check in 10m26s Feb 13 20:20:54.729938 ntpd[1437]: ntp-4 is maintained by Network Time Foundation, Feb 13 20:20:54.695528 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:20:54.729955 ntpd[1437]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Feb 13 20:20:54.709542 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 20:20:54.965152 jq[1457]: true Feb 13 20:20:54.729972 ntpd[1437]: corporation. Support and training for ntp-4 are Feb 13 20:20:54.727578 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:20:54.729992 ntpd[1437]: available at https://www.nwtime.org/support Feb 13 20:20:54.795060 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:20:54.730008 ntpd[1437]: ---------------------------------------------------- Feb 13 20:20:54.796450 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:20:54.732304 ntpd[1437]: proto: precision = 0.089 usec (-23) Feb 13 20:20:54.797029 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:20:54.732775 ntpd[1437]: basedate set to 2025-02-01 Feb 13 20:20:54.797486 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:20:54.732800 ntpd[1437]: gps base set to 2025-02-02 (week 2352) Feb 13 20:20:54.864954 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:20:54.742376 ntpd[1437]: Listen and drop on 0 v6wildcard [::]:123 Feb 13 20:20:54.866425 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:20:54.747445 ntpd[1437]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Feb 13 20:20:54.883074 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:20:54.747817 ntpd[1437]: Listen normally on 2 lo 127.0.0.1:123 Feb 13 20:20:54.884469 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 20:20:54.747875 ntpd[1437]: Listen normally on 3 eth0 10.128.0.9:123 Feb 13 20:20:54.950411 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:20:54.747935 ntpd[1437]: Listen normally on 4 lo [::1]:123 Feb 13 20:20:54.950471 systemd-logind[1452]: Watching system buttons on /dev/input/event2 (Sleep Button) Feb 13 20:20:54.748002 ntpd[1437]: bind(21) AF_INET6 fe80::4001:aff:fe80:9%2#123 flags 0x11 failed: Cannot assign requested address Feb 13 20:20:54.950512 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:20:54.748031 ntpd[1437]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:9%2#123 Feb 13 20:20:54.953469 systemd-logind[1452]: New seat seat0. Feb 13 20:20:54.748056 ntpd[1437]: failed to init interface for address fe80::4001:aff:fe80:9%2 Feb 13 20:20:54.955593 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:20:54.748114 ntpd[1437]: Listening on routing socket on fd #21 for interface updates Feb 13 20:20:54.969790 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:20:54.751750 ntpd[1437]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:20:54.751796 ntpd[1437]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Feb 13 20:20:54.982326 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 20:20:54.995752 jq[1467]: true Feb 13 20:20:54.998910 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:20:55.064248 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:20:55.083446 tar[1466]: linux-amd64/helm Feb 13 20:20:55.085340 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Feb 13 20:20:55.097633 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:20:55.098469 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:20:55.098738 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:20:55.119781 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 20:20:55.127978 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:20:55.128420 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:20:55.150880 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:20:55.159301 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:20:55.170905 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:20:55.201010 systemd[1]: Starting sshkeys.service... Feb 13 20:20:55.292447 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:20:55.314662 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Feb 13 20:20:55.503553 coreos-metadata[1503]: Feb 13 20:20:55.503 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Feb 13 20:20:55.507625 coreos-metadata[1503]: Feb 13 20:20:55.507 INFO Fetch failed with 404: resource not found Feb 13 20:20:55.507625 coreos-metadata[1503]: Feb 13 20:20:55.507 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Feb 13 20:20:55.508586 systemd-networkd[1378]: eth0: Gained IPv6LL Feb 13 20:20:55.519024 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:20:55.519502 coreos-metadata[1503]: Feb 13 20:20:55.519 INFO Fetch successful Feb 13 20:20:55.519571 coreos-metadata[1503]: Feb 13 20:20:55.519 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Feb 13 20:20:55.523386 coreos-metadata[1503]: Feb 13 20:20:55.522 INFO Fetch failed with 404: resource not found Feb 13 20:20:55.523386 coreos-metadata[1503]: Feb 13 20:20:55.522 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Feb 13 20:20:55.524443 coreos-metadata[1503]: Feb 13 20:20:55.524 INFO Fetch failed with 404: resource not found Feb 13 20:20:55.524592 coreos-metadata[1503]: Feb 13 20:20:55.524 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Feb 13 20:20:55.527842 coreos-metadata[1503]: Feb 13 20:20:55.527 INFO Fetch successful Feb 13 20:20:55.532718 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:20:55.537259 unknown[1503]: wrote ssh authorized keys file for user: core Feb 13 20:20:55.545630 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:20:55.552861 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:20:55.571906 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Feb 13 20:20:55.587962 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Feb 13 20:20:55.637589 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 20:20:55.648992 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 20:20:55.655653 dbus-daemon[1430]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1499 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 20:20:55.672031 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 20:20:55.687703 init.sh[1518]: + '[' -e /etc/default/instance_configs.cfg.template ']' Feb 13 20:20:55.704649 init.sh[1518]: + echo -e '[InstanceSetup]\nset_host_keys = false' Feb 13 20:20:55.704649 init.sh[1518]: + /usr/bin/google_instance_setup Feb 13 20:20:55.704844 update-ssh-keys[1519]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:20:55.709822 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:20:55.728350 systemd[1]: Finished sshkeys.service. Feb 13 20:20:55.746652 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:20:55.767065 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:20:55.789038 systemd[1]: Started sshd@0-10.128.0.9:22-139.178.89.65:59394.service - OpenSSH per-connection server daemon (139.178.89.65:59394). Feb 13 20:20:55.802134 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:20:55.809715 locksmithd[1500]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:20:55.883292 polkitd[1527]: Started polkitd version 121 Feb 13 20:20:55.898797 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:20:55.900039 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Feb 13 20:20:55.904565 polkitd[1527]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 20:20:55.904798 polkitd[1527]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 20:20:55.906147 polkitd[1527]: Finished loading, compiling and executing 2 rules Feb 13 20:20:55.907272 dbus-daemon[1430]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 20:20:55.908623 polkitd[1527]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 20:20:55.915501 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 20:20:55.934703 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:20:55.984842 systemd-hostnamed[1499]: Hostname set to (transient) Feb 13 20:20:55.986580 systemd-resolved[1324]: System hostname changed to 'ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal'. Feb 13 20:20:55.997711 containerd[1468]: time="2025-02-13T20:20:55.993430785Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:20:56.003963 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:20:56.025855 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:20:56.042832 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:20:56.053088 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:20:56.134825 containerd[1468]: time="2025-02-13T20:20:56.134723216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:56.142429 containerd[1468]: time="2025-02-13T20:20:56.142335649Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:56.143079 containerd[1468]: time="2025-02-13T20:20:56.142577404Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:20:56.143079 containerd[1468]: time="2025-02-13T20:20:56.142628443Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:20:56.143079 containerd[1468]: time="2025-02-13T20:20:56.142870914Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:20:56.143079 containerd[1468]: time="2025-02-13T20:20:56.142901899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:56.145908 containerd[1468]: time="2025-02-13T20:20:56.145041487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:56.145908 containerd[1468]: time="2025-02-13T20:20:56.145081333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:56.148407 containerd[1468]: time="2025-02-13T20:20:56.147320979Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:56.148407 containerd[1468]: time="2025-02-13T20:20:56.147881349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:20:56.148407 containerd[1468]: time="2025-02-13T20:20:56.147931194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:56.148407 containerd[1468]: time="2025-02-13T20:20:56.147952039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:56.150331 containerd[1468]: time="2025-02-13T20:20:56.148118084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:56.150331 containerd[1468]: time="2025-02-13T20:20:56.149782368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:20:56.150331 containerd[1468]: time="2025-02-13T20:20:56.150026297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:20:56.150331 containerd[1468]: time="2025-02-13T20:20:56.150056362Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:20:56.150331 containerd[1468]: time="2025-02-13T20:20:56.150206728Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:20:56.150331 containerd[1468]: time="2025-02-13T20:20:56.150286155Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:20:56.159904 containerd[1468]: time="2025-02-13T20:20:56.159814741Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:20:56.160168 containerd[1468]: time="2025-02-13T20:20:56.160084450Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Feb 13 20:20:56.160668 containerd[1468]: time="2025-02-13T20:20:56.160274279Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:20:56.160668 containerd[1468]: time="2025-02-13T20:20:56.160317295Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:20:56.160668 containerd[1468]: time="2025-02-13T20:20:56.160345641Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:20:56.160668 containerd[1468]: time="2025-02-13T20:20:56.160587859Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:20:56.162022 containerd[1468]: time="2025-02-13T20:20:56.161983483Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162194180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162229139Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162252143Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162278740Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162301811Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162324509Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162379468Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162406700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162430814Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162555084Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:20:56.162651 containerd[1468]: time="2025-02-13T20:20:56.162584801Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:20:56.163735 containerd[1468]: time="2025-02-13T20:20:56.163420628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.163735 containerd[1468]: time="2025-02-13T20:20:56.163466420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.163735 containerd[1468]: time="2025-02-13T20:20:56.163491645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.163735 containerd[1468]: time="2025-02-13T20:20:56.163528837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Feb 13 20:20:56.163735 containerd[1468]: time="2025-02-13T20:20:56.163554105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.163735 containerd[1468]: time="2025-02-13T20:20:56.163680861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.163735 containerd[1468]: time="2025-02-13T20:20:56.163706806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163758406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163784542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163812802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163835694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163858180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163881857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163909412Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163946638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163969014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.163988388Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.164071808Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.164108874Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:20:56.165113 containerd[1468]: time="2025-02-13T20:20:56.164134960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:20:56.168564 containerd[1468]: time="2025-02-13T20:20:56.164160772Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:20:56.168564 containerd[1468]: time="2025-02-13T20:20:56.164191450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:20:56.168564 containerd[1468]: time="2025-02-13T20:20:56.164218110Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:20:56.168564 containerd[1468]: time="2025-02-13T20:20:56.164238496Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:20:56.168564 containerd[1468]: time="2025-02-13T20:20:56.165048304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 20:20:56.168814 containerd[1468]: time="2025-02-13T20:20:56.168717333Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:20:56.169083 containerd[1468]: time="2025-02-13T20:20:56.168829082Z" level=info msg="Connect containerd service" Feb 13 20:20:56.169083 containerd[1468]: time="2025-02-13T20:20:56.168896104Z" level=info msg="using legacy CRI server" Feb 13 20:20:56.169083 containerd[1468]: time="2025-02-13T20:20:56.168912268Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:20:56.169232 containerd[1468]: time="2025-02-13T20:20:56.169148311Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.176302936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177494737Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177573213Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177639845Z" level=info msg="Start subscribing containerd event" Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177693235Z" level=info msg="Start recovering state" Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177787523Z" level=info msg="Start event monitor" Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177812597Z" level=info msg="Start snapshots syncer" Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177828747Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177843745Z" level=info msg="Start streaming server" Feb 13 20:20:56.179388 containerd[1468]: time="2025-02-13T20:20:56.177929640Z" level=info msg="containerd successfully booted in 0.193574s" Feb 13 20:20:56.178101 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 20:20:56.282616 sshd[1542]: Accepted publickey for core from 139.178.89.65 port 59394 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:20:56.284809 sshd[1542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:56.311155 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:20:56.329821 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:20:56.346890 systemd-logind[1452]: New session 1 of user core. Feb 13 20:20:56.384400 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:20:56.407073 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 20:20:56.449571 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:20:56.477670 tar[1466]: linux-amd64/LICENSE Feb 13 20:20:56.477670 tar[1466]: linux-amd64/README.md Feb 13 20:20:56.515091 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 20:20:56.773600 systemd[1565]: Queued start job for default target default.target. Feb 13 20:20:56.781010 systemd[1565]: Created slice app.slice - User Application Slice. Feb 13 20:20:56.781066 systemd[1565]: Reached target paths.target - Paths. Feb 13 20:20:56.781106 systemd[1565]: Reached target timers.target - Timers. Feb 13 20:20:56.788730 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:20:56.822862 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:20:56.823312 systemd[1565]: Reached target sockets.target - Sockets. Feb 13 20:20:56.823636 systemd[1565]: Reached target basic.target - Basic System. Feb 13 20:20:56.823743 systemd[1565]: Reached target default.target - Main User Target. Feb 13 20:20:56.823809 systemd[1565]: Startup finished in 358ms. Feb 13 20:20:56.824273 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:20:56.843847 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:20:56.869835 instance-setup[1530]: INFO Running google_set_multiqueue. Feb 13 20:20:56.891784 instance-setup[1530]: INFO Set channels for eth0 to 2. Feb 13 20:20:56.898263 instance-setup[1530]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Feb 13 20:20:56.900213 instance-setup[1530]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Feb 13 20:20:56.900848 instance-setup[1530]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. 
Feb 13 20:20:56.902587 instance-setup[1530]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Feb 13 20:20:56.903157 instance-setup[1530]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Feb 13 20:20:56.904911 instance-setup[1530]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Feb 13 20:20:56.905447 instance-setup[1530]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Feb 13 20:20:56.907263 instance-setup[1530]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Feb 13 20:20:56.916822 instance-setup[1530]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 20:20:56.921810 instance-setup[1530]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Feb 13 20:20:56.924016 instance-setup[1530]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Feb 13 20:20:56.924259 instance-setup[1530]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Feb 13 20:20:56.954877 init.sh[1518]: + /usr/bin/google_metadata_script_runner --script-type startup Feb 13 20:20:57.098792 systemd[1]: Started sshd@1-10.128.0.9:22-139.178.89.65:54260.service - OpenSSH per-connection server daemon (139.178.89.65:54260). Feb 13 20:20:57.238337 startup-script[1607]: INFO Starting startup scripts. Feb 13 20:20:57.246945 startup-script[1607]: INFO No startup scripts found in metadata. Feb 13 20:20:57.247035 startup-script[1607]: INFO Finished running startup scripts. Feb 13 20:20:57.276328 init.sh[1518]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Feb 13 20:20:57.276328 init.sh[1518]: + daemon_pids=() Feb 13 20:20:57.276615 init.sh[1518]: + for d in accounts clock_skew network Feb 13 20:20:57.277167 init.sh[1518]: + daemon_pids+=($!) 
Feb 13 20:20:57.277167 init.sh[1518]: + for d in accounts clock_skew network Feb 13 20:20:57.277330 init.sh[1614]: + /usr/bin/google_accounts_daemon Feb 13 20:20:57.277753 init.sh[1518]: + daemon_pids+=($!) Feb 13 20:20:57.277753 init.sh[1518]: + for d in accounts clock_skew network Feb 13 20:20:57.277753 init.sh[1518]: + daemon_pids+=($!) Feb 13 20:20:57.277753 init.sh[1518]: + NOTIFY_SOCKET=/run/systemd/notify Feb 13 20:20:57.277753 init.sh[1518]: + /usr/bin/systemd-notify --ready Feb 13 20:20:57.281398 init.sh[1615]: + /usr/bin/google_clock_skew_daemon Feb 13 20:20:57.283292 init.sh[1616]: + /usr/bin/google_network_daemon Feb 13 20:20:57.312304 systemd[1]: Started oem-gce.service - GCE Linux Agent. Feb 13 20:20:57.323567 init.sh[1518]: + wait -n 1614 1615 1616 Feb 13 20:20:57.500415 sshd[1610]: Accepted publickey for core from 139.178.89.65 port 54260 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:20:57.503499 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:57.520275 systemd-logind[1452]: New session 2 of user core. Feb 13 20:20:57.524627 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:20:57.731693 ntpd[1437]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:9%2]:123 Feb 13 20:20:57.735232 ntpd[1437]: 13 Feb 20:20:57 ntpd[1437]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:9%2]:123 Feb 13 20:20:57.732629 sshd[1610]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:57.747058 systemd[1]: sshd@1-10.128.0.9:22-139.178.89.65:54260.service: Deactivated successfully. Feb 13 20:20:57.751279 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 20:20:57.754711 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Feb 13 20:20:57.757868 systemd-logind[1452]: Removed session 2. 
Feb 13 20:20:57.796706 systemd[1]: Started sshd@2-10.128.0.9:22-139.178.89.65:54276.service - OpenSSH per-connection server daemon (139.178.89.65:54276). Feb 13 20:20:57.823260 google-networking[1616]: INFO Starting Google Networking daemon. Feb 13 20:20:57.872329 groupadd[1629]: group added to /etc/group: name=google-sudoers, GID=1000 Feb 13 20:20:57.873124 google-clock-skew[1615]: INFO Starting Google Clock Skew daemon. Feb 13 20:20:57.879730 groupadd[1629]: group added to /etc/gshadow: name=google-sudoers Feb 13 20:20:57.881940 google-clock-skew[1615]: INFO Clock drift token has changed: 0. Feb 13 20:20:57.939824 groupadd[1629]: new group: name=google-sudoers, GID=1000 Feb 13 20:20:57.956634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:20:57.970698 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:20:57.973962 (kubelet)[1645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:20:57.982439 systemd[1]: Startup finished in 1.120s (kernel) + 9.589s (initrd) + 9.677s (userspace) = 20.387s. Feb 13 20:20:57.997249 google-accounts[1614]: INFO Starting Google Accounts daemon. Feb 13 20:20:58.030961 google-accounts[1614]: WARNING OS Login not installed. Feb 13 20:20:58.035139 google-accounts[1614]: INFO Creating a new user account for 0. Feb 13 20:20:58.041771 init.sh[1651]: useradd: invalid user name '0': use --badname to ignore Feb 13 20:20:58.042655 google-accounts[1614]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. 
Feb 13 20:20:58.129180 sshd[1630]: Accepted publickey for core from 139.178.89.65 port 54276 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:20:58.132033 sshd[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:20:58.141143 systemd-logind[1452]: New session 3 of user core. Feb 13 20:20:58.145592 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:20:58.347139 sshd[1630]: pam_unix(sshd:session): session closed for user core Feb 13 20:20:58.356052 systemd[1]: sshd@2-10.128.0.9:22-139.178.89.65:54276.service: Deactivated successfully. Feb 13 20:20:58.359719 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 20:20:58.361653 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Feb 13 20:20:58.363995 systemd-logind[1452]: Removed session 3. Feb 13 20:20:58.000083 systemd-resolved[1324]: Clock change detected. Flushing caches. Feb 13 20:20:58.014867 systemd-journald[1114]: Time jumped backwards, rotating. Feb 13 20:20:58.000669 google-clock-skew[1615]: INFO Synced system time with hardware clock. Feb 13 20:20:58.360374 kubelet[1645]: E0213 20:20:58.360006 1645 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:20:58.363554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:20:58.363869 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:20:58.364371 systemd[1]: kubelet.service: Consumed 1.254s CPU time. Feb 13 20:21:07.931970 systemd[1]: Started sshd@3-10.128.0.9:22-139.178.89.65:47706.service - OpenSSH per-connection server daemon (139.178.89.65:47706). 
Feb 13 20:21:08.219407 sshd[1665]: Accepted publickey for core from 139.178.89.65 port 47706 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:21:08.221563 sshd[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:08.228259 systemd-logind[1452]: New session 4 of user core. Feb 13 20:21:08.235191 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:21:08.392324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:21:08.398616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:21:08.436260 sshd[1665]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:08.441231 systemd[1]: sshd@3-10.128.0.9:22-139.178.89.65:47706.service: Deactivated successfully. Feb 13 20:21:08.444025 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 20:21:08.446524 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:21:08.449676 systemd-logind[1452]: Removed session 4. Feb 13 20:21:08.494343 systemd[1]: Started sshd@4-10.128.0.9:22-139.178.89.65:47720.service - OpenSSH per-connection server daemon (139.178.89.65:47720). Feb 13 20:21:08.716332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:21:08.730598 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:21:08.791406 sshd[1675]: Accepted publickey for core from 139.178.89.65 port 47720 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:21:08.794203 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:08.795591 kubelet[1682]: E0213 20:21:08.795449 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:21:08.802574 systemd-logind[1452]: New session 5 of user core.
Feb 13 20:21:08.803472 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:21:08.803733 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:21:08.810219 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 20:21:08.999765 sshd[1675]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:09.005740 systemd[1]: sshd@4-10.128.0.9:22-139.178.89.65:47720.service: Deactivated successfully.
Feb 13 20:21:09.008604 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 20:21:09.009627 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
Feb 13 20:21:09.011095 systemd-logind[1452]: Removed session 5.
Feb 13 20:21:09.055318 systemd[1]: Started sshd@5-10.128.0.9:22-139.178.89.65:47736.service - OpenSSH per-connection server daemon (139.178.89.65:47736).
Feb 13 20:21:09.343158 sshd[1694]: Accepted publickey for core from 139.178.89.65 port 47736 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:21:09.345187 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:09.351893 systemd-logind[1452]: New session 6 of user core.
Feb 13 20:21:09.357176 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:21:09.556405 sshd[1694]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:09.562315 systemd[1]: sshd@5-10.128.0.9:22-139.178.89.65:47736.service: Deactivated successfully.
Feb 13 20:21:09.564893 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 20:21:09.565890 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
Feb 13 20:21:09.567465 systemd-logind[1452]: Removed session 6.
Feb 13 20:21:09.611339 systemd[1]: Started sshd@6-10.128.0.9:22-139.178.89.65:47740.service - OpenSSH per-connection server daemon (139.178.89.65:47740).
Feb 13 20:21:09.898525 sshd[1701]: Accepted publickey for core from 139.178.89.65 port 47740 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:21:09.900410 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:09.907195 systemd-logind[1452]: New session 7 of user core.
Feb 13 20:21:09.910183 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:21:10.091259 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 20:21:10.091831 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:21:10.108260 sudo[1704]: pam_unix(sudo:session): session closed for user root
Feb 13 20:21:10.151093 sshd[1701]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:10.156485 systemd[1]: sshd@6-10.128.0.9:22-139.178.89.65:47740.service: Deactivated successfully.
Feb 13 20:21:10.159348 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:21:10.161502 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:21:10.163174 systemd-logind[1452]: Removed session 7.
Feb 13 20:21:10.210340 systemd[1]: Started sshd@7-10.128.0.9:22-139.178.89.65:47744.service - OpenSSH per-connection server daemon (139.178.89.65:47744).
Feb 13 20:21:10.495517 sshd[1709]: Accepted publickey for core from 139.178.89.65 port 47744 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:21:10.497692 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:10.504635 systemd-logind[1452]: New session 8 of user core.
Feb 13 20:21:10.514147 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:21:10.676977 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 20:21:10.677550 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:21:10.682569 sudo[1713]: pam_unix(sudo:session): session closed for user root
Feb 13 20:21:10.696470 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 13 20:21:10.697063 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:21:10.714336 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Feb 13 20:21:10.718718 auditctl[1716]: No rules
Feb 13 20:21:10.719293 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 20:21:10.719600 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Feb 13 20:21:10.725095 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:21:10.771771 augenrules[1734]: No rules
Feb 13 20:21:10.774359 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:21:10.776602 sudo[1712]: pam_unix(sudo:session): session closed for user root
Feb 13 20:21:10.820364 sshd[1709]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:10.826551 systemd[1]: sshd@7-10.128.0.9:22-139.178.89.65:47744.service: Deactivated successfully.
Feb 13 20:21:10.829248 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:21:10.830213 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:21:10.831681 systemd-logind[1452]: Removed session 8.
Feb 13 20:21:10.872714 systemd[1]: Started sshd@8-10.128.0.9:22-139.178.89.65:47752.service - OpenSSH per-connection server daemon (139.178.89.65:47752).
Feb 13 20:21:11.168244 sshd[1742]: Accepted publickey for core from 139.178.89.65 port 47752 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:21:11.170287 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:11.177098 systemd-logind[1452]: New session 9 of user core.
Feb 13 20:21:11.184198 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:21:11.346671 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:21:11.347271 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:21:11.792368 systemd[1]: Starting docker.service - Docker Application Container Engine...
Feb 13 20:21:11.796274 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Feb 13 20:21:12.242172 dockerd[1761]: time="2025-02-13T20:21:12.242014061Z" level=info msg="Starting up"
Feb 13 20:21:12.356817 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport799601052-merged.mount: Deactivated successfully.
Feb 13 20:21:12.392703 dockerd[1761]: time="2025-02-13T20:21:12.392640617Z" level=info msg="Loading containers: start."
Feb 13 20:21:12.555085 kernel: Initializing XFRM netlink socket
Feb 13 20:21:12.665742 systemd-networkd[1378]: docker0: Link UP
Feb 13 20:21:12.689498 dockerd[1761]: time="2025-02-13T20:21:12.689424080Z" level=info msg="Loading containers: done."
Feb 13 20:21:12.708468 dockerd[1761]: time="2025-02-13T20:21:12.708397094Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Feb 13 20:21:12.708657 dockerd[1761]: time="2025-02-13T20:21:12.708543802Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Feb 13 20:21:12.708744 dockerd[1761]: time="2025-02-13T20:21:12.708717038Z" level=info msg="Daemon has completed initialization"
Feb 13 20:21:12.747965 dockerd[1761]: time="2025-02-13T20:21:12.747811142Z" level=info msg="API listen on /run/docker.sock"
Feb 13 20:21:12.748143 systemd[1]: Started docker.service - Docker Application Container Engine.
Feb 13 20:21:13.682568 containerd[1468]: time="2025-02-13T20:21:13.682517458Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\""
Feb 13 20:21:14.161658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3650625607.mount: Deactivated successfully.
Feb 13 20:21:15.629312 containerd[1468]: time="2025-02-13T20:21:15.629232275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:15.631058 containerd[1468]: time="2025-02-13T20:21:15.630995608Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27983216"
Feb 13 20:21:15.631991 containerd[1468]: time="2025-02-13T20:21:15.631884519Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:15.635885 containerd[1468]: time="2025-02-13T20:21:15.635805475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:15.637844 containerd[1468]: time="2025-02-13T20:21:15.637334815Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 1.954758965s"
Feb 13 20:21:15.637844 containerd[1468]: time="2025-02-13T20:21:15.637387413Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\""
Feb 13 20:21:15.640536 containerd[1468]: time="2025-02-13T20:21:15.640393205Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\""
Feb 13 20:21:17.038466 containerd[1468]: time="2025-02-13T20:21:17.038390069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:17.040212 containerd[1468]: time="2025-02-13T20:21:17.040136351Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24710127"
Feb 13 20:21:17.041480 containerd[1468]: time="2025-02-13T20:21:17.041401715Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:17.045492 containerd[1468]: time="2025-02-13T20:21:17.045412011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:17.047153 containerd[1468]: time="2025-02-13T20:21:17.046904242Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 1.406463794s"
Feb 13 20:21:17.047153 containerd[1468]: time="2025-02-13T20:21:17.046980280Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\""
Feb 13 20:21:17.047936 containerd[1468]: time="2025-02-13T20:21:17.047878834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\""
Feb 13 20:21:18.154303 containerd[1468]: time="2025-02-13T20:21:18.154230849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:18.156049 containerd[1468]: time="2025-02-13T20:21:18.155983002Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18654341"
Feb 13 20:21:18.156905 containerd[1468]: time="2025-02-13T20:21:18.156825265Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:18.160764 containerd[1468]: time="2025-02-13T20:21:18.160645097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:18.163116 containerd[1468]: time="2025-02-13T20:21:18.163045674Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.115106038s"
Feb 13 20:21:18.163116 containerd[1468]: time="2025-02-13T20:21:18.163090186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\""
Feb 13 20:21:18.164056 containerd[1468]: time="2025-02-13T20:21:18.163801994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 20:21:19.054726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 20:21:19.070140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:21:19.256411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3435726674.mount: Deactivated successfully.
Feb 13 20:21:19.340946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:21:19.353559 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:21:19.434944 kubelet[1975]: E0213 20:21:19.433537 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:21:19.438876 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:21:19.439358 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:21:20.005901 containerd[1468]: time="2025-02-13T20:21:20.005816960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:20.007131 containerd[1468]: time="2025-02-13T20:21:20.007063361Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30231003"
Feb 13 20:21:20.008536 containerd[1468]: time="2025-02-13T20:21:20.008480481Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:20.012171 containerd[1468]: time="2025-02-13T20:21:20.012094498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:20.013724 containerd[1468]: time="2025-02-13T20:21:20.013673059Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.849825281s"
Feb 13 20:21:20.013834 containerd[1468]: time="2025-02-13T20:21:20.013730177Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 20:21:20.014935 containerd[1468]: time="2025-02-13T20:21:20.014680853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Feb 13 20:21:21.673192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662614011.mount: Deactivated successfully.
Feb 13 20:21:22.695572 containerd[1468]: time="2025-02-13T20:21:22.695497230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:22.697340 containerd[1468]: time="2025-02-13T20:21:22.696942460Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Feb 13 20:21:22.698832 containerd[1468]: time="2025-02-13T20:21:22.698745405Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:22.702415 containerd[1468]: time="2025-02-13T20:21:22.702341849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:22.704143 containerd[1468]: time="2025-02-13T20:21:22.703968357Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.689245234s"
Feb 13 20:21:22.704143 containerd[1468]: time="2025-02-13T20:21:22.704015160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Feb 13 20:21:22.705018 containerd[1468]: time="2025-02-13T20:21:22.704970106Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Feb 13 20:21:23.108615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642740137.mount: Deactivated successfully.
Feb 13 20:21:23.113701 containerd[1468]: time="2025-02-13T20:21:23.113636724Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:23.115012 containerd[1468]: time="2025-02-13T20:21:23.114943609Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Feb 13 20:21:23.115893 containerd[1468]: time="2025-02-13T20:21:23.115823601Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:23.118777 containerd[1468]: time="2025-02-13T20:21:23.118738273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:23.120491 containerd[1468]: time="2025-02-13T20:21:23.120097972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 415.084982ms"
Feb 13 20:21:23.120491 containerd[1468]: time="2025-02-13T20:21:23.120270637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Feb 13 20:21:23.121566 containerd[1468]: time="2025-02-13T20:21:23.121170951Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Feb 13 20:21:23.544074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975736597.mount: Deactivated successfully.
Feb 13 20:21:25.552134 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 20:21:25.632618 containerd[1468]: time="2025-02-13T20:21:25.632541531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:25.634219 containerd[1468]: time="2025-02-13T20:21:25.634144710Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556"
Feb 13 20:21:25.635688 containerd[1468]: time="2025-02-13T20:21:25.635609925Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:25.639595 containerd[1468]: time="2025-02-13T20:21:25.639521447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:21:25.641464 containerd[1468]: time="2025-02-13T20:21:25.641290608Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.520066866s"
Feb 13 20:21:25.641464 containerd[1468]: time="2025-02-13T20:21:25.641337935Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Feb 13 20:21:29.141641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:21:29.152380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:21:29.198451 systemd[1]: Reloading requested from client PID 2120 ('systemctl') (unit session-9.scope)...
Feb 13 20:21:29.198476 systemd[1]: Reloading...
Feb 13 20:21:29.382449 zram_generator::config[2160]: No configuration found.
Feb 13 20:21:29.559950 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:21:29.674210 systemd[1]: Reloading finished in 475 ms.
Feb 13 20:21:29.744712 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 20:21:29.744868 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 20:21:29.745371 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:21:29.753813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:21:30.401475 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:21:30.415606 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:21:30.473689 kubelet[2210]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:21:30.473689 kubelet[2210]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:21:30.473689 kubelet[2210]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:21:30.474287 kubelet[2210]: I0213 20:21:30.473812 2210 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:21:30.916479 kubelet[2210]: I0213 20:21:30.916418 2210 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 20:21:30.916479 kubelet[2210]: I0213 20:21:30.916456 2210 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:21:30.916849 kubelet[2210]: I0213 20:21:30.916810 2210 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 20:21:30.969988 kubelet[2210]: E0213 20:21:30.969891 2210 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.9:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:21:30.972174 kubelet[2210]: I0213 20:21:30.971982 2210 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:21:30.982107 kubelet[2210]: E0213 20:21:30.982056 2210 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:21:30.982107 kubelet[2210]: I0213 20:21:30.982094 2210 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:21:30.988042 kubelet[2210]: I0213 20:21:30.988010 2210 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:21:30.988194 kubelet[2210]: I0213 20:21:30.988173 2210 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 20:21:30.988464 kubelet[2210]: I0213 20:21:30.988421 2210 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:21:30.988712 kubelet[2210]: I0213 20:21:30.988467 2210 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:21:30.988935 kubelet[2210]: I0213 20:21:30.988728 2210 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:21:30.988935 kubelet[2210]: I0213 20:21:30.988747 2210 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 20:21:30.989063 kubelet[2210]: I0213 20:21:30.988940 2210 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:21:30.992494 kubelet[2210]: I0213 20:21:30.992195 2210 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 20:21:30.992494 kubelet[2210]: I0213 20:21:30.992232 2210 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:21:30.992494 kubelet[2210]: I0213 20:21:30.992280 2210 kubelet.go:314] "Adding apiserver pod source"
Feb 13 20:21:30.992494 kubelet[2210]: I0213 20:21:30.992303 2210 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:21:31.006612 kubelet[2210]: W0213 20:21:31.006292 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
Feb 13 20:21:31.006612 kubelet[2210]: E0213 20:21:31.006385 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:21:31.006612 kubelet[2210]: W0213 20:21:31.006492 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
Feb 13 20:21:31.006612 kubelet[2210]: E0213 20:21:31.006544 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:21:31.007960 kubelet[2210]: I0213 20:21:31.007801 2210 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:21:31.010992 kubelet[2210]: I0213 20:21:31.010949 2210 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:21:31.012155 kubelet[2210]: W0213 20:21:31.012114 2210 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:21:31.013459 kubelet[2210]: I0213 20:21:31.012976 2210 server.go:1269] "Started kubelet" Feb 13 20:21:31.015547 kubelet[2210]: I0213 20:21:31.014777 2210 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 20:21:31.023642 kubelet[2210]: I0213 20:21:31.022394 2210 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 20:21:31.025475 kubelet[2210]: I0213 20:21:31.025415 2210 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 20:21:31.025969 kubelet[2210]: I0213 20:21:31.025948 2210 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 20:21:31.026446 kubelet[2210]: I0213 20:21:31.026423 2210 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 20:21:31.031280 kubelet[2210]: I0213 20:21:31.031141 2210 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 20:21:31.031471 kubelet[2210]: E0213 20:21:31.031442 2210 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" not found" Feb 13 20:21:31.031604 kubelet[2210]: E0213 20:21:31.027580 2210 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.9:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.9:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal.1823de1b8d66adae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,UID:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:21:31.012943278 +0000 UTC m=+0.591330228,LastTimestamp:2025-02-13 20:21:31.012943278 +0000 UTC m=+0.591330228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,}" Feb 13 20:21:31.033661 kubelet[2210]: I0213 20:21:31.033629 2210 server.go:460] "Adding debug handlers to kubelet server" Feb 13 20:21:31.035734 kubelet[2210]: E0213 20:21:31.035691 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="200ms" Feb 13 20:21:31.036552 kubelet[2210]: I0213 20:21:31.036522 2210 factory.go:221] Registration of the systemd container factory successfully Feb 13 20:21:31.038312 kubelet[2210]: I0213 20:21:31.038282 2210 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 20:21:31.038702 kubelet[2210]: I0213 20:21:31.038603 2210 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 20:21:31.038802 kubelet[2210]: I0213 20:21:31.038707 2210 reconciler.go:26] "Reconciler: start to sync state" Feb 13 20:21:31.041947 kubelet[2210]: W0213 20:21:31.041706 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused Feb 13 20:21:31.041947 kubelet[2210]: E0213 20:21:31.041779 2210 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:21:31.043487 kubelet[2210]: E0213 20:21:31.043459 2210 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 20:21:31.044030 kubelet[2210]: I0213 20:21:31.043994 2210 factory.go:221] Registration of the containerd container factory successfully Feb 13 20:21:31.058011 kubelet[2210]: I0213 20:21:31.057801 2210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 20:21:31.059954 kubelet[2210]: I0213 20:21:31.059851 2210 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 20:21:31.059954 kubelet[2210]: I0213 20:21:31.059920 2210 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 20:21:31.060113 kubelet[2210]: I0213 20:21:31.059976 2210 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 20:21:31.060113 kubelet[2210]: E0213 20:21:31.060053 2210 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 20:21:31.070951 kubelet[2210]: W0213 20:21:31.070742 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused Feb 13 20:21:31.070951 kubelet[2210]: E0213 20:21:31.070814 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:21:31.087069 kubelet[2210]: I0213 20:21:31.087005 2210 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 20:21:31.087069 kubelet[2210]: I0213 20:21:31.087063 2210 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 20:21:31.087288 kubelet[2210]: I0213 20:21:31.087089 2210 state_mem.go:36] "Initialized new in-memory state store" Feb 13 20:21:31.089417 kubelet[2210]: I0213 20:21:31.089392 2210 policy_none.go:49] "None policy: Start" Feb 13 20:21:31.090462 kubelet[2210]: I0213 20:21:31.090365 2210 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 20:21:31.090462 kubelet[2210]: I0213 20:21:31.090397 2210 state_mem.go:35] "Initializing new in-memory state store" Feb 13 20:21:31.102111 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 20:21:31.116962 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 20:21:31.122572 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 20:21:31.132245 kubelet[2210]: E0213 20:21:31.132184 2210 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" not found" Feb 13 20:21:31.140362 kubelet[2210]: I0213 20:21:31.140318 2210 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 20:21:31.140834 kubelet[2210]: I0213 20:21:31.140808 2210 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 20:21:31.140991 kubelet[2210]: I0213 20:21:31.140833 2210 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 20:21:31.142633 kubelet[2210]: I0213 20:21:31.141783 2210 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 20:21:31.143601 kubelet[2210]: E0213 20:21:31.143570 2210 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" not found" Feb 13 20:21:31.194127 systemd[1]: Created slice kubepods-burstable-pod4acf93a648262cc3adb1da852583fc85.slice - libcontainer container kubepods-burstable-pod4acf93a648262cc3adb1da852583fc85.slice. Feb 13 20:21:31.209860 systemd[1]: Created slice kubepods-burstable-podd9547363e4e9ba94059a00eb2c8ae388.slice - libcontainer container kubepods-burstable-podd9547363e4e9ba94059a00eb2c8ae388.slice. Feb 13 20:21:31.226852 systemd[1]: Created slice kubepods-burstable-podef31c09c1afd4c800f597a3b2023536b.slice - libcontainer container kubepods-burstable-podef31c09c1afd4c800f597a3b2023536b.slice. 
Feb 13 20:21:31.237218 kubelet[2210]: E0213 20:21:31.237158 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="400ms" Feb 13 20:21:31.245855 kubelet[2210]: I0213 20:21:31.245799 2210 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.246245 kubelet[2210]: E0213 20:21:31.246195 2210 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.339697 kubelet[2210]: I0213 20:21:31.339618 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4acf93a648262cc3adb1da852583fc85-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"4acf93a648262cc3adb1da852583fc85\") " pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.339697 kubelet[2210]: I0213 20:21:31.339693 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.340018 kubelet[2210]: I0213 20:21:31.339727 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.340018 kubelet[2210]: I0213 20:21:31.339755 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.340018 kubelet[2210]: I0213 20:21:31.339788 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef31c09c1afd4c800f597a3b2023536b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"ef31c09c1afd4c800f597a3b2023536b\") " pod="kube-system/kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.340018 kubelet[2210]: I0213 20:21:31.339829 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4acf93a648262cc3adb1da852583fc85-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"4acf93a648262cc3adb1da852583fc85\") " pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.340216 kubelet[2210]: I0213 20:21:31.339863 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4acf93a648262cc3adb1da852583fc85-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"4acf93a648262cc3adb1da852583fc85\") " pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.340216 kubelet[2210]: I0213 20:21:31.339889 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.340216 kubelet[2210]: I0213 20:21:31.340004 2210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.450893 kubelet[2210]: I0213 20:21:31.450753 2210 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.451232 kubelet[2210]: E0213 20:21:31.451185 2210 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.504478 containerd[1468]: time="2025-02-13T20:21:31.504411477Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,Uid:4acf93a648262cc3adb1da852583fc85,Namespace:kube-system,Attempt:0,}" Feb 13 20:21:31.524804 containerd[1468]: time="2025-02-13T20:21:31.524728052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,Uid:d9547363e4e9ba94059a00eb2c8ae388,Namespace:kube-system,Attempt:0,}" Feb 13 20:21:31.531960 containerd[1468]: time="2025-02-13T20:21:31.531572097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,Uid:ef31c09c1afd4c800f597a3b2023536b,Namespace:kube-system,Attempt:0,}" Feb 13 20:21:31.638439 kubelet[2210]: E0213 20:21:31.638365 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="800ms" Feb 13 20:21:31.861859 kubelet[2210]: I0213 20:21:31.861808 2210 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.862726 kubelet[2210]: E0213 20:21:31.862688 2210 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.9:6443/api/v1/nodes\": dial tcp 10.128.0.9:6443: connect: connection refused" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" Feb 13 20:21:31.874591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2497833088.mount: Deactivated successfully. 
Feb 13 20:21:31.882058 containerd[1468]: time="2025-02-13T20:21:31.881973354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:21:31.883295 containerd[1468]: time="2025-02-13T20:21:31.883239750Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:21:31.884574 containerd[1468]: time="2025-02-13T20:21:31.884515964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:21:31.885137 containerd[1468]: time="2025-02-13T20:21:31.885077125Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Feb 13 20:21:31.886483 containerd[1468]: time="2025-02-13T20:21:31.886423760Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:21:31.887995 containerd[1468]: time="2025-02-13T20:21:31.887903959Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:21:31.888569 containerd[1468]: time="2025-02-13T20:21:31.888493417Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:21:31.891228 containerd[1468]: time="2025-02-13T20:21:31.891112975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:21:31.894676 
containerd[1468]: time="2025-02-13T20:21:31.894037344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 389.524399ms" Feb 13 20:21:31.896645 containerd[1468]: time="2025-02-13T20:21:31.896589229Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 371.75742ms" Feb 13 20:21:31.898853 containerd[1468]: time="2025-02-13T20:21:31.898798607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 367.139308ms" Feb 13 20:21:32.120074 containerd[1468]: time="2025-02-13T20:21:32.119655775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:32.120502 containerd[1468]: time="2025-02-13T20:21:32.120274651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:32.120502 containerd[1468]: time="2025-02-13T20:21:32.120435072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:32.124074 containerd[1468]: time="2025-02-13T20:21:32.123525674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:32.125684 containerd[1468]: time="2025-02-13T20:21:32.125414541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:32.125684 containerd[1468]: time="2025-02-13T20:21:32.125482063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:32.125684 containerd[1468]: time="2025-02-13T20:21:32.125507198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:32.125684 containerd[1468]: time="2025-02-13T20:21:32.125625120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:32.142068 containerd[1468]: time="2025-02-13T20:21:32.141011021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:32.142068 containerd[1468]: time="2025-02-13T20:21:32.141183056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:32.142068 containerd[1468]: time="2025-02-13T20:21:32.141222302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:32.142068 containerd[1468]: time="2025-02-13T20:21:32.141968663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:32.171699 systemd[1]: Started cri-containerd-5ae83c765c8527eeb25e93739783ec0bb851f902eac84e86164d0360471c6e72.scope - libcontainer container 5ae83c765c8527eeb25e93739783ec0bb851f902eac84e86164d0360471c6e72. 
Feb 13 20:21:32.174525 systemd[1]: Started cri-containerd-e412398d9098c9696f254a7ad76b50d306e76872266ea51ba39228458659236c.scope - libcontainer container e412398d9098c9696f254a7ad76b50d306e76872266ea51ba39228458659236c. Feb 13 20:21:32.204175 systemd[1]: Started cri-containerd-cd64e2bace67138ad6d97d5728a2642a3d9537933d9b11870d26c0e3f131d94c.scope - libcontainer container cd64e2bace67138ad6d97d5728a2642a3d9537933d9b11870d26c0e3f131d94c. Feb 13 20:21:32.208334 kubelet[2210]: W0213 20:21:32.207410 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused Feb 13 20:21:32.208334 kubelet[2210]: E0213 20:21:32.207474 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.9:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:21:32.268329 kubelet[2210]: W0213 20:21:32.268086 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused Feb 13 20:21:32.268329 kubelet[2210]: E0213 20:21:32.268194 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.9:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:21:32.276084 kubelet[2210]: W0213 
20:21:32.275562 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused Feb 13 20:21:32.276084 kubelet[2210]: E0213 20:21:32.275889 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.9:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError" Feb 13 20:21:32.283944 containerd[1468]: time="2025-02-13T20:21:32.283654710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,Uid:ef31c09c1afd4c800f597a3b2023536b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e412398d9098c9696f254a7ad76b50d306e76872266ea51ba39228458659236c\"" Feb 13 20:21:32.308976 kubelet[2210]: E0213 20:21:32.306790 2210 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-21291" Feb 13 20:21:32.311254 containerd[1468]: time="2025-02-13T20:21:32.311204569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,Uid:4acf93a648262cc3adb1da852583fc85,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae83c765c8527eeb25e93739783ec0bb851f902eac84e86164d0360471c6e72\"" Feb 13 20:21:32.312085 containerd[1468]: time="2025-02-13T20:21:32.312022308Z" level=info msg="CreateContainer within sandbox \"e412398d9098c9696f254a7ad76b50d306e76872266ea51ba39228458659236c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 20:21:32.319175 
kubelet[2210]: E0213 20:21:32.319141 2210 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-21291" Feb 13 20:21:32.324399 containerd[1468]: time="2025-02-13T20:21:32.324360936Z" level=info msg="CreateContainer within sandbox \"5ae83c765c8527eeb25e93739783ec0bb851f902eac84e86164d0360471c6e72\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 20:21:32.328333 containerd[1468]: time="2025-02-13T20:21:32.328287655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,Uid:d9547363e4e9ba94059a00eb2c8ae388,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd64e2bace67138ad6d97d5728a2642a3d9537933d9b11870d26c0e3f131d94c\"" Feb 13 20:21:32.330752 kubelet[2210]: E0213 20:21:32.330376 2210 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flat" Feb 13 20:21:32.332153 containerd[1468]: time="2025-02-13T20:21:32.332106550Z" level=info msg="CreateContainer within sandbox \"cd64e2bace67138ad6d97d5728a2642a3d9537933d9b11870d26c0e3f131d94c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 20:21:32.350110 containerd[1468]: time="2025-02-13T20:21:32.350071924Z" level=info msg="CreateContainer within sandbox \"e412398d9098c9696f254a7ad76b50d306e76872266ea51ba39228458659236c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6d8cfc49b16296ef878d3135b2584db633073e92f9af622349a0465be4e5944c\"" Feb 13 20:21:32.351195 containerd[1468]: time="2025-02-13T20:21:32.351160228Z" level=info msg="CreateContainer within 
sandbox \"5ae83c765c8527eeb25e93739783ec0bb851f902eac84e86164d0360471c6e72\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b592242a81b83f5848535da6219b95fb2b93c8e07f07b43b99276f0ada44aa5\"" Feb 13 20:21:32.351511 containerd[1468]: time="2025-02-13T20:21:32.351317713Z" level=info msg="StartContainer for \"6d8cfc49b16296ef878d3135b2584db633073e92f9af622349a0465be4e5944c\"" Feb 13 20:21:32.353254 containerd[1468]: time="2025-02-13T20:21:32.352067398Z" level=info msg="StartContainer for \"5b592242a81b83f5848535da6219b95fb2b93c8e07f07b43b99276f0ada44aa5\"" Feb 13 20:21:32.360456 containerd[1468]: time="2025-02-13T20:21:32.360388919Z" level=info msg="CreateContainer within sandbox \"cd64e2bace67138ad6d97d5728a2642a3d9537933d9b11870d26c0e3f131d94c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"41e8cc9b21d0ab9b6b8b87cdd7109644b84b6457d8ad083865a1fe6891ab5dd4\"" Feb 13 20:21:32.360965 containerd[1468]: time="2025-02-13T20:21:32.360891243Z" level=info msg="StartContainer for \"41e8cc9b21d0ab9b6b8b87cdd7109644b84b6457d8ad083865a1fe6891ab5dd4\"" Feb 13 20:21:32.408687 systemd[1]: Started cri-containerd-5b592242a81b83f5848535da6219b95fb2b93c8e07f07b43b99276f0ada44aa5.scope - libcontainer container 5b592242a81b83f5848535da6219b95fb2b93c8e07f07b43b99276f0ada44aa5. Feb 13 20:21:32.418185 systemd[1]: Started cri-containerd-6d8cfc49b16296ef878d3135b2584db633073e92f9af622349a0465be4e5944c.scope - libcontainer container 6d8cfc49b16296ef878d3135b2584db633073e92f9af622349a0465be4e5944c. 
Feb 13 20:21:32.439799 kubelet[2210]: E0213 20:21:32.439323 2210 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.9:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.9:6443: connect: connection refused" interval="1.6s"
Feb 13 20:21:32.446208 systemd[1]: Started cri-containerd-41e8cc9b21d0ab9b6b8b87cdd7109644b84b6457d8ad083865a1fe6891ab5dd4.scope - libcontainer container 41e8cc9b21d0ab9b6b8b87cdd7109644b84b6457d8ad083865a1fe6891ab5dd4.
Feb 13 20:21:32.520614 kubelet[2210]: W0213 20:21:32.520445 2210 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.9:6443: connect: connection refused
Feb 13 20:21:32.520614 kubelet[2210]: E0213 20:21:32.520556 2210 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.9:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.9:6443: connect: connection refused" logger="UnhandledError"
Feb 13 20:21:32.532497 containerd[1468]: time="2025-02-13T20:21:32.532450952Z" level=info msg="StartContainer for \"5b592242a81b83f5848535da6219b95fb2b93c8e07f07b43b99276f0ada44aa5\" returns successfully"
Feb 13 20:21:32.559300 containerd[1468]: time="2025-02-13T20:21:32.559248245Z" level=info msg="StartContainer for \"41e8cc9b21d0ab9b6b8b87cdd7109644b84b6457d8ad083865a1fe6891ab5dd4\" returns successfully"
Feb 13 20:21:32.602268 containerd[1468]: time="2025-02-13T20:21:32.602207705Z" level=info msg="StartContainer for \"6d8cfc49b16296ef878d3135b2584db633073e92f9af622349a0465be4e5944c\" returns successfully"
Feb 13 20:21:32.667880 kubelet[2210]: I0213 20:21:32.667734 2210 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:35.586992 kubelet[2210]: E0213 20:21:35.586945 2210 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" not found" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:35.618223 kubelet[2210]: E0213 20:21:35.618062 2210 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal.1823de1b8d66adae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,UID:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:21:31.012943278 +0000 UTC m=+0.591330228,LastTimestamp:2025-02-13 20:21:31.012943278 +0000 UTC m=+0.591330228,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,}"
Feb 13 20:21:35.625728 kubelet[2210]: I0213 20:21:35.625509 2210 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:35.625728 kubelet[2210]: E0213 20:21:35.625562 2210 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\": node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" not found"
Feb 13 20:21:35.680749 kubelet[2210]: E0213 20:21:35.680498 2210 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal.1823de1b8f381894 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,UID:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:21:31.043444884 +0000 UTC m=+0.621831841,LastTimestamp:2025-02-13 20:21:31.043444884 +0000 UTC m=+0.621831841,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,}"
Feb 13 20:21:35.742274 kubelet[2210]: E0213 20:21:35.741528 2210 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal.1823de1b91c1dad7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,UID:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:21:31.086027479 +0000 UTC m=+0.664414434,LastTimestamp:2025-02-13 20:21:31.086027479 +0000 UTC m=+0.664414434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,}"
Feb 13 20:21:35.802944 kubelet[2210]: E0213 20:21:35.802537 2210 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal.1823de1b91c219f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,UID:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,},FirstTimestamp:2025-02-13 20:21:31.086043641 +0000 UTC m=+0.664430596,LastTimestamp:2025-02-13 20:21:31.086043641 +0000 UTC m=+0.664430596,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal,}"
Feb 13 20:21:36.007988 kubelet[2210]: I0213 20:21:36.006940 2210 apiserver.go:52] "Watching apiserver"
Feb 13 20:21:36.032605 kubelet[2210]: E0213 20:21:36.032144 2210 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:36.041520 kubelet[2210]: I0213 20:21:36.038878 2210 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 20:21:37.894796 systemd[1]: Reloading requested from client PID 2487 ('systemctl') (unit session-9.scope)...
Feb 13 20:21:37.894821 systemd[1]: Reloading...
Feb 13 20:21:38.016954 zram_generator::config[2523]: No configuration found.
Feb 13 20:21:38.194462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:21:38.321639 systemd[1]: Reloading finished in 425 ms.
Feb 13 20:21:38.383261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:21:38.390416 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:21:38.390753 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:21:38.390830 systemd[1]: kubelet.service: Consumed 1.125s CPU time, 119.3M memory peak, 0B memory swap peak.
Feb 13 20:21:38.398410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:21:38.611155 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:21:38.622654 (kubelet)[2575]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:21:38.691808 kubelet[2575]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:21:38.691808 kubelet[2575]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:21:38.691808 kubelet[2575]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:21:38.692412 kubelet[2575]: I0213 20:21:38.692059 2575 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:21:38.705697 kubelet[2575]: I0213 20:21:38.705638 2575 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 20:21:38.705697 kubelet[2575]: I0213 20:21:38.705671 2575 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:21:38.706054 kubelet[2575]: I0213 20:21:38.706017 2575 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 20:21:38.707689 kubelet[2575]: I0213 20:21:38.707661 2575 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 13 20:21:38.710694 kubelet[2575]: I0213 20:21:38.710486 2575 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:21:38.714760 kubelet[2575]: E0213 20:21:38.714724 2575 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:21:38.714880 kubelet[2575]: I0213 20:21:38.714761 2575 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:21:38.718707 kubelet[2575]: I0213 20:21:38.718667 2575 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:21:38.718846 kubelet[2575]: I0213 20:21:38.718829 2575 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 20:21:38.719103 kubelet[2575]: I0213 20:21:38.719060 2575 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:21:38.719361 kubelet[2575]: I0213 20:21:38.719103 2575 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:21:38.719361 kubelet[2575]: I0213 20:21:38.719360 2575 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:21:38.719591 kubelet[2575]: I0213 20:21:38.719385 2575 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 20:21:38.719591 kubelet[2575]: I0213 20:21:38.719441 2575 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:21:38.719704 kubelet[2575]: I0213 20:21:38.719612 2575 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 20:21:38.719704 kubelet[2575]: I0213 20:21:38.719672 2575 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:21:38.719812 kubelet[2575]: I0213 20:21:38.719717 2575 kubelet.go:314] "Adding apiserver pod source"
Feb 13 20:21:38.719812 kubelet[2575]: I0213 20:21:38.719738 2575 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:21:38.727666 kubelet[2575]: I0213 20:21:38.727191 2575 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:21:38.730452 kubelet[2575]: I0213 20:21:38.730422 2575 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:21:38.735653 kubelet[2575]: I0213 20:21:38.735612 2575 server.go:1269] "Started kubelet"
Feb 13 20:21:38.737878 kubelet[2575]: I0213 20:21:38.737835 2575 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:21:38.741688 kubelet[2575]: I0213 20:21:38.741651 2575 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:21:38.745512 kubelet[2575]: I0213 20:21:38.745475 2575 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 20:21:38.747580 kubelet[2575]: I0213 20:21:38.747166 2575 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 20:21:38.749879 kubelet[2575]: I0213 20:21:38.749860 2575 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 20:21:38.750345 kubelet[2575]: E0213 20:21:38.750322 2575 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" not found"
Feb 13 20:21:38.762211 kubelet[2575]: I0213 20:21:38.762113 2575 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:21:38.762456 kubelet[2575]: I0213 20:21:38.762423 2575 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:21:38.763163 kubelet[2575]: I0213 20:21:38.763139 2575 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 20:21:38.763549 kubelet[2575]: I0213 20:21:38.763527 2575 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:21:38.772483 kubelet[2575]: I0213 20:21:38.772322 2575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:21:38.774256 kubelet[2575]: I0213 20:21:38.774232 2575 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:21:38.774790 kubelet[2575]: I0213 20:21:38.774401 2575 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 20:21:38.774790 kubelet[2575]: I0213 20:21:38.774432 2575 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 20:21:38.774790 kubelet[2575]: E0213 20:21:38.774493 2575 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 20:21:38.780528 kubelet[2575]: I0213 20:21:38.780270 2575 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:21:38.780528 kubelet[2575]: I0213 20:21:38.780399 2575 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:21:38.787946 kubelet[2575]: I0213 20:21:38.786963 2575 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:21:38.858236 kubelet[2575]: I0213 20:21:38.858207 2575 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 20:21:38.858411 kubelet[2575]: I0213 20:21:38.858396 2575 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 20:21:38.858479 kubelet[2575]: I0213 20:21:38.858471 2575 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:21:38.858700 kubelet[2575]: I0213 20:21:38.858685 2575 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 13 20:21:38.858800 kubelet[2575]: I0213 20:21:38.858777 2575 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 13 20:21:38.858854 kubelet[2575]: I0213 20:21:38.858847 2575 policy_none.go:49] "None policy: Start"
Feb 13 20:21:38.859830 kubelet[2575]: I0213 20:21:38.859804 2575 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 20:21:38.860003 kubelet[2575]: I0213 20:21:38.859904 2575 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:21:38.860337 kubelet[2575]: I0213 20:21:38.860296 2575 state_mem.go:75] "Updated machine memory state"
Feb 13 20:21:38.868714 kubelet[2575]: I0213 20:21:38.868625 2575 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:21:38.870940 kubelet[2575]: I0213 20:21:38.869337 2575 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:21:38.870940 kubelet[2575]: I0213 20:21:38.869366 2575 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:21:38.870940 kubelet[2575]: I0213 20:21:38.869899 2575 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:21:38.892057 kubelet[2575]: W0213 20:21:38.892017 2575 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 20:21:38.895339 kubelet[2575]: W0213 20:21:38.895302 2575 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 20:21:38.901791 kubelet[2575]: W0213 20:21:38.901721 2575 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 20:21:38.928283 sudo[2607]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Feb 13 20:21:38.929150 sudo[2607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Feb 13 20:21:38.991992 kubelet[2575]: I0213 20:21:38.991942 2575 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.008785 kubelet[2575]: I0213 20:21:39.008736 2575 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.008977 kubelet[2575]: I0213 20:21:39.008855 2575 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.065561 kubelet[2575]: I0213 20:21:39.065437 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.066341 kubelet[2575]: I0213 20:21:39.065965 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.066341 kubelet[2575]: I0213 20:21:39.066111 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef31c09c1afd4c800f597a3b2023536b-kubeconfig\") pod \"kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"ef31c09c1afd4c800f597a3b2023536b\") " pod="kube-system/kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.066341 kubelet[2575]: I0213 20:21:39.066261 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4acf93a648262cc3adb1da852583fc85-k8s-certs\") pod \"kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"4acf93a648262cc3adb1da852583fc85\") " pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.067240 kubelet[2575]: I0213 20:21:39.066643 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4acf93a648262cc3adb1da852583fc85-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"4acf93a648262cc3adb1da852583fc85\") " pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.067240 kubelet[2575]: I0213 20:21:39.067008 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.067240 kubelet[2575]: I0213 20:21:39.067184 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.067813 kubelet[2575]: I0213 20:21:39.067600 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4acf93a648262cc3adb1da852583fc85-ca-certs\") pod \"kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"4acf93a648262cc3adb1da852583fc85\") " pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.067813 kubelet[2575]: I0213 20:21:39.067722 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9547363e4e9ba94059a00eb2c8ae388-ca-certs\") pod \"kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" (UID: \"d9547363e4e9ba94059a00eb2c8ae388\") " pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.716300 sudo[2607]: pam_unix(sudo:session): session closed for user root
Feb 13 20:21:39.720488 kubelet[2575]: I0213 20:21:39.720085 2575 apiserver.go:52] "Watching apiserver"
Feb 13 20:21:39.763545 kubelet[2575]: I0213 20:21:39.763448 2575 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 20:21:39.840223 kubelet[2575]: W0213 20:21:39.840184 2575 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Feb 13 20:21:39.840429 kubelet[2575]: E0213 20:21:39.840290 2575 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal"
Feb 13 20:21:39.882422 kubelet[2575]: I0213 20:21:39.882308 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" podStartSLOduration=1.882267623 podStartE2EDuration="1.882267623s" podCreationTimestamp="2025-02-13 20:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:21:39.880841801 +0000 UTC m=+1.250963645" watchObservedRunningTime="2025-02-13 20:21:39.882267623 +0000 UTC m=+1.252389461"
Feb 13 20:21:39.882642 kubelet[2575]: I0213 20:21:39.882546 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" podStartSLOduration=1.882536784 podStartE2EDuration="1.882536784s" podCreationTimestamp="2025-02-13 20:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:21:39.867053696 +0000 UTC m=+1.237175540" watchObservedRunningTime="2025-02-13 20:21:39.882536784 +0000 UTC m=+1.252658624"
Feb 13 20:21:39.896697 kubelet[2575]: I0213 20:21:39.896631 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" podStartSLOduration=1.896611374 podStartE2EDuration="1.896611374s" podCreationTimestamp="2025-02-13 20:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:21:39.895958753 +0000 UTC m=+1.266080601" watchObservedRunningTime="2025-02-13 20:21:39.896611374 +0000 UTC m=+1.266733222"
Feb 13 20:21:40.126909 update_engine[1454]: I20250213 20:21:40.125972 1454 update_attempter.cc:509] Updating boot flags...
Feb 13 20:21:40.262993 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2626)
Feb 13 20:21:40.447034 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2629)
Feb 13 20:21:40.637946 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2629)
Feb 13 20:21:42.202415 sudo[1745]: pam_unix(sudo:session): session closed for user root
Feb 13 20:21:42.246356 sshd[1742]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:42.252849 systemd[1]: sshd@8-10.128.0.9:22-139.178.89.65:47752.service: Deactivated successfully.
Feb 13 20:21:42.257777 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:21:42.258228 systemd[1]: session-9.scope: Consumed 6.826s CPU time, 154.1M memory peak, 0B memory swap peak.
Feb 13 20:21:42.260624 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:21:42.262408 systemd-logind[1452]: Removed session 9.
Feb 13 20:21:43.684629 kubelet[2575]: I0213 20:21:43.684562 2575 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 13 20:21:43.685353 containerd[1468]: time="2025-02-13T20:21:43.685178678Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 20:21:43.685810 kubelet[2575]: I0213 20:21:43.685499 2575 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 13 20:21:44.636800 systemd[1]: Created slice kubepods-besteffort-pod0a244b82_1f71_45ef_bb1a_7cec1779c565.slice - libcontainer container kubepods-besteffort-pod0a244b82_1f71_45ef_bb1a_7cec1779c565.slice.
Feb 13 20:21:44.663445 kubelet[2575]: W0213 20:21:44.663369 2575 reflector.go:561] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal' and this object
Feb 13 20:21:44.663445 kubelet[2575]: E0213 20:21:44.663432 2575 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Feb 13 20:21:44.663748 kubelet[2575]: W0213 20:21:44.663692 2575 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal' and this object
Feb 13 20:21:44.663748 kubelet[2575]: E0213 20:21:44.663728 2575 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Feb 13 20:21:44.663962 kubelet[2575]: W0213 20:21:44.663911 2575 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal' and this object
Feb 13 20:21:44.664044 kubelet[2575]: E0213 20:21:44.663961 2575 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Feb 13 20:21:44.673730 systemd[1]: Created slice kubepods-burstable-poda7677b5b_234a_4dbf_be39_3741372d9305.slice - libcontainer container kubepods-burstable-poda7677b5b_234a_4dbf_be39_3741372d9305.slice.
Feb 13 20:21:44.723460 kubelet[2575]: I0213 20:21:44.723391 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a244b82-1f71-45ef-bb1a-7cec1779c565-lib-modules\") pod \"kube-proxy-zhp8k\" (UID: \"0a244b82-1f71-45ef-bb1a-7cec1779c565\") " pod="kube-system/kube-proxy-zhp8k"
Feb 13 20:21:44.723460 kubelet[2575]: I0213 20:21:44.723458 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7677b5b-234a-4dbf-be39-3741372d9305-clustermesh-secrets\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.724888 kubelet[2575]: I0213 20:21:44.723489 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7677b5b-234a-4dbf-be39-3741372d9305-hubble-tls\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.724888 kubelet[2575]: I0213 20:21:44.723518 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a244b82-1f71-45ef-bb1a-7cec1779c565-kube-proxy\") pod \"kube-proxy-zhp8k\" (UID: \"0a244b82-1f71-45ef-bb1a-7cec1779c565\") " pod="kube-system/kube-proxy-zhp8k"
Feb 13 20:21:44.724888 kubelet[2575]: I0213 20:21:44.723553 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-run\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.724888 kubelet[2575]: I0213 20:21:44.723582 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cni-path\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.724888 kubelet[2575]: I0213 20:21:44.723609 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckgd\" (UniqueName: \"kubernetes.io/projected/a7677b5b-234a-4dbf-be39-3741372d9305-kube-api-access-gckgd\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.724888 kubelet[2575]: I0213 20:21:44.723637 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-bpf-maps\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.726133 kubelet[2575]: I0213 20:21:44.723663 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-etc-cni-netd\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.726133 kubelet[2575]: I0213 20:21:44.723687 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a244b82-1f71-45ef-bb1a-7cec1779c565-xtables-lock\") pod \"kube-proxy-zhp8k\" (UID: \"0a244b82-1f71-45ef-bb1a-7cec1779c565\") " pod="kube-system/kube-proxy-zhp8k"
Feb 13 20:21:44.726133 kubelet[2575]: I0213 20:21:44.723722 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-cgroup\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.726133 kubelet[2575]: I0213 20:21:44.723746 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-xtables-lock\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.726133 kubelet[2575]: I0213 20:21:44.723775 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-host-proc-sys-kernel\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.726802 kubelet[2575]: I0213 20:21:44.723804 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rkr\" (UniqueName: \"kubernetes.io/projected/0a244b82-1f71-45ef-bb1a-7cec1779c565-kube-api-access-56rkr\") pod \"kube-proxy-zhp8k\" (UID: \"0a244b82-1f71-45ef-bb1a-7cec1779c565\") " pod="kube-system/kube-proxy-zhp8k"
Feb 13 20:21:44.726802 kubelet[2575]: I0213 20:21:44.723830 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-lib-modules\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.726802 kubelet[2575]: I0213 20:21:44.723857 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-config-path\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.726802 kubelet[2575]: I0213 20:21:44.723891 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-hostproc\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.726802 kubelet[2575]: I0213 20:21:44.723952 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-host-proc-sys-net\") pod \"cilium-2qqqn\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " pod="kube-system/cilium-2qqqn"
Feb 13 20:21:44.734571 systemd[1]: Created slice kubepods-besteffort-pod8f96ea5f_a26a_4cdc_b943_317287cd6869.slice - libcontainer container kubepods-besteffort-pod8f96ea5f_a26a_4cdc_b943_317287cd6869.slice.
Feb 13 20:21:44.827592 kubelet[2575]: I0213 20:21:44.825329 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75qt\" (UniqueName: \"kubernetes.io/projected/8f96ea5f-a26a-4cdc-b943-317287cd6869-kube-api-access-q75qt\") pod \"cilium-operator-5d85765b45-8v545\" (UID: \"8f96ea5f-a26a-4cdc-b943-317287cd6869\") " pod="kube-system/cilium-operator-5d85765b45-8v545" Feb 13 20:21:44.827592 kubelet[2575]: I0213 20:21:44.825490 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f96ea5f-a26a-4cdc-b943-317287cd6869-cilium-config-path\") pod \"cilium-operator-5d85765b45-8v545\" (UID: \"8f96ea5f-a26a-4cdc-b943-317287cd6869\") " pod="kube-system/cilium-operator-5d85765b45-8v545" Feb 13 20:21:44.952524 containerd[1468]: time="2025-02-13T20:21:44.952379234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhp8k,Uid:0a244b82-1f71-45ef-bb1a-7cec1779c565,Namespace:kube-system,Attempt:0,}" Feb 13 20:21:44.990834 containerd[1468]: time="2025-02-13T20:21:44.990030015Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:44.990834 containerd[1468]: time="2025-02-13T20:21:44.990110083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:44.990834 containerd[1468]: time="2025-02-13T20:21:44.990149059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:44.990834 containerd[1468]: time="2025-02-13T20:21:44.990366084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:45.024131 systemd[1]: Started cri-containerd-10576f8fe0c4f7cb735643613320550e1aff8d3f8e2acc6ecb5aaae86c29e991.scope - libcontainer container 10576f8fe0c4f7cb735643613320550e1aff8d3f8e2acc6ecb5aaae86c29e991. Feb 13 20:21:45.057728 containerd[1468]: time="2025-02-13T20:21:45.057673454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhp8k,Uid:0a244b82-1f71-45ef-bb1a-7cec1779c565,Namespace:kube-system,Attempt:0,} returns sandbox id \"10576f8fe0c4f7cb735643613320550e1aff8d3f8e2acc6ecb5aaae86c29e991\"" Feb 13 20:21:45.062585 containerd[1468]: time="2025-02-13T20:21:45.062534067Z" level=info msg="CreateContainer within sandbox \"10576f8fe0c4f7cb735643613320550e1aff8d3f8e2acc6ecb5aaae86c29e991\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:21:45.081526 containerd[1468]: time="2025-02-13T20:21:45.081399391Z" level=info msg="CreateContainer within sandbox \"10576f8fe0c4f7cb735643613320550e1aff8d3f8e2acc6ecb5aaae86c29e991\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0657a79f18bb48a3c1a49ba99d4ba1963c46fbbfd7c316e8287626d2b0f0efef\"" Feb 13 20:21:45.083601 containerd[1468]: time="2025-02-13T20:21:45.082226191Z" level=info msg="StartContainer for \"0657a79f18bb48a3c1a49ba99d4ba1963c46fbbfd7c316e8287626d2b0f0efef\"" Feb 13 20:21:45.120239 systemd[1]: Started cri-containerd-0657a79f18bb48a3c1a49ba99d4ba1963c46fbbfd7c316e8287626d2b0f0efef.scope - libcontainer container 0657a79f18bb48a3c1a49ba99d4ba1963c46fbbfd7c316e8287626d2b0f0efef. 
Feb 13 20:21:45.162131 containerd[1468]: time="2025-02-13T20:21:45.162050370Z" level=info msg="StartContainer for \"0657a79f18bb48a3c1a49ba99d4ba1963c46fbbfd7c316e8287626d2b0f0efef\" returns successfully" Feb 13 20:21:45.826390 kubelet[2575]: E0213 20:21:45.826311 2575 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 20:21:45.827817 kubelet[2575]: E0213 20:21:45.826466 2575 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-config-path podName:a7677b5b-234a-4dbf-be39-3741372d9305 nodeName:}" failed. No retries permitted until 2025-02-13 20:21:46.326435459 +0000 UTC m=+7.696557300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-config-path") pod "cilium-2qqqn" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305") : failed to sync configmap cache: timed out waiting for the condition Feb 13 20:21:45.942446 containerd[1468]: time="2025-02-13T20:21:45.942392197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8v545,Uid:8f96ea5f-a26a-4cdc-b943-317287cd6869,Namespace:kube-system,Attempt:0,}" Feb 13 20:21:45.984282 containerd[1468]: time="2025-02-13T20:21:45.982060708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:45.984282 containerd[1468]: time="2025-02-13T20:21:45.982156672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:45.984282 containerd[1468]: time="2025-02-13T20:21:45.982203452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:45.984282 containerd[1468]: time="2025-02-13T20:21:45.982362393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:46.026144 systemd[1]: Started cri-containerd-b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391.scope - libcontainer container b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391. Feb 13 20:21:46.084995 containerd[1468]: time="2025-02-13T20:21:46.084641849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8v545,Uid:8f96ea5f-a26a-4cdc-b943-317287cd6869,Namespace:kube-system,Attempt:0,} returns sandbox id \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\"" Feb 13 20:21:46.088353 containerd[1468]: time="2025-02-13T20:21:46.088317710Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 20:21:46.480062 containerd[1468]: time="2025-02-13T20:21:46.479855444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qqqn,Uid:a7677b5b-234a-4dbf-be39-3741372d9305,Namespace:kube-system,Attempt:0,}" Feb 13 20:21:46.521139 containerd[1468]: time="2025-02-13T20:21:46.520708531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:21:46.521139 containerd[1468]: time="2025-02-13T20:21:46.520808318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:21:46.521139 containerd[1468]: time="2025-02-13T20:21:46.520858291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:46.522006 containerd[1468]: time="2025-02-13T20:21:46.521026711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:21:46.548131 systemd[1]: Started cri-containerd-20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b.scope - libcontainer container 20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b. Feb 13 20:21:46.583847 containerd[1468]: time="2025-02-13T20:21:46.583794482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2qqqn,Uid:a7677b5b-234a-4dbf-be39-3741372d9305,Namespace:kube-system,Attempt:0,} returns sandbox id \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\"" Feb 13 20:21:47.095825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3755520230.mount: Deactivated successfully. Feb 13 20:21:47.600508 kubelet[2575]: I0213 20:21:47.600235 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhp8k" podStartSLOduration=3.600208719 podStartE2EDuration="3.600208719s" podCreationTimestamp="2025-02-13 20:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:21:45.869207324 +0000 UTC m=+7.239329172" watchObservedRunningTime="2025-02-13 20:21:47.600208719 +0000 UTC m=+8.970330567" Feb 13 20:21:48.040884 containerd[1468]: time="2025-02-13T20:21:48.039697220Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:48.042644 containerd[1468]: time="2025-02-13T20:21:48.042561575Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, 
bytes read=18904197" Feb 13 20:21:48.044110 containerd[1468]: time="2025-02-13T20:21:48.044038072Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:48.046679 containerd[1468]: time="2025-02-13T20:21:48.046092344Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.957525509s" Feb 13 20:21:48.046679 containerd[1468]: time="2025-02-13T20:21:48.046144507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 20:21:48.048196 containerd[1468]: time="2025-02-13T20:21:48.048161067Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 20:21:48.049897 containerd[1468]: time="2025-02-13T20:21:48.049647889Z" level=info msg="CreateContainer within sandbox \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 20:21:48.069838 containerd[1468]: time="2025-02-13T20:21:48.069656439Z" level=info msg="CreateContainer within sandbox \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\"" Feb 13 20:21:48.071575 containerd[1468]: time="2025-02-13T20:21:48.070805553Z" level=info 
msg="StartContainer for \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\"" Feb 13 20:21:48.127154 systemd[1]: Started cri-containerd-f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e.scope - libcontainer container f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e. Feb 13 20:21:48.165196 containerd[1468]: time="2025-02-13T20:21:48.165132138Z" level=info msg="StartContainer for \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\" returns successfully" Feb 13 20:21:49.678641 kubelet[2575]: I0213 20:21:49.678298 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8v545" podStartSLOduration=3.7181488910000002 podStartE2EDuration="5.678267204s" podCreationTimestamp="2025-02-13 20:21:44 +0000 UTC" firstStartedPulling="2025-02-13 20:21:46.087452631 +0000 UTC m=+7.457574456" lastFinishedPulling="2025-02-13 20:21:48.047570929 +0000 UTC m=+9.417692769" observedRunningTime="2025-02-13 20:21:48.909897108 +0000 UTC m=+10.280019063" watchObservedRunningTime="2025-02-13 20:21:49.678267204 +0000 UTC m=+11.048389051" Feb 13 20:21:53.547338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266360341.mount: Deactivated successfully. 
Feb 13 20:21:56.205439 containerd[1468]: time="2025-02-13T20:21:56.204258782Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:56.206340 containerd[1468]: time="2025-02-13T20:21:56.206275487Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 20:21:56.208086 containerd[1468]: time="2025-02-13T20:21:56.208018458Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:21:56.210744 containerd[1468]: time="2025-02-13T20:21:56.210575372Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.16203361s" Feb 13 20:21:56.210744 containerd[1468]: time="2025-02-13T20:21:56.210625351Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 20:21:56.213636 containerd[1468]: time="2025-02-13T20:21:56.213471636Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 20:21:56.229711 containerd[1468]: time="2025-02-13T20:21:56.229567135Z" level=info msg="CreateContainer within sandbox 
\"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\"" Feb 13 20:21:56.231032 containerd[1468]: time="2025-02-13T20:21:56.230533047Z" level=info msg="StartContainer for \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\"" Feb 13 20:21:56.277758 systemd[1]: run-containerd-runc-k8s.io-7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012-runc.SIbJE3.mount: Deactivated successfully. Feb 13 20:21:56.287141 systemd[1]: Started cri-containerd-7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012.scope - libcontainer container 7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012. Feb 13 20:21:56.324619 containerd[1468]: time="2025-02-13T20:21:56.323807594Z" level=info msg="StartContainer for \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\" returns successfully" Feb 13 20:21:56.342123 systemd[1]: cri-containerd-7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012.scope: Deactivated successfully. Feb 13 20:21:57.225683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012-rootfs.mount: Deactivated successfully. 
Feb 13 20:21:58.427253 containerd[1468]: time="2025-02-13T20:21:58.427158968Z" level=info msg="shim disconnected" id=7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012 namespace=k8s.io Feb 13 20:21:58.427253 containerd[1468]: time="2025-02-13T20:21:58.427250594Z" level=warning msg="cleaning up after shim disconnected" id=7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012 namespace=k8s.io Feb 13 20:21:58.428303 containerd[1468]: time="2025-02-13T20:21:58.427265858Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:21:58.901538 containerd[1468]: time="2025-02-13T20:21:58.901485807Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 20:21:58.925552 containerd[1468]: time="2025-02-13T20:21:58.925486151Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\"" Feb 13 20:21:58.926664 containerd[1468]: time="2025-02-13T20:21:58.926623273Z" level=info msg="StartContainer for \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\"" Feb 13 20:21:58.985190 systemd[1]: Started cri-containerd-2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3.scope - libcontainer container 2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3. Feb 13 20:21:59.029179 containerd[1468]: time="2025-02-13T20:21:59.029108448Z" level=info msg="StartContainer for \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\" returns successfully" Feb 13 20:21:59.050356 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 20:21:59.052033 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 20:21:59.052395 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:21:59.060523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 20:21:59.061501 systemd[1]: cri-containerd-2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3.scope: Deactivated successfully. Feb 13 20:21:59.094318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3-rootfs.mount: Deactivated successfully. Feb 13 20:21:59.099218 containerd[1468]: time="2025-02-13T20:21:59.098845156Z" level=info msg="shim disconnected" id=2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3 namespace=k8s.io Feb 13 20:21:59.099218 containerd[1468]: time="2025-02-13T20:21:59.098962644Z" level=warning msg="cleaning up after shim disconnected" id=2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3 namespace=k8s.io Feb 13 20:21:59.099218 containerd[1468]: time="2025-02-13T20:21:59.098980626Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:21:59.110106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Feb 13 20:21:59.909460 containerd[1468]: time="2025-02-13T20:21:59.909386853Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 20:21:59.954242 containerd[1468]: time="2025-02-13T20:21:59.954182096Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\"" Feb 13 20:21:59.955406 containerd[1468]: time="2025-02-13T20:21:59.955347836Z" level=info msg="StartContainer for \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\"" Feb 13 20:22:00.005194 systemd[1]: Started cri-containerd-d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e.scope - libcontainer container d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e. Feb 13 20:22:00.048983 systemd[1]: cri-containerd-d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e.scope: Deactivated successfully. Feb 13 20:22:00.049959 containerd[1468]: time="2025-02-13T20:22:00.049792379Z" level=info msg="StartContainer for \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\" returns successfully" Feb 13 20:22:00.081598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e-rootfs.mount: Deactivated successfully. 
Feb 13 20:22:00.082952 containerd[1468]: time="2025-02-13T20:22:00.082071955Z" level=info msg="shim disconnected" id=d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e namespace=k8s.io Feb 13 20:22:00.082952 containerd[1468]: time="2025-02-13T20:22:00.082145128Z" level=warning msg="cleaning up after shim disconnected" id=d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e namespace=k8s.io Feb 13 20:22:00.082952 containerd[1468]: time="2025-02-13T20:22:00.082161139Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:22:00.911751 containerd[1468]: time="2025-02-13T20:22:00.911694845Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 20:22:00.935214 containerd[1468]: time="2025-02-13T20:22:00.935158222Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\"" Feb 13 20:22:00.936052 containerd[1468]: time="2025-02-13T20:22:00.935985712Z" level=info msg="StartContainer for \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\"" Feb 13 20:22:00.994198 systemd[1]: Started cri-containerd-51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec.scope - libcontainer container 51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec. Feb 13 20:22:01.031229 systemd[1]: cri-containerd-51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec.scope: Deactivated successfully. 
Feb 13 20:22:01.033683 containerd[1468]: time="2025-02-13T20:22:01.033212602Z" level=info msg="StartContainer for \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\" returns successfully" Feb 13 20:22:01.063365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec-rootfs.mount: Deactivated successfully. Feb 13 20:22:01.064887 containerd[1468]: time="2025-02-13T20:22:01.064623683Z" level=info msg="shim disconnected" id=51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec namespace=k8s.io Feb 13 20:22:01.064887 containerd[1468]: time="2025-02-13T20:22:01.064688288Z" level=warning msg="cleaning up after shim disconnected" id=51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec namespace=k8s.io Feb 13 20:22:01.064887 containerd[1468]: time="2025-02-13T20:22:01.064701711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:22:01.920889 containerd[1468]: time="2025-02-13T20:22:01.920545319Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 20:22:01.951400 containerd[1468]: time="2025-02-13T20:22:01.951337620Z" level=info msg="CreateContainer within sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\"" Feb 13 20:22:01.956252 containerd[1468]: time="2025-02-13T20:22:01.956068842Z" level=info msg="StartContainer for \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\"" Feb 13 20:22:02.007174 systemd[1]: Started cri-containerd-01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f.scope - libcontainer container 01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f. 
Feb 13 20:22:02.051859 containerd[1468]: time="2025-02-13T20:22:02.051812863Z" level=info msg="StartContainer for \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\" returns successfully" Feb 13 20:22:02.253507 kubelet[2575]: I0213 20:22:02.253322 2575 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 20:22:02.318395 systemd[1]: Created slice kubepods-burstable-podf128b4cc_4e0a_465f_af75_a8bd93bb062a.slice - libcontainer container kubepods-burstable-podf128b4cc_4e0a_465f_af75_a8bd93bb062a.slice. Feb 13 20:22:02.343614 systemd[1]: Created slice kubepods-burstable-podb12c46a0_e9ee_412e_a65f_fa96bcb3ae34.slice - libcontainer container kubepods-burstable-podb12c46a0_e9ee_412e_a65f_fa96bcb3ae34.slice. Feb 13 20:22:02.448307 kubelet[2575]: I0213 20:22:02.448243 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whz8x\" (UniqueName: \"kubernetes.io/projected/b12c46a0-e9ee-412e-a65f-fa96bcb3ae34-kube-api-access-whz8x\") pod \"coredns-6f6b679f8f-wpsjz\" (UID: \"b12c46a0-e9ee-412e-a65f-fa96bcb3ae34\") " pod="kube-system/coredns-6f6b679f8f-wpsjz" Feb 13 20:22:02.448489 kubelet[2575]: I0213 20:22:02.448381 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5npg\" (UniqueName: \"kubernetes.io/projected/f128b4cc-4e0a-465f-af75-a8bd93bb062a-kube-api-access-k5npg\") pod \"coredns-6f6b679f8f-7r7fz\" (UID: \"f128b4cc-4e0a-465f-af75-a8bd93bb062a\") " pod="kube-system/coredns-6f6b679f8f-7r7fz" Feb 13 20:22:02.448489 kubelet[2575]: I0213 20:22:02.448467 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b12c46a0-e9ee-412e-a65f-fa96bcb3ae34-config-volume\") pod \"coredns-6f6b679f8f-wpsjz\" (UID: \"b12c46a0-e9ee-412e-a65f-fa96bcb3ae34\") " pod="kube-system/coredns-6f6b679f8f-wpsjz" Feb 13 
20:22:02.448633 kubelet[2575]: I0213 20:22:02.448503 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f128b4cc-4e0a-465f-af75-a8bd93bb062a-config-volume\") pod \"coredns-6f6b679f8f-7r7fz\" (UID: \"f128b4cc-4e0a-465f-af75-a8bd93bb062a\") " pod="kube-system/coredns-6f6b679f8f-7r7fz" Feb 13 20:22:02.633017 containerd[1468]: time="2025-02-13T20:22:02.632362228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7r7fz,Uid:f128b4cc-4e0a-465f-af75-a8bd93bb062a,Namespace:kube-system,Attempt:0,}" Feb 13 20:22:02.651075 containerd[1468]: time="2025-02-13T20:22:02.651011409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wpsjz,Uid:b12c46a0-e9ee-412e-a65f-fa96bcb3ae34,Namespace:kube-system,Attempt:0,}" Feb 13 20:22:04.476792 systemd-networkd[1378]: cilium_host: Link UP Feb 13 20:22:04.477105 systemd-networkd[1378]: cilium_net: Link UP Feb 13 20:22:04.477114 systemd-networkd[1378]: cilium_net: Gained carrier Feb 13 20:22:04.477438 systemd-networkd[1378]: cilium_host: Gained carrier Feb 13 20:22:04.483720 systemd-networkd[1378]: cilium_host: Gained IPv6LL Feb 13 20:22:04.642351 systemd-networkd[1378]: cilium_vxlan: Link UP Feb 13 20:22:04.642804 systemd-networkd[1378]: cilium_vxlan: Gained carrier Feb 13 20:22:04.933117 kernel: NET: Registered PF_ALG protocol family Feb 13 20:22:04.950211 systemd-networkd[1378]: cilium_net: Gained IPv6LL Feb 13 20:22:05.846214 systemd-networkd[1378]: lxc_health: Link UP Feb 13 20:22:05.860125 systemd-networkd[1378]: lxc_health: Gained carrier Feb 13 20:22:05.950638 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL Feb 13 20:22:06.239595 systemd-networkd[1378]: lxccb4b2b1a8d92: Link UP Feb 13 20:22:06.253174 kernel: eth0: renamed from tmpa0c08 Feb 13 20:22:06.265849 systemd-networkd[1378]: lxccb4b2b1a8d92: Gained carrier Feb 13 20:22:06.280736 systemd-networkd[1378]: 
lxc180bdd05d160: Link UP Feb 13 20:22:06.298570 kernel: eth0: renamed from tmp204c2 Feb 13 20:22:06.311499 systemd-networkd[1378]: lxc180bdd05d160: Gained carrier Feb 13 20:22:06.516509 kubelet[2575]: I0213 20:22:06.516340 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2qqqn" podStartSLOduration=12.889946144 podStartE2EDuration="22.51631623s" podCreationTimestamp="2025-02-13 20:21:44 +0000 UTC" firstStartedPulling="2025-02-13 20:21:46.585397893 +0000 UTC m=+7.955519729" lastFinishedPulling="2025-02-13 20:21:56.211767988 +0000 UTC m=+17.581889815" observedRunningTime="2025-02-13 20:22:02.948034339 +0000 UTC m=+24.318156190" watchObservedRunningTime="2025-02-13 20:22:06.51631623 +0000 UTC m=+27.886438077" Feb 13 20:22:07.550124 systemd-networkd[1378]: lxc_health: Gained IPv6LL Feb 13 20:22:08.063697 systemd-networkd[1378]: lxccb4b2b1a8d92: Gained IPv6LL Feb 13 20:22:08.318801 systemd-networkd[1378]: lxc180bdd05d160: Gained IPv6LL Feb 13 20:22:11.260219 ntpd[1437]: Listen normally on 7 cilium_host 192.168.0.92:123 Feb 13 20:22:11.261468 ntpd[1437]: 13 Feb 20:22:11 ntpd[1437]: Listen normally on 7 cilium_host 192.168.0.92:123 Feb 13 20:22:11.261468 ntpd[1437]: 13 Feb 20:22:11 ntpd[1437]: Listen normally on 8 cilium_net [fe80::bc9e:fdff:fe6a:5f63%4]:123 Feb 13 20:22:11.261468 ntpd[1437]: 13 Feb 20:22:11 ntpd[1437]: Listen normally on 9 cilium_host [fe80::2cfc:cfff:fe19:ecfc%5]:123 Feb 13 20:22:11.261468 ntpd[1437]: 13 Feb 20:22:11 ntpd[1437]: Listen normally on 10 cilium_vxlan [fe80::448f:d7ff:fecc:4fc2%6]:123 Feb 13 20:22:11.261468 ntpd[1437]: 13 Feb 20:22:11 ntpd[1437]: Listen normally on 11 lxc_health [fe80::78d2:bfff:fe31:6a31%8]:123 Feb 13 20:22:11.261468 ntpd[1437]: 13 Feb 20:22:11 ntpd[1437]: Listen normally on 12 lxccb4b2b1a8d92 [fe80::a44e:f5ff:feca:5784%10]:123 Feb 13 20:22:11.261468 ntpd[1437]: 13 Feb 20:22:11 ntpd[1437]: Listen normally on 13 lxc180bdd05d160 [fe80::d88b:88ff:fecb:5240%12]:123 Feb 13 
20:22:11.260372 ntpd[1437]: Listen normally on 8 cilium_net [fe80::bc9e:fdff:fe6a:5f63%4]:123 Feb 13 20:22:11.260464 ntpd[1437]: Listen normally on 9 cilium_host [fe80::2cfc:cfff:fe19:ecfc%5]:123 Feb 13 20:22:11.260531 ntpd[1437]: Listen normally on 10 cilium_vxlan [fe80::448f:d7ff:fecc:4fc2%6]:123 Feb 13 20:22:11.260598 ntpd[1437]: Listen normally on 11 lxc_health [fe80::78d2:bfff:fe31:6a31%8]:123 Feb 13 20:22:11.260663 ntpd[1437]: Listen normally on 12 lxccb4b2b1a8d92 [fe80::a44e:f5ff:feca:5784%10]:123 Feb 13 20:22:11.260729 ntpd[1437]: Listen normally on 13 lxc180bdd05d160 [fe80::d88b:88ff:fecb:5240%12]:123 Feb 13 20:22:11.404906 containerd[1468]: time="2025-02-13T20:22:11.403956905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:11.404906 containerd[1468]: time="2025-02-13T20:22:11.404075684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:11.404906 containerd[1468]: time="2025-02-13T20:22:11.404098787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:11.404906 containerd[1468]: time="2025-02-13T20:22:11.404267695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:11.407778 containerd[1468]: time="2025-02-13T20:22:11.406537626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:11.407778 containerd[1468]: time="2025-02-13T20:22:11.406617136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:11.407778 containerd[1468]: time="2025-02-13T20:22:11.406650819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:11.407778 containerd[1468]: time="2025-02-13T20:22:11.406785744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:11.490251 systemd[1]: Started cri-containerd-204c2c5808ab01bcbb7872182034280f23149c889890bfc09ba55063d5bfb0e6.scope - libcontainer container 204c2c5808ab01bcbb7872182034280f23149c889890bfc09ba55063d5bfb0e6. Feb 13 20:22:11.500654 systemd[1]: Started cri-containerd-a0c0825697404f09fd446191bfea350f5f17ca39580231b4551f98c4ee729068.scope - libcontainer container a0c0825697404f09fd446191bfea350f5f17ca39580231b4551f98c4ee729068. Feb 13 20:22:11.630263 containerd[1468]: time="2025-02-13T20:22:11.628863713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wpsjz,Uid:b12c46a0-e9ee-412e-a65f-fa96bcb3ae34,Namespace:kube-system,Attempt:0,} returns sandbox id \"204c2c5808ab01bcbb7872182034280f23149c889890bfc09ba55063d5bfb0e6\"" Feb 13 20:22:11.636460 containerd[1468]: time="2025-02-13T20:22:11.636393504Z" level=info msg="CreateContainer within sandbox \"204c2c5808ab01bcbb7872182034280f23149c889890bfc09ba55063d5bfb0e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:22:11.651724 containerd[1468]: time="2025-02-13T20:22:11.651560779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7r7fz,Uid:f128b4cc-4e0a-465f-af75-a8bd93bb062a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0c0825697404f09fd446191bfea350f5f17ca39580231b4551f98c4ee729068\"" Feb 13 20:22:11.660757 containerd[1468]: time="2025-02-13T20:22:11.658829597Z" level=info msg="CreateContainer within sandbox 
\"a0c0825697404f09fd446191bfea350f5f17ca39580231b4551f98c4ee729068\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 20:22:11.676585 containerd[1468]: time="2025-02-13T20:22:11.676528828Z" level=info msg="CreateContainer within sandbox \"204c2c5808ab01bcbb7872182034280f23149c889890bfc09ba55063d5bfb0e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e84d095f276798ca88403a9bf16e201814d7712b66ceedf43164fb4c1f8374ec\"" Feb 13 20:22:11.677750 containerd[1468]: time="2025-02-13T20:22:11.677713419Z" level=info msg="StartContainer for \"e84d095f276798ca88403a9bf16e201814d7712b66ceedf43164fb4c1f8374ec\"" Feb 13 20:22:11.700453 containerd[1468]: time="2025-02-13T20:22:11.700387676Z" level=info msg="CreateContainer within sandbox \"a0c0825697404f09fd446191bfea350f5f17ca39580231b4551f98c4ee729068\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"43bad772b5aab6f7f99783d72a6f5c85ba57bfb98c0952d832568f3ee2e27a3f\"" Feb 13 20:22:11.702859 containerd[1468]: time="2025-02-13T20:22:11.702571453Z" level=info msg="StartContainer for \"43bad772b5aab6f7f99783d72a6f5c85ba57bfb98c0952d832568f3ee2e27a3f\"" Feb 13 20:22:11.762966 systemd[1]: Started cri-containerd-e84d095f276798ca88403a9bf16e201814d7712b66ceedf43164fb4c1f8374ec.scope - libcontainer container e84d095f276798ca88403a9bf16e201814d7712b66ceedf43164fb4c1f8374ec. Feb 13 20:22:11.776359 systemd[1]: Started cri-containerd-43bad772b5aab6f7f99783d72a6f5c85ba57bfb98c0952d832568f3ee2e27a3f.scope - libcontainer container 43bad772b5aab6f7f99783d72a6f5c85ba57bfb98c0952d832568f3ee2e27a3f. 
Feb 13 20:22:11.826972 containerd[1468]: time="2025-02-13T20:22:11.826730967Z" level=info msg="StartContainer for \"e84d095f276798ca88403a9bf16e201814d7712b66ceedf43164fb4c1f8374ec\" returns successfully" Feb 13 20:22:11.833645 containerd[1468]: time="2025-02-13T20:22:11.833593734Z" level=info msg="StartContainer for \"43bad772b5aab6f7f99783d72a6f5c85ba57bfb98c0952d832568f3ee2e27a3f\" returns successfully" Feb 13 20:22:11.973529 kubelet[2575]: I0213 20:22:11.971710 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wpsjz" podStartSLOduration=27.971684326 podStartE2EDuration="27.971684326s" podCreationTimestamp="2025-02-13 20:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:22:11.970187864 +0000 UTC m=+33.340309717" watchObservedRunningTime="2025-02-13 20:22:11.971684326 +0000 UTC m=+33.341806174" Feb 13 20:22:11.992245 kubelet[2575]: I0213 20:22:11.991713 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7r7fz" podStartSLOduration=27.991686754 podStartE2EDuration="27.991686754s" podCreationTimestamp="2025-02-13 20:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:22:11.991233391 +0000 UTC m=+33.361355238" watchObservedRunningTime="2025-02-13 20:22:11.991686754 +0000 UTC m=+33.361808602" Feb 13 20:22:12.417147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858646796.mount: Deactivated successfully. Feb 13 20:22:25.673317 systemd[1]: Started sshd@9-10.128.0.9:22-139.178.89.65:43260.service - OpenSSH per-connection server daemon (139.178.89.65:43260). 
Feb 13 20:22:25.967556 sshd[3957]: Accepted publickey for core from 139.178.89.65 port 43260 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:22:25.969654 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:25.976654 systemd-logind[1452]: New session 10 of user core. Feb 13 20:22:25.980164 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 20:22:26.274609 sshd[3957]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:26.280515 systemd[1]: sshd@9-10.128.0.9:22-139.178.89.65:43260.service: Deactivated successfully. Feb 13 20:22:26.286182 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 20:22:26.288877 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Feb 13 20:22:26.291183 systemd-logind[1452]: Removed session 10. Feb 13 20:22:31.335374 systemd[1]: Started sshd@10-10.128.0.9:22-139.178.89.65:43276.service - OpenSSH per-connection server daemon (139.178.89.65:43276). Feb 13 20:22:31.619768 sshd[3972]: Accepted publickey for core from 139.178.89.65 port 43276 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:22:31.622122 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:31.629075 systemd-logind[1452]: New session 11 of user core. Feb 13 20:22:31.639213 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 20:22:31.916123 sshd[3972]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:31.921600 systemd[1]: sshd@10-10.128.0.9:22-139.178.89.65:43276.service: Deactivated successfully. Feb 13 20:22:31.924664 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 20:22:31.927124 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Feb 13 20:22:31.928881 systemd-logind[1452]: Removed session 11. 
Feb 13 20:22:36.970513 systemd[1]: Started sshd@11-10.128.0.9:22-139.178.89.65:45626.service - OpenSSH per-connection server daemon (139.178.89.65:45626). Feb 13 20:22:37.262888 sshd[3986]: Accepted publickey for core from 139.178.89.65 port 45626 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:22:37.264919 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:37.272353 systemd-logind[1452]: New session 12 of user core. Feb 13 20:22:37.278176 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 20:22:37.549729 sshd[3986]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:37.555226 systemd[1]: sshd@11-10.128.0.9:22-139.178.89.65:45626.service: Deactivated successfully. Feb 13 20:22:37.558817 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 20:22:37.561356 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Feb 13 20:22:37.563094 systemd-logind[1452]: Removed session 12. Feb 13 20:22:42.609336 systemd[1]: Started sshd@12-10.128.0.9:22-139.178.89.65:45634.service - OpenSSH per-connection server daemon (139.178.89.65:45634). Feb 13 20:22:42.902433 sshd[4002]: Accepted publickey for core from 139.178.89.65 port 45634 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:22:42.904569 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:42.910992 systemd-logind[1452]: New session 13 of user core. Feb 13 20:22:42.918158 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 20:22:43.194521 sshd[4002]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:43.199894 systemd[1]: sshd@12-10.128.0.9:22-139.178.89.65:45634.service: Deactivated successfully. Feb 13 20:22:43.203495 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 20:22:43.206285 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. 
Feb 13 20:22:43.207803 systemd-logind[1452]: Removed session 13. Feb 13 20:22:48.252337 systemd[1]: Started sshd@13-10.128.0.9:22-139.178.89.65:43292.service - OpenSSH per-connection server daemon (139.178.89.65:43292). Feb 13 20:22:48.539734 sshd[4018]: Accepted publickey for core from 139.178.89.65 port 43292 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:22:48.541860 sshd[4018]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:48.549077 systemd-logind[1452]: New session 14 of user core. Feb 13 20:22:48.555200 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 20:22:48.830229 sshd[4018]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:48.838490 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Feb 13 20:22:48.838780 systemd[1]: sshd@13-10.128.0.9:22-139.178.89.65:43292.service: Deactivated successfully. Feb 13 20:22:48.842291 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 20:22:48.845362 systemd-logind[1452]: Removed session 14. Feb 13 20:22:48.888464 systemd[1]: Started sshd@14-10.128.0.9:22-139.178.89.65:43308.service - OpenSSH per-connection server daemon (139.178.89.65:43308). Feb 13 20:22:49.180870 sshd[4032]: Accepted publickey for core from 139.178.89.65 port 43308 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:22:49.183232 sshd[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:49.189968 systemd-logind[1452]: New session 15 of user core. Feb 13 20:22:49.195151 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 20:22:49.513317 sshd[4032]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:49.522443 systemd[1]: sshd@14-10.128.0.9:22-139.178.89.65:43308.service: Deactivated successfully. Feb 13 20:22:49.524954 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. 
Feb 13 20:22:49.529043 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 20:22:49.535327 systemd-logind[1452]: Removed session 15. Feb 13 20:22:49.571334 systemd[1]: Started sshd@15-10.128.0.9:22-139.178.89.65:43312.service - OpenSSH per-connection server daemon (139.178.89.65:43312). Feb 13 20:22:49.875932 sshd[4042]: Accepted publickey for core from 139.178.89.65 port 43312 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:22:49.877994 sshd[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:49.884554 systemd-logind[1452]: New session 16 of user core. Feb 13 20:22:49.897242 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 20:22:50.169081 sshd[4042]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:50.175875 systemd[1]: sshd@15-10.128.0.9:22-139.178.89.65:43312.service: Deactivated successfully. Feb 13 20:22:50.178517 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 20:22:50.179505 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Feb 13 20:22:50.181354 systemd-logind[1452]: Removed session 16. Feb 13 20:22:55.224324 systemd[1]: Started sshd@16-10.128.0.9:22-139.178.89.65:45244.service - OpenSSH per-connection server daemon (139.178.89.65:45244). Feb 13 20:22:55.513390 sshd[4055]: Accepted publickey for core from 139.178.89.65 port 45244 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:22:55.515564 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:22:55.522685 systemd-logind[1452]: New session 17 of user core. Feb 13 20:22:55.530134 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 20:22:55.805200 sshd[4055]: pam_unix(sshd:session): session closed for user core Feb 13 20:22:55.810230 systemd[1]: sshd@16-10.128.0.9:22-139.178.89.65:45244.service: Deactivated successfully. 
Feb 13 20:22:55.814044 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 20:22:55.816688 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Feb 13 20:22:55.818400 systemd-logind[1452]: Removed session 17. Feb 13 20:23:00.860653 systemd[1]: Started sshd@17-10.128.0.9:22-139.178.89.65:45258.service - OpenSSH per-connection server daemon (139.178.89.65:45258). Feb 13 20:23:01.144797 sshd[4068]: Accepted publickey for core from 139.178.89.65 port 45258 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:01.146873 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:01.153895 systemd-logind[1452]: New session 18 of user core. Feb 13 20:23:01.161157 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 20:23:01.430724 sshd[4068]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:01.436135 systemd[1]: sshd@17-10.128.0.9:22-139.178.89.65:45258.service: Deactivated successfully. Feb 13 20:23:01.439650 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 20:23:01.441970 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Feb 13 20:23:01.444026 systemd-logind[1452]: Removed session 18. Feb 13 20:23:01.490382 systemd[1]: Started sshd@18-10.128.0.9:22-139.178.89.65:45260.service - OpenSSH per-connection server daemon (139.178.89.65:45260). Feb 13 20:23:01.769788 sshd[4081]: Accepted publickey for core from 139.178.89.65 port 45260 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:01.771969 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:01.777878 systemd-logind[1452]: New session 19 of user core. Feb 13 20:23:01.782134 systemd[1]: Started session-19.scope - Session 19 of User core. 
Feb 13 20:23:02.135235 sshd[4081]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:02.142376 systemd[1]: sshd@18-10.128.0.9:22-139.178.89.65:45260.service: Deactivated successfully. Feb 13 20:23:02.147611 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 20:23:02.148801 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Feb 13 20:23:02.150375 systemd-logind[1452]: Removed session 19. Feb 13 20:23:02.193638 systemd[1]: Started sshd@19-10.128.0.9:22-139.178.89.65:45264.service - OpenSSH per-connection server daemon (139.178.89.65:45264). Feb 13 20:23:02.478696 sshd[4092]: Accepted publickey for core from 139.178.89.65 port 45264 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:02.480988 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:02.488555 systemd-logind[1452]: New session 20 of user core. Feb 13 20:23:02.496168 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 20:23:04.276690 sshd[4092]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:04.283235 systemd[1]: sshd@19-10.128.0.9:22-139.178.89.65:45264.service: Deactivated successfully. Feb 13 20:23:04.286887 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 20:23:04.288043 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Feb 13 20:23:04.289552 systemd-logind[1452]: Removed session 20. Feb 13 20:23:04.337322 systemd[1]: Started sshd@20-10.128.0.9:22-139.178.89.65:45272.service - OpenSSH per-connection server daemon (139.178.89.65:45272). Feb 13 20:23:04.620343 sshd[4110]: Accepted publickey for core from 139.178.89.65 port 45272 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:04.622510 sshd[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:04.629412 systemd-logind[1452]: New session 21 of user core. 
Feb 13 20:23:04.636180 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 20:23:05.063535 sshd[4110]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:05.069792 systemd[1]: sshd@20-10.128.0.9:22-139.178.89.65:45272.service: Deactivated successfully. Feb 13 20:23:05.072841 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 20:23:05.074196 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit. Feb 13 20:23:05.075818 systemd-logind[1452]: Removed session 21. Feb 13 20:23:05.118321 systemd[1]: Started sshd@21-10.128.0.9:22-139.178.89.65:60280.service - OpenSSH per-connection server daemon (139.178.89.65:60280). Feb 13 20:23:05.408193 sshd[4121]: Accepted publickey for core from 139.178.89.65 port 60280 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:05.410359 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:05.417418 systemd-logind[1452]: New session 22 of user core. Feb 13 20:23:05.422156 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 20:23:05.693093 sshd[4121]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:05.699505 systemd[1]: sshd@21-10.128.0.9:22-139.178.89.65:60280.service: Deactivated successfully. Feb 13 20:23:05.703125 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 20:23:05.704311 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit. Feb 13 20:23:05.706023 systemd-logind[1452]: Removed session 22. Feb 13 20:23:10.752897 systemd[1]: Started sshd@22-10.128.0.9:22-139.178.89.65:60290.service - OpenSSH per-connection server daemon (139.178.89.65:60290). 
Feb 13 20:23:11.034613 sshd[4137]: Accepted publickey for core from 139.178.89.65 port 60290 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:11.036771 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:11.042623 systemd-logind[1452]: New session 23 of user core. Feb 13 20:23:11.052198 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 20:23:11.318740 sshd[4137]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:11.324354 systemd[1]: sshd@22-10.128.0.9:22-139.178.89.65:60290.service: Deactivated successfully. Feb 13 20:23:11.327422 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 20:23:11.330034 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit. Feb 13 20:23:11.332051 systemd-logind[1452]: Removed session 23. Feb 13 20:23:16.380418 systemd[1]: Started sshd@23-10.128.0.9:22-139.178.89.65:55902.service - OpenSSH per-connection server daemon (139.178.89.65:55902). Feb 13 20:23:16.666023 sshd[4152]: Accepted publickey for core from 139.178.89.65 port 55902 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:16.668176 sshd[4152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:16.675373 systemd-logind[1452]: New session 24 of user core. Feb 13 20:23:16.682210 systemd[1]: Started session-24.scope - Session 24 of User core. Feb 13 20:23:16.961319 sshd[4152]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:16.966529 systemd[1]: sshd@23-10.128.0.9:22-139.178.89.65:55902.service: Deactivated successfully. Feb 13 20:23:16.970197 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 20:23:16.972763 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit. Feb 13 20:23:16.974663 systemd-logind[1452]: Removed session 24. 
Feb 13 20:23:22.020078 systemd[1]: Started sshd@24-10.128.0.9:22-139.178.89.65:55908.service - OpenSSH per-connection server daemon (139.178.89.65:55908). Feb 13 20:23:22.307132 sshd[4165]: Accepted publickey for core from 139.178.89.65 port 55908 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:22.309276 sshd[4165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:22.315018 systemd-logind[1452]: New session 25 of user core. Feb 13 20:23:22.322167 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 20:23:22.593113 sshd[4165]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:22.598657 systemd[1]: sshd@24-10.128.0.9:22-139.178.89.65:55908.service: Deactivated successfully. Feb 13 20:23:22.602204 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 20:23:22.604647 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit. Feb 13 20:23:22.606538 systemd-logind[1452]: Removed session 25. Feb 13 20:23:22.652405 systemd[1]: Started sshd@25-10.128.0.9:22-139.178.89.65:55924.service - OpenSSH per-connection server daemon (139.178.89.65:55924). Feb 13 20:23:22.943327 sshd[4178]: Accepted publickey for core from 139.178.89.65 port 55924 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY Feb 13 20:23:22.945503 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:23:22.956866 systemd-logind[1452]: New session 26 of user core. Feb 13 20:23:22.961213 systemd[1]: Started session-26.scope - Session 26 of User core. 
Feb 13 20:23:24.430754 containerd[1468]: time="2025-02-13T20:23:24.429328628Z" level=info msg="StopContainer for \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\" with timeout 30 (s)" Feb 13 20:23:24.433064 containerd[1468]: time="2025-02-13T20:23:24.432969013Z" level=info msg="Stop container \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\" with signal terminated" Feb 13 20:23:24.472459 systemd[1]: run-containerd-runc-k8s.io-01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f-runc.oO6Ulz.mount: Deactivated successfully. Feb 13 20:23:24.474801 systemd[1]: cri-containerd-f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e.scope: Deactivated successfully. Feb 13 20:23:24.483768 containerd[1468]: time="2025-02-13T20:23:24.483530349Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:23:24.510250 containerd[1468]: time="2025-02-13T20:23:24.510024027Z" level=info msg="StopContainer for \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\" with timeout 2 (s)" Feb 13 20:23:24.512424 containerd[1468]: time="2025-02-13T20:23:24.512330263Z" level=info msg="Stop container \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\" with signal terminated" Feb 13 20:23:24.522464 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e-rootfs.mount: Deactivated successfully. Feb 13 20:23:24.530875 systemd-networkd[1378]: lxc_health: Link DOWN Feb 13 20:23:24.530889 systemd-networkd[1378]: lxc_health: Lost carrier Feb 13 20:23:24.551624 systemd[1]: cri-containerd-01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f.scope: Deactivated successfully. 
Feb 13 20:23:24.552027 systemd[1]: cri-containerd-01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f.scope: Consumed 9.681s CPU time. Feb 13 20:23:24.557131 containerd[1468]: time="2025-02-13T20:23:24.556964823Z" level=info msg="shim disconnected" id=f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e namespace=k8s.io Feb 13 20:23:24.557131 containerd[1468]: time="2025-02-13T20:23:24.557077336Z" level=warning msg="cleaning up after shim disconnected" id=f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e namespace=k8s.io Feb 13 20:23:24.557673 containerd[1468]: time="2025-02-13T20:23:24.557096237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:23:24.597790 containerd[1468]: time="2025-02-13T20:23:24.597739098Z" level=info msg="StopContainer for \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\" returns successfully" Feb 13 20:23:24.598807 containerd[1468]: time="2025-02-13T20:23:24.598643472Z" level=info msg="StopPodSandbox for \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\"" Feb 13 20:23:24.598807 containerd[1468]: time="2025-02-13T20:23:24.598708278Z" level=info msg="Container to stop \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:23:24.607032 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391-shm.mount: Deactivated successfully. Feb 13 20:23:24.615518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f-rootfs.mount: Deactivated successfully. Feb 13 20:23:24.622055 systemd[1]: cri-containerd-b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391.scope: Deactivated successfully. 
Feb 13 20:23:24.624750 containerd[1468]: time="2025-02-13T20:23:24.624443518Z" level=info msg="shim disconnected" id=01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f namespace=k8s.io Feb 13 20:23:24.624750 containerd[1468]: time="2025-02-13T20:23:24.624559105Z" level=warning msg="cleaning up after shim disconnected" id=01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f namespace=k8s.io Feb 13 20:23:24.624750 containerd[1468]: time="2025-02-13T20:23:24.624609394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:23:24.657119 containerd[1468]: time="2025-02-13T20:23:24.657064500Z" level=info msg="StopContainer for \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\" returns successfully" Feb 13 20:23:24.658492 containerd[1468]: time="2025-02-13T20:23:24.658102909Z" level=info msg="StopPodSandbox for \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\"" Feb 13 20:23:24.658492 containerd[1468]: time="2025-02-13T20:23:24.658174436Z" level=info msg="Container to stop \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:23:24.658689 containerd[1468]: time="2025-02-13T20:23:24.658199038Z" level=info msg="Container to stop \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:23:24.658689 containerd[1468]: time="2025-02-13T20:23:24.658582333Z" level=info msg="Container to stop \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:23:24.658689 containerd[1468]: time="2025-02-13T20:23:24.658613243Z" level=info msg="Container to stop \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:23:24.658689 
containerd[1468]: time="2025-02-13T20:23:24.658631564Z" level=info msg="Container to stop \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 20:23:24.672564 systemd[1]: cri-containerd-20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b.scope: Deactivated successfully. Feb 13 20:23:24.676676 containerd[1468]: time="2025-02-13T20:23:24.676367863Z" level=info msg="shim disconnected" id=b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391 namespace=k8s.io Feb 13 20:23:24.676676 containerd[1468]: time="2025-02-13T20:23:24.676432567Z" level=warning msg="cleaning up after shim disconnected" id=b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391 namespace=k8s.io Feb 13 20:23:24.676676 containerd[1468]: time="2025-02-13T20:23:24.676449942Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:23:24.704464 containerd[1468]: time="2025-02-13T20:23:24.704335963Z" level=info msg="TearDown network for sandbox \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\" successfully" Feb 13 20:23:24.704464 containerd[1468]: time="2025-02-13T20:23:24.704388386Z" level=info msg="StopPodSandbox for \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\" returns successfully" Feb 13 20:23:24.718326 containerd[1468]: time="2025-02-13T20:23:24.718242527Z" level=info msg="shim disconnected" id=20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b namespace=k8s.io Feb 13 20:23:24.718326 containerd[1468]: time="2025-02-13T20:23:24.718315521Z" level=warning msg="cleaning up after shim disconnected" id=20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b namespace=k8s.io Feb 13 20:23:24.718326 containerd[1468]: time="2025-02-13T20:23:24.718332714Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:23:24.744970 containerd[1468]: time="2025-02-13T20:23:24.744837445Z" level=info 
msg="TearDown network for sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" successfully" Feb 13 20:23:24.745137 containerd[1468]: time="2025-02-13T20:23:24.744903028Z" level=info msg="StopPodSandbox for \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" returns successfully" Feb 13 20:23:24.875990 kubelet[2575]: I0213 20:23:24.875457 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7677b5b-234a-4dbf-be39-3741372d9305-clustermesh-secrets\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.875990 kubelet[2575]: I0213 20:23:24.875598 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-etc-cni-netd\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.875990 kubelet[2575]: I0213 20:23:24.875638 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-cgroup\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.875990 kubelet[2575]: I0213 20:23:24.875677 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f96ea5f-a26a-4cdc-b943-317287cd6869-cilium-config-path\") pod \"8f96ea5f-a26a-4cdc-b943-317287cd6869\" (UID: \"8f96ea5f-a26a-4cdc-b943-317287cd6869\") " Feb 13 20:23:24.875990 kubelet[2575]: I0213 20:23:24.875724 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-host-proc-sys-kernel\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.875990 kubelet[2575]: I0213 20:23:24.875755 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7677b5b-234a-4dbf-be39-3741372d9305-hubble-tls\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877014 kubelet[2575]: I0213 20:23:24.875781 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-run\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877014 kubelet[2575]: I0213 20:23:24.875810 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-xtables-lock\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877014 kubelet[2575]: I0213 20:23:24.875843 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q75qt\" (UniqueName: \"kubernetes.io/projected/8f96ea5f-a26a-4cdc-b943-317287cd6869-kube-api-access-q75qt\") pod \"8f96ea5f-a26a-4cdc-b943-317287cd6869\" (UID: \"8f96ea5f-a26a-4cdc-b943-317287cd6869\") " Feb 13 20:23:24.877014 kubelet[2575]: I0213 20:23:24.875882 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-config-path\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877014 kubelet[2575]: I0213 20:23:24.875962 
2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-hostproc\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877014 kubelet[2575]: I0213 20:23:24.876010 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gckgd\" (UniqueName: \"kubernetes.io/projected/a7677b5b-234a-4dbf-be39-3741372d9305-kube-api-access-gckgd\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877456 kubelet[2575]: I0213 20:23:24.876038 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-lib-modules\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877456 kubelet[2575]: I0213 20:23:24.876067 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-host-proc-sys-net\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877456 kubelet[2575]: I0213 20:23:24.876100 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cni-path\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: \"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877456 kubelet[2575]: I0213 20:23:24.876130 2575 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-bpf-maps\") pod \"a7677b5b-234a-4dbf-be39-3741372d9305\" (UID: 
\"a7677b5b-234a-4dbf-be39-3741372d9305\") " Feb 13 20:23:24.877456 kubelet[2575]: I0213 20:23:24.876215 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.877456 kubelet[2575]: I0213 20:23:24.876270 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.877882 kubelet[2575]: I0213 20:23:24.876299 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.881719 kubelet[2575]: I0213 20:23:24.881376 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-hostproc" (OuterVolumeSpecName: "hostproc") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.884397 kubelet[2575]: I0213 20:23:24.884357 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.887608 kubelet[2575]: I0213 20:23:24.884492 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.887608 kubelet[2575]: I0213 20:23:24.884517 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.887608 kubelet[2575]: I0213 20:23:24.884570 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cni-path" (OuterVolumeSpecName: "cni-path") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.887891 kubelet[2575]: I0213 20:23:24.887023 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.887891 kubelet[2575]: I0213 20:23:24.887090 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 20:23:24.889729 kubelet[2575]: I0213 20:23:24.889692 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f96ea5f-a26a-4cdc-b943-317287cd6869-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f96ea5f-a26a-4cdc-b943-317287cd6869" (UID: "8f96ea5f-a26a-4cdc-b943-317287cd6869"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:23:24.892755 kubelet[2575]: I0213 20:23:24.889970 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7677b5b-234a-4dbf-be39-3741372d9305-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 20:23:24.896597 kubelet[2575]: I0213 20:23:24.896558 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 20:23:24.899253 kubelet[2575]: I0213 20:23:24.896825 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7677b5b-234a-4dbf-be39-3741372d9305-kube-api-access-gckgd" (OuterVolumeSpecName: "kube-api-access-gckgd") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "kube-api-access-gckgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:23:24.899253 kubelet[2575]: I0213 20:23:24.896837 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f96ea5f-a26a-4cdc-b943-317287cd6869-kube-api-access-q75qt" (OuterVolumeSpecName: "kube-api-access-q75qt") pod "8f96ea5f-a26a-4cdc-b943-317287cd6869" (UID: "8f96ea5f-a26a-4cdc-b943-317287cd6869"). InnerVolumeSpecName "kube-api-access-q75qt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:23:24.899253 kubelet[2575]: I0213 20:23:24.898888 2575 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7677b5b-234a-4dbf-be39-3741372d9305-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a7677b5b-234a-4dbf-be39-3741372d9305" (UID: "a7677b5b-234a-4dbf-be39-3741372d9305"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 20:23:24.976565 kubelet[2575]: I0213 20:23:24.976392 2575 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-host-proc-sys-kernel\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.976565 kubelet[2575]: I0213 20:23:24.976455 2575 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7677b5b-234a-4dbf-be39-3741372d9305-hubble-tls\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.976565 kubelet[2575]: I0213 20:23:24.976477 2575 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-run\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.976565 kubelet[2575]: I0213 20:23:24.976493 2575 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-xtables-lock\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.976565 kubelet[2575]: I0213 20:23:24.976513 2575 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-q75qt\" (UniqueName: \"kubernetes.io/projected/8f96ea5f-a26a-4cdc-b943-317287cd6869-kube-api-access-q75qt\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.976565 kubelet[2575]: I0213 20:23:24.976536 2575 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gckgd\" (UniqueName: \"kubernetes.io/projected/a7677b5b-234a-4dbf-be39-3741372d9305-kube-api-access-gckgd\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 
13 20:23:24.976565 kubelet[2575]: I0213 20:23:24.976554 2575 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-lib-modules\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977098 kubelet[2575]: I0213 20:23:24.976572 2575 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-config-path\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977098 kubelet[2575]: I0213 20:23:24.976594 2575 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-hostproc\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977098 kubelet[2575]: I0213 20:23:24.976608 2575 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-host-proc-sys-net\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977098 kubelet[2575]: I0213 20:23:24.976625 2575 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cni-path\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977098 kubelet[2575]: I0213 20:23:24.976641 2575 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-bpf-maps\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977098 kubelet[2575]: I0213 20:23:24.976657 2575 reconciler_common.go:288] "Volume detached for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7677b5b-234a-4dbf-be39-3741372d9305-clustermesh-secrets\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977098 kubelet[2575]: I0213 20:23:24.976674 2575 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-etc-cni-netd\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977484 kubelet[2575]: I0213 20:23:24.976688 2575 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7677b5b-234a-4dbf-be39-3741372d9305-cilium-cgroup\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:24.977484 kubelet[2575]: I0213 20:23:24.976704 2575 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f96ea5f-a26a-4cdc-b943-317287cd6869-cilium-config-path\") on node \"ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal\" DevicePath \"\"" Feb 13 20:23:25.141651 kubelet[2575]: I0213 20:23:25.141067 2575 scope.go:117] "RemoveContainer" containerID="01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f" Feb 13 20:23:25.144609 containerd[1468]: time="2025-02-13T20:23:25.144111006Z" level=info msg="RemoveContainer for \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\"" Feb 13 20:23:25.154623 systemd[1]: Removed slice kubepods-burstable-poda7677b5b_234a_4dbf_be39_3741372d9305.slice - libcontainer container kubepods-burstable-poda7677b5b_234a_4dbf_be39_3741372d9305.slice. Feb 13 20:23:25.154799 systemd[1]: kubepods-burstable-poda7677b5b_234a_4dbf_be39_3741372d9305.slice: Consumed 9.802s CPU time. 
Feb 13 20:23:25.158541 containerd[1468]: time="2025-02-13T20:23:25.157882487Z" level=info msg="RemoveContainer for \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\" returns successfully" Feb 13 20:23:25.159594 kubelet[2575]: I0213 20:23:25.159443 2575 scope.go:117] "RemoveContainer" containerID="51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec" Feb 13 20:23:25.161478 systemd[1]: Removed slice kubepods-besteffort-pod8f96ea5f_a26a_4cdc_b943_317287cd6869.slice - libcontainer container kubepods-besteffort-pod8f96ea5f_a26a_4cdc_b943_317287cd6869.slice. Feb 13 20:23:25.162029 containerd[1468]: time="2025-02-13T20:23:25.161805316Z" level=info msg="RemoveContainer for \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\"" Feb 13 20:23:25.170119 containerd[1468]: time="2025-02-13T20:23:25.170037897Z" level=info msg="RemoveContainer for \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\" returns successfully" Feb 13 20:23:25.170533 kubelet[2575]: I0213 20:23:25.170286 2575 scope.go:117] "RemoveContainer" containerID="d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e" Feb 13 20:23:25.173122 containerd[1468]: time="2025-02-13T20:23:25.173037110Z" level=info msg="RemoveContainer for \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\"" Feb 13 20:23:25.179395 containerd[1468]: time="2025-02-13T20:23:25.178763759Z" level=info msg="RemoveContainer for \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\" returns successfully" Feb 13 20:23:25.179520 kubelet[2575]: I0213 20:23:25.179039 2575 scope.go:117] "RemoveContainer" containerID="2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3" Feb 13 20:23:25.181489 containerd[1468]: time="2025-02-13T20:23:25.181333342Z" level=info msg="RemoveContainer for \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\"" Feb 13 20:23:25.186186 containerd[1468]: time="2025-02-13T20:23:25.186131185Z" level=info 
msg="RemoveContainer for \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\" returns successfully" Feb 13 20:23:25.186726 kubelet[2575]: I0213 20:23:25.186483 2575 scope.go:117] "RemoveContainer" containerID="7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012" Feb 13 20:23:25.188018 containerd[1468]: time="2025-02-13T20:23:25.187946348Z" level=info msg="RemoveContainer for \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\"" Feb 13 20:23:25.192338 containerd[1468]: time="2025-02-13T20:23:25.192298559Z" level=info msg="RemoveContainer for \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\" returns successfully" Feb 13 20:23:25.192632 kubelet[2575]: I0213 20:23:25.192545 2575 scope.go:117] "RemoveContainer" containerID="01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f" Feb 13 20:23:25.193115 containerd[1468]: time="2025-02-13T20:23:25.193063129Z" level=error msg="ContainerStatus for \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\": not found" Feb 13 20:23:25.193373 kubelet[2575]: E0213 20:23:25.193340 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\": not found" containerID="01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f" Feb 13 20:23:25.195437 kubelet[2575]: I0213 20:23:25.193387 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f"} err="failed to get container status \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"01f3c145eb65de8c754d80cd1bbddbbbd1c3033f5663ee2d66cf59a4c4e60f0f\": not found" Feb 13 20:23:25.195437 kubelet[2575]: I0213 20:23:25.194514 2575 scope.go:117] "RemoveContainer" containerID="51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec" Feb 13 20:23:25.195437 kubelet[2575]: E0213 20:23:25.194989 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\": not found" containerID="51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec" Feb 13 20:23:25.195437 kubelet[2575]: I0213 20:23:25.195025 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec"} err="failed to get container status \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\": rpc error: code = NotFound desc = an error occurred when try to find container \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\": not found" Feb 13 20:23:25.195437 kubelet[2575]: I0213 20:23:25.195055 2575 scope.go:117] "RemoveContainer" containerID="d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e" Feb 13 20:23:25.195797 containerd[1468]: time="2025-02-13T20:23:25.194765522Z" level=error msg="ContainerStatus for \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"51973fea8b0496eb911c430d225a5380014ede4ebb3297e49da6da2136984aec\": not found" Feb 13 20:23:25.195797 containerd[1468]: time="2025-02-13T20:23:25.195295942Z" level=error msg="ContainerStatus for \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\": not found" Feb 13 20:23:25.196034 kubelet[2575]: E0213 20:23:25.195993 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\": not found" containerID="d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e" Feb 13 20:23:25.196034 kubelet[2575]: I0213 20:23:25.196034 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e"} err="failed to get container status \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2955c519782013e9fd1a75632be6648b2ecda774938655ec49aedecb16cf28e\": not found" Feb 13 20:23:25.196034 kubelet[2575]: I0213 20:23:25.196064 2575 scope.go:117] "RemoveContainer" containerID="2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3" Feb 13 20:23:25.196895 containerd[1468]: time="2025-02-13T20:23:25.196732226Z" level=error msg="ContainerStatus for \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\": not found" Feb 13 20:23:25.197203 kubelet[2575]: E0213 20:23:25.197158 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\": not found" containerID="2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3" Feb 13 20:23:25.197290 kubelet[2575]: I0213 20:23:25.197198 2575 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3"} err="failed to get container status \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\": rpc error: code = NotFound desc = an error occurred when try to find container \"2a707816a703293cd0a4773d6bab8caa9cd2019b9ba413a19c524eaa8e3110e3\": not found" Feb 13 20:23:25.197290 kubelet[2575]: I0213 20:23:25.197225 2575 scope.go:117] "RemoveContainer" containerID="7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012" Feb 13 20:23:25.197544 containerd[1468]: time="2025-02-13T20:23:25.197466603Z" level=error msg="ContainerStatus for \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\": not found" Feb 13 20:23:25.197648 kubelet[2575]: E0213 20:23:25.197618 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\": not found" containerID="7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012" Feb 13 20:23:25.197726 kubelet[2575]: I0213 20:23:25.197654 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012"} err="failed to get container status \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fcc6d46e70a2360910edb89eaeaca885d2e2ac6ae612339fb690b28d5c61012\": not found" Feb 13 20:23:25.197726 kubelet[2575]: I0213 20:23:25.197678 2575 scope.go:117] "RemoveContainer" containerID="f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e" Feb 13 20:23:25.200299 containerd[1468]: 
time="2025-02-13T20:23:25.200264324Z" level=info msg="RemoveContainer for \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\"" Feb 13 20:23:25.204506 containerd[1468]: time="2025-02-13T20:23:25.204462939Z" level=info msg="RemoveContainer for \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\" returns successfully" Feb 13 20:23:25.204692 kubelet[2575]: I0213 20:23:25.204667 2575 scope.go:117] "RemoveContainer" containerID="f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e" Feb 13 20:23:25.205048 containerd[1468]: time="2025-02-13T20:23:25.204984428Z" level=error msg="ContainerStatus for \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\": not found" Feb 13 20:23:25.205229 kubelet[2575]: E0213 20:23:25.205192 2575 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\": not found" containerID="f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e" Feb 13 20:23:25.205339 kubelet[2575]: I0213 20:23:25.205237 2575 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e"} err="failed to get container status \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f51740171d4f1cf22b9c197c32252219af2bd3264df73c1320aa0d3b1a96991e\": not found" Feb 13 20:23:25.449016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b-rootfs.mount: Deactivated successfully. 
Feb 13 20:23:25.449181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b-shm.mount: Deactivated successfully. Feb 13 20:23:25.449342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391-rootfs.mount: Deactivated successfully. Feb 13 20:23:25.449460 systemd[1]: var-lib-kubelet-pods-a7677b5b\x2d234a\x2d4dbf\x2dbe39\x2d3741372d9305-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 20:23:25.449571 systemd[1]: var-lib-kubelet-pods-a7677b5b\x2d234a\x2d4dbf\x2dbe39\x2d3741372d9305-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 20:23:25.449684 systemd[1]: var-lib-kubelet-pods-8f96ea5f\x2da26a\x2d4cdc\x2db943\x2d317287cd6869-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq75qt.mount: Deactivated successfully. Feb 13 20:23:25.449800 systemd[1]: var-lib-kubelet-pods-a7677b5b\x2d234a\x2d4dbf\x2dbe39\x2d3741372d9305-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgckgd.mount: Deactivated successfully. Feb 13 20:23:26.413286 sshd[4178]: pam_unix(sshd:session): session closed for user core Feb 13 20:23:26.420217 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit. Feb 13 20:23:26.422582 systemd[1]: sshd@25-10.128.0.9:22-139.178.89.65:55924.service: Deactivated successfully. Feb 13 20:23:26.426691 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 20:23:26.431963 systemd-logind[1452]: Removed session 26. Feb 13 20:23:26.473431 systemd[1]: Started sshd@26-10.128.0.9:22-139.178.89.65:45164.service - OpenSSH per-connection server daemon (139.178.89.65:45164). 
Feb 13 20:23:26.757065 sshd[4345]: Accepted publickey for core from 139.178.89.65 port 45164 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:23:26.759387 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:26.766658 systemd-logind[1452]: New session 27 of user core.
Feb 13 20:23:26.770183 systemd[1]: Started session-27.scope - Session 27 of User core.
Feb 13 20:23:26.779998 kubelet[2575]: I0213 20:23:26.779909 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f96ea5f-a26a-4cdc-b943-317287cd6869" path="/var/lib/kubelet/pods/8f96ea5f-a26a-4cdc-b943-317287cd6869/volumes"
Feb 13 20:23:26.780899 kubelet[2575]: I0213 20:23:26.780839 2575 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7677b5b-234a-4dbf-be39-3741372d9305" path="/var/lib/kubelet/pods/a7677b5b-234a-4dbf-be39-3741372d9305/volumes"
Feb 13 20:23:27.260180 ntpd[1437]: Deleting interface #11 lxc_health, fe80::78d2:bfff:fe31:6a31%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs
Feb 13 20:23:27.261387 ntpd[1437]: 13 Feb 20:23:27 ntpd[1437]: Deleting interface #11 lxc_health, fe80::78d2:bfff:fe31:6a31%8#123, interface stats: received=0, sent=0, dropped=0, active_time=76 secs
Feb 13 20:23:28.151156 kubelet[2575]: E0213 20:23:28.149201 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f96ea5f-a26a-4cdc-b943-317287cd6869" containerName="cilium-operator"
Feb 13 20:23:28.151156 kubelet[2575]: E0213 20:23:28.149244 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7677b5b-234a-4dbf-be39-3741372d9305" containerName="mount-cgroup"
Feb 13 20:23:28.151156 kubelet[2575]: E0213 20:23:28.149258 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7677b5b-234a-4dbf-be39-3741372d9305" containerName="mount-bpf-fs"
Feb 13 20:23:28.151156 kubelet[2575]: E0213 20:23:28.149271 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7677b5b-234a-4dbf-be39-3741372d9305" containerName="clean-cilium-state"
Feb 13 20:23:28.151156 kubelet[2575]: E0213 20:23:28.149284 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7677b5b-234a-4dbf-be39-3741372d9305" containerName="cilium-agent"
Feb 13 20:23:28.151156 kubelet[2575]: E0213 20:23:28.149296 2575 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a7677b5b-234a-4dbf-be39-3741372d9305" containerName="apply-sysctl-overwrites"
Feb 13 20:23:28.151156 kubelet[2575]: I0213 20:23:28.149341 2575 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f96ea5f-a26a-4cdc-b943-317287cd6869" containerName="cilium-operator"
Feb 13 20:23:28.151156 kubelet[2575]: I0213 20:23:28.149355 2575 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7677b5b-234a-4dbf-be39-3741372d9305" containerName="cilium-agent"
Feb 13 20:23:28.171668 sshd[4345]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:28.174772 systemd[1]: Created slice kubepods-burstable-pod2eada4ec_6379_47dc_85cd_f526466b3b9d.slice - libcontainer container kubepods-burstable-pod2eada4ec_6379_47dc_85cd_f526466b3b9d.slice.
Feb 13 20:23:28.191533 systemd[1]: sshd@26-10.128.0.9:22-139.178.89.65:45164.service: Deactivated successfully.
Feb 13 20:23:28.196641 systemd[1]: session-27.scope: Deactivated successfully.
Feb 13 20:23:28.197328 systemd[1]: session-27.scope: Consumed 1.174s CPU time.
Feb 13 20:23:28.203043 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit.
Feb 13 20:23:28.228098 systemd[1]: Started sshd@27-10.128.0.9:22-139.178.89.65:45170.service - OpenSSH per-connection server daemon (139.178.89.65:45170).
Feb 13 20:23:28.230957 systemd-logind[1452]: Removed session 27.
Feb 13 20:23:28.294489 kubelet[2575]: I0213 20:23:28.294415 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-etc-cni-netd\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.294489 kubelet[2575]: I0213 20:23:28.294486 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2eada4ec-6379-47dc-85cd-f526466b3b9d-clustermesh-secrets\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.294758 kubelet[2575]: I0213 20:23:28.294521 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-host-proc-sys-kernel\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.294758 kubelet[2575]: I0213 20:23:28.294548 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-bpf-maps\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.294758 kubelet[2575]: I0213 20:23:28.294575 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2eada4ec-6379-47dc-85cd-f526466b3b9d-cilium-ipsec-secrets\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.294758 kubelet[2575]: I0213 20:23:28.294601 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2eada4ec-6379-47dc-85cd-f526466b3b9d-hubble-tls\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.294758 kubelet[2575]: I0213 20:23:28.294634 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dddb\" (UniqueName: \"kubernetes.io/projected/2eada4ec-6379-47dc-85cd-f526466b3b9d-kube-api-access-5dddb\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.294758 kubelet[2575]: I0213 20:23:28.294660 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-cilium-run\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.295108 kubelet[2575]: I0213 20:23:28.294685 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-lib-modules\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.295108 kubelet[2575]: I0213 20:23:28.294721 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2eada4ec-6379-47dc-85cd-f526466b3b9d-cilium-config-path\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.295108 kubelet[2575]: I0213 20:23:28.294757 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-host-proc-sys-net\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.295108 kubelet[2575]: I0213 20:23:28.294790 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-cni-path\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.295108 kubelet[2575]: I0213 20:23:28.294818 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-hostproc\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.295108 kubelet[2575]: I0213 20:23:28.294847 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-cilium-cgroup\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.295299 kubelet[2575]: I0213 20:23:28.294877 2575 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eada4ec-6379-47dc-85cd-f526466b3b9d-xtables-lock\") pod \"cilium-9lt4w\" (UID: \"2eada4ec-6379-47dc-85cd-f526466b3b9d\") " pod="kube-system/cilium-9lt4w"
Feb 13 20:23:28.492533 containerd[1468]: time="2025-02-13T20:23:28.492323937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lt4w,Uid:2eada4ec-6379-47dc-85cd-f526466b3b9d,Namespace:kube-system,Attempt:0,}"
Feb 13 20:23:28.528497 containerd[1468]: time="2025-02-13T20:23:28.528308632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:23:28.529027 containerd[1468]: time="2025-02-13T20:23:28.528538350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:23:28.529027 containerd[1468]: time="2025-02-13T20:23:28.528576542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:23:28.529027 containerd[1468]: time="2025-02-13T20:23:28.528723813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:23:28.550401 sshd[4357]: Accepted publickey for core from 139.178.89.65 port 45170 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:23:28.551491 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:28.563453 systemd-logind[1452]: New session 28 of user core.
Feb 13 20:23:28.569153 systemd[1]: Started cri-containerd-0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2.scope - libcontainer container 0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2.
Feb 13 20:23:28.570796 systemd[1]: Started session-28.scope - Session 28 of User core.
Feb 13 20:23:28.608424 containerd[1468]: time="2025-02-13T20:23:28.608372400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9lt4w,Uid:2eada4ec-6379-47dc-85cd-f526466b3b9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\""
Feb 13 20:23:28.612969 containerd[1468]: time="2025-02-13T20:23:28.612904602Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 20:23:28.627206 containerd[1468]: time="2025-02-13T20:23:28.627166559Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"321aee8a064a43c399cef1d1402324b2b33ada83821f0238c5706e54344fb559\""
Feb 13 20:23:28.628155 containerd[1468]: time="2025-02-13T20:23:28.628111315Z" level=info msg="StartContainer for \"321aee8a064a43c399cef1d1402324b2b33ada83821f0238c5706e54344fb559\""
Feb 13 20:23:28.669151 systemd[1]: Started cri-containerd-321aee8a064a43c399cef1d1402324b2b33ada83821f0238c5706e54344fb559.scope - libcontainer container 321aee8a064a43c399cef1d1402324b2b33ada83821f0238c5706e54344fb559.
Feb 13 20:23:28.705334 containerd[1468]: time="2025-02-13T20:23:28.704543822Z" level=info msg="StartContainer for \"321aee8a064a43c399cef1d1402324b2b33ada83821f0238c5706e54344fb559\" returns successfully"
Feb 13 20:23:28.722266 systemd[1]: cri-containerd-321aee8a064a43c399cef1d1402324b2b33ada83821f0238c5706e54344fb559.scope: Deactivated successfully.
Feb 13 20:23:28.761807 sshd[4357]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:28.766192 containerd[1468]: time="2025-02-13T20:23:28.765512877Z" level=info msg="shim disconnected" id=321aee8a064a43c399cef1d1402324b2b33ada83821f0238c5706e54344fb559 namespace=k8s.io
Feb 13 20:23:28.766192 containerd[1468]: time="2025-02-13T20:23:28.765627636Z" level=warning msg="cleaning up after shim disconnected" id=321aee8a064a43c399cef1d1402324b2b33ada83821f0238c5706e54344fb559 namespace=k8s.io
Feb 13 20:23:28.766192 containerd[1468]: time="2025-02-13T20:23:28.765649344Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:23:28.769646 systemd[1]: sshd@27-10.128.0.9:22-139.178.89.65:45170.service: Deactivated successfully.
Feb 13 20:23:28.774050 systemd[1]: session-28.scope: Deactivated successfully.
Feb 13 20:23:28.776771 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
Feb 13 20:23:28.782521 systemd-logind[1452]: Removed session 28.
Feb 13 20:23:28.819391 systemd[1]: Started sshd@28-10.128.0.9:22-139.178.89.65:45178.service - OpenSSH per-connection server daemon (139.178.89.65:45178).
Feb 13 20:23:28.902253 kubelet[2575]: E0213 20:23:28.902197 2575 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 20:23:29.113803 sshd[4471]: Accepted publickey for core from 139.178.89.65 port 45178 ssh2: RSA SHA256:DTMBw2adoKSfgtZ9DPCnQzXgQa1eFbfogspB4KbJjqY
Feb 13 20:23:29.115839 sshd[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:23:29.121996 systemd-logind[1452]: New session 29 of user core.
Feb 13 20:23:29.133178 systemd[1]: Started session-29.scope - Session 29 of User core.
Feb 13 20:23:29.166128 containerd[1468]: time="2025-02-13T20:23:29.165886963Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 20:23:29.181738 containerd[1468]: time="2025-02-13T20:23:29.181608256Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"25d8b6bf89a2dab8b64b8b6e559ec6b25f0eb8a7f1644a148b5ed6984817c48f\""
Feb 13 20:23:29.182698 containerd[1468]: time="2025-02-13T20:23:29.182619061Z" level=info msg="StartContainer for \"25d8b6bf89a2dab8b64b8b6e559ec6b25f0eb8a7f1644a148b5ed6984817c48f\""
Feb 13 20:23:29.223159 systemd[1]: Started cri-containerd-25d8b6bf89a2dab8b64b8b6e559ec6b25f0eb8a7f1644a148b5ed6984817c48f.scope - libcontainer container 25d8b6bf89a2dab8b64b8b6e559ec6b25f0eb8a7f1644a148b5ed6984817c48f.
Feb 13 20:23:29.264340 containerd[1468]: time="2025-02-13T20:23:29.264289799Z" level=info msg="StartContainer for \"25d8b6bf89a2dab8b64b8b6e559ec6b25f0eb8a7f1644a148b5ed6984817c48f\" returns successfully"
Feb 13 20:23:29.286178 systemd[1]: cri-containerd-25d8b6bf89a2dab8b64b8b6e559ec6b25f0eb8a7f1644a148b5ed6984817c48f.scope: Deactivated successfully.
Feb 13 20:23:29.356193 containerd[1468]: time="2025-02-13T20:23:29.355836557Z" level=info msg="shim disconnected" id=25d8b6bf89a2dab8b64b8b6e559ec6b25f0eb8a7f1644a148b5ed6984817c48f namespace=k8s.io
Feb 13 20:23:29.356193 containerd[1468]: time="2025-02-13T20:23:29.355948153Z" level=warning msg="cleaning up after shim disconnected" id=25d8b6bf89a2dab8b64b8b6e559ec6b25f0eb8a7f1644a148b5ed6984817c48f namespace=k8s.io
Feb 13 20:23:29.356193 containerd[1468]: time="2025-02-13T20:23:29.355965400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:23:30.169042 containerd[1468]: time="2025-02-13T20:23:30.168980339Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 20:23:30.195045 containerd[1468]: time="2025-02-13T20:23:30.194714857Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e\""
Feb 13 20:23:30.200719 containerd[1468]: time="2025-02-13T20:23:30.198050219Z" level=info msg="StartContainer for \"9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e\""
Feb 13 20:23:30.200318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741727780.mount: Deactivated successfully.
Feb 13 20:23:30.256162 systemd[1]: Started cri-containerd-9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e.scope - libcontainer container 9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e.
Feb 13 20:23:30.297951 containerd[1468]: time="2025-02-13T20:23:30.296185365Z" level=info msg="StartContainer for \"9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e\" returns successfully"
Feb 13 20:23:30.301029 systemd[1]: cri-containerd-9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e.scope: Deactivated successfully.
Feb 13 20:23:30.334809 containerd[1468]: time="2025-02-13T20:23:30.334727504Z" level=info msg="shim disconnected" id=9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e namespace=k8s.io
Feb 13 20:23:30.335325 containerd[1468]: time="2025-02-13T20:23:30.334831638Z" level=warning msg="cleaning up after shim disconnected" id=9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e namespace=k8s.io
Feb 13 20:23:30.335325 containerd[1468]: time="2025-02-13T20:23:30.334850825Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:23:30.403327 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9794087619483f07d1796db3089c160407c73171c57f97bc57172adbf535ec8e-rootfs.mount: Deactivated successfully.
Feb 13 20:23:31.108750 kubelet[2575]: I0213 20:23:31.108356 2575 setters.go:600] "Node became not ready" node="ci-4081-3-1-b860790c0860addb1c68.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T20:23:31Z","lastTransitionTime":"2025-02-13T20:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 20:23:31.177054 containerd[1468]: time="2025-02-13T20:23:31.176937108Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 20:23:31.201611 containerd[1468]: time="2025-02-13T20:23:31.200657865Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28\""
Feb 13 20:23:31.203108 containerd[1468]: time="2025-02-13T20:23:31.203038106Z" level=info msg="StartContainer for \"8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28\""
Feb 13 20:23:31.280128 systemd[1]: Started cri-containerd-8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28.scope - libcontainer container 8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28.
Feb 13 20:23:31.313465 systemd[1]: cri-containerd-8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28.scope: Deactivated successfully.
Feb 13 20:23:31.317593 containerd[1468]: time="2025-02-13T20:23:31.317499415Z" level=info msg="StartContainer for \"8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28\" returns successfully"
Feb 13 20:23:31.348478 containerd[1468]: time="2025-02-13T20:23:31.348389908Z" level=info msg="shim disconnected" id=8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28 namespace=k8s.io
Feb 13 20:23:31.348478 containerd[1468]: time="2025-02-13T20:23:31.348456588Z" level=warning msg="cleaning up after shim disconnected" id=8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28 namespace=k8s.io
Feb 13 20:23:31.348478 containerd[1468]: time="2025-02-13T20:23:31.348473128Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:23:31.404170 systemd[1]: run-containerd-runc-k8s.io-8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28-runc.73ngh9.mount: Deactivated successfully.
Feb 13 20:23:31.404358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ec1ca976858c09734e8c4463646d39800c7ddc65948d9a7abf848d160511e28-rootfs.mount: Deactivated successfully.
Feb 13 20:23:32.183180 containerd[1468]: time="2025-02-13T20:23:32.181648351Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 20:23:32.214260 containerd[1468]: time="2025-02-13T20:23:32.213135723Z" level=info msg="CreateContainer within sandbox \"0fbcfe56241953aa48130eb9594de470d3b0b959d6f776f3515ea3c1b51451c2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"083a8ce6abfa3c9fce524b42cef56d07b134497ae17da36a6fee5e90d334464b\""
Feb 13 20:23:32.214260 containerd[1468]: time="2025-02-13T20:23:32.214028516Z" level=info msg="StartContainer for \"083a8ce6abfa3c9fce524b42cef56d07b134497ae17da36a6fee5e90d334464b\""
Feb 13 20:23:32.318125 systemd[1]: Started cri-containerd-083a8ce6abfa3c9fce524b42cef56d07b134497ae17da36a6fee5e90d334464b.scope - libcontainer container 083a8ce6abfa3c9fce524b42cef56d07b134497ae17da36a6fee5e90d334464b.
Feb 13 20:23:32.391596 containerd[1468]: time="2025-02-13T20:23:32.391505204Z" level=info msg="StartContainer for \"083a8ce6abfa3c9fce524b42cef56d07b134497ae17da36a6fee5e90d334464b\" returns successfully"
Feb 13 20:23:32.409083 systemd[1]: run-containerd-runc-k8s.io-083a8ce6abfa3c9fce524b42cef56d07b134497ae17da36a6fee5e90d334464b-runc.JDKmuN.mount: Deactivated successfully.
Feb 13 20:23:32.891055 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 20:23:35.741964 systemd[1]: run-containerd-runc-k8s.io-083a8ce6abfa3c9fce524b42cef56d07b134497ae17da36a6fee5e90d334464b-runc.fBRS3z.mount: Deactivated successfully.
Feb 13 20:23:36.208546 systemd-networkd[1378]: lxc_health: Link UP
Feb 13 20:23:36.224447 systemd-networkd[1378]: lxc_health: Gained carrier
Feb 13 20:23:36.531194 kubelet[2575]: I0213 20:23:36.530492 2575 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9lt4w" podStartSLOduration=8.530468146 podStartE2EDuration="8.530468146s" podCreationTimestamp="2025-02-13 20:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 20:23:33.20447612 +0000 UTC m=+114.574597969" watchObservedRunningTime="2025-02-13 20:23:36.530468146 +0000 UTC m=+117.900589997"
Feb 13 20:23:38.110101 systemd-networkd[1378]: lxc_health: Gained IPv6LL
Feb 13 20:23:38.811266 containerd[1468]: time="2025-02-13T20:23:38.811205133Z" level=info msg="StopPodSandbox for \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\""
Feb 13 20:23:38.811907 containerd[1468]: time="2025-02-13T20:23:38.811358816Z" level=info msg="TearDown network for sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" successfully"
Feb 13 20:23:38.811907 containerd[1468]: time="2025-02-13T20:23:38.811384692Z" level=info msg="StopPodSandbox for \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" returns successfully"
Feb 13 20:23:38.812266 containerd[1468]: time="2025-02-13T20:23:38.812222188Z" level=info msg="RemovePodSandbox for \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\""
Feb 13 20:23:38.812367 containerd[1468]: time="2025-02-13T20:23:38.812273296Z" level=info msg="Forcibly stopping sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\""
Feb 13 20:23:38.812367 containerd[1468]: time="2025-02-13T20:23:38.812356290Z" level=info msg="TearDown network for sandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" successfully"
Feb 13 20:23:38.818938 containerd[1468]: time="2025-02-13T20:23:38.818222483Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:38.818938 containerd[1468]: time="2025-02-13T20:23:38.818360506Z" level=info msg="RemovePodSandbox \"20ec1f0ccfed805a439462508a897ba88f9375e81cb6a00dbae10972b7fcc38b\" returns successfully"
Feb 13 20:23:38.819144 containerd[1468]: time="2025-02-13T20:23:38.819099541Z" level=info msg="StopPodSandbox for \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\""
Feb 13 20:23:38.819266 containerd[1468]: time="2025-02-13T20:23:38.819236816Z" level=info msg="TearDown network for sandbox \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\" successfully"
Feb 13 20:23:38.819334 containerd[1468]: time="2025-02-13T20:23:38.819267090Z" level=info msg="StopPodSandbox for \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\" returns successfully"
Feb 13 20:23:38.820158 containerd[1468]: time="2025-02-13T20:23:38.820090408Z" level=info msg="RemovePodSandbox for \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\""
Feb 13 20:23:38.820268 containerd[1468]: time="2025-02-13T20:23:38.820165254Z" level=info msg="Forcibly stopping sandbox \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\""
Feb 13 20:23:38.820350 containerd[1468]: time="2025-02-13T20:23:38.820271441Z" level=info msg="TearDown network for sandbox \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\" successfully"
Feb 13 20:23:38.825939 containerd[1468]: time="2025-02-13T20:23:38.825044400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:38.825939 containerd[1468]: time="2025-02-13T20:23:38.825165545Z" level=info msg="RemovePodSandbox \"b901c1f03858add2d8afb4a5b3ca385f07d06b4f81a6990ac870cdedd8316391\" returns successfully"
Feb 13 20:23:40.260213 ntpd[1437]: Listen normally on 14 lxc_health [fe80::f8fb:4aff:fed1:8d17%14]:123
Feb 13 20:23:40.265935 ntpd[1437]: 13 Feb 20:23:40 ntpd[1437]: Listen normally on 14 lxc_health [fe80::f8fb:4aff:fed1:8d17%14]:123
Feb 13 20:23:42.591033 sshd[4471]: pam_unix(sshd:session): session closed for user core
Feb 13 20:23:42.595719 systemd[1]: sshd@28-10.128.0.9:22-139.178.89.65:45178.service: Deactivated successfully.
Feb 13 20:23:42.599655 systemd[1]: session-29.scope: Deactivated successfully.
Feb 13 20:23:42.602180 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit.
Feb 13 20:23:42.604428 systemd-logind[1452]: Removed session 29.