Jan 13 21:21:30.100649 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:21:30.100701 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:30.100721 kernel: BIOS-provided physical RAM map:
Jan 13 21:21:30.100739 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jan 13 21:21:30.100755 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jan 13 21:21:30.100771 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jan 13 21:21:30.100791 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jan 13 21:21:30.100814 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jan 13 21:21:30.100831 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jan 13 21:21:30.100849 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Jan 13 21:21:30.100866 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Jan 13 21:21:30.100883 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Jan 13 21:21:30.100900 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jan 13 21:21:30.100918 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jan 13 21:21:30.100944 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jan 13 21:21:30.100971 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jan 13 21:21:30.100990 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jan 13 21:21:30.101009 kernel: NX (Execute Disable) protection: active
Jan 13 21:21:30.101028 kernel: APIC: Static calls initialized
Jan 13 21:21:30.101047 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:21:30.101066 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000
Jan 13 21:21:30.101085 kernel: SMBIOS 2.4 present.
Jan 13 21:21:30.101104 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Jan 13 21:21:30.101122 kernel: Hypervisor detected: KVM
Jan 13 21:21:30.101167 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:21:30.101187 kernel: kvm-clock: using sched offset of 12419498753 cycles
Jan 13 21:21:30.101203 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:21:30.101221 kernel: tsc: Detected 2299.998 MHz processor
Jan 13 21:21:30.101238 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:21:30.101256 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:21:30.101274 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jan 13 21:21:30.101290 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Jan 13 21:21:30.101313 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:21:30.101344 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jan 13 21:21:30.101366 kernel: Using GB pages for direct mapping
Jan 13 21:21:30.101385 kernel: Secure boot disabled
Jan 13 21:21:30.101401 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:21:30.101420 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jan 13 21:21:30.101439 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jan 13 21:21:30.101459 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jan 13 21:21:30.101486 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jan 13 21:21:30.101511 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jan 13 21:21:30.101531 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Jan 13 21:21:30.101551 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jan 13 21:21:30.101570 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jan 13 21:21:30.101590 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jan 13 21:21:30.101610 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jan 13 21:21:30.101635 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jan 13 21:21:30.101655 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jan 13 21:21:30.101674 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jan 13 21:21:30.101694 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jan 13 21:21:30.101714 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jan 13 21:21:30.101734 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jan 13 21:21:30.101753 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jan 13 21:21:30.101773 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jan 13 21:21:30.101792 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jan 13 21:21:30.101817 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jan 13 21:21:30.101837 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 13 21:21:30.101857 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 13 21:21:30.101876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 13 21:21:30.101896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jan 13 21:21:30.101916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jan 13 21:21:30.101936 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jan 13 21:21:30.101965 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jan 13 21:21:30.101985 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Jan 13 21:21:30.102011 kernel: Zone ranges:
Jan 13 21:21:30.102031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:21:30.102051 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 13 21:21:30.102071 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 21:21:30.102091 kernel: Movable zone start for each node
Jan 13 21:21:30.102110 kernel: Early memory node ranges
Jan 13 21:21:30.102130 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jan 13 21:21:30.102221 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jan 13 21:21:30.102242 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jan 13 21:21:30.102268 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jan 13 21:21:30.102288 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jan 13 21:21:30.102308 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jan 13 21:21:30.102328 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:21:30.102348 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jan 13 21:21:30.102368 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jan 13 21:21:30.102388 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 13 21:21:30.102408 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jan 13 21:21:30.102428 kernel: ACPI: PM-Timer IO Port: 0xb008
Jan 13 21:21:30.102452 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:21:30.102472 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:21:30.102492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:21:30.102511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:21:30.102531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:21:30.102551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:21:30.102571 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:21:30.102591 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 13 21:21:30.102610 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jan 13 21:21:30.102635 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:21:30.102655 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:21:30.102675 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 13 21:21:30.102695 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 13 21:21:30.102715 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 13 21:21:30.102734 kernel: pcpu-alloc: [0] 0 1
Jan 13 21:21:30.102753 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:21:30.102773 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:21:30.102796 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:30.102821 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:21:30.102841 kernel: random: crng init done
Jan 13 21:21:30.102860 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 13 21:21:30.102880 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:21:30.102899 kernel: Fallback order for Node 0: 0
Jan 13 21:21:30.102919 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jan 13 21:21:30.102939 kernel: Policy zone: Normal
Jan 13 21:21:30.102967 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:21:30.102992 kernel: software IO TLB: area num 2.
Jan 13 21:21:30.103012 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved)
Jan 13 21:21:30.103032 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 13 21:21:30.103052 kernel: Kernel/User page tables isolation: enabled
Jan 13 21:21:30.103072 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:21:30.103092 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:21:30.103112 kernel: Dynamic Preempt: voluntary
Jan 13 21:21:30.103132 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:21:30.103194 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:21:30.103236 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 13 21:21:30.103257 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:21:30.103278 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:21:30.103304 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:21:30.103325 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:21:30.103346 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 13 21:21:30.103367 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 13 21:21:30.103388 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:21:30.103409 kernel: Console: colour dummy device 80x25
Jan 13 21:21:30.103435 kernel: printk: console [ttyS0] enabled
Jan 13 21:21:30.103462 kernel: ACPI: Core revision 20230628
Jan 13 21:21:30.103483 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:21:30.103504 kernel: x2apic enabled
Jan 13 21:21:30.103525 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:21:30.103547 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jan 13 21:21:30.103568 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 21:21:30.103589 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jan 13 21:21:30.103615 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jan 13 21:21:30.103636 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jan 13 21:21:30.103657 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:21:30.103678 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 13 21:21:30.103699 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 13 21:21:30.103720 kernel: Spectre V2 : Mitigation: IBRS
Jan 13 21:21:30.103741 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:21:30.103762 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:21:30.103783 kernel: RETBleed: Mitigation: IBRS
Jan 13 21:21:30.103809 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:21:30.103830 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Jan 13 21:21:30.103852 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:21:30.103872 kernel: MDS: Mitigation: Clear CPU buffers
Jan 13 21:21:30.103894 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 13 21:21:30.103915 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:21:30.103936 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:21:30.103965 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:21:30.103986 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:21:30.104013 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 13 21:21:30.104034 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:21:30.104055 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:21:30.104076 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:21:30.104097 kernel: landlock: Up and running.
Jan 13 21:21:30.104117 kernel: SELinux: Initializing.
Jan 13 21:21:30.104138 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:21:30.104180 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 13 21:21:30.104201 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jan 13 21:21:30.104228 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:21:30.104250 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:21:30.104271 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 13 21:21:30.104292 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jan 13 21:21:30.104313 kernel: signal: max sigframe size: 1776
Jan 13 21:21:30.104334 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:21:30.104355 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:21:30.104376 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 13 21:21:30.104397 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:21:30.104423 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:21:30.104444 kernel: .... node #0, CPUs: #1
Jan 13 21:21:30.104466 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jan 13 21:21:30.104488 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 13 21:21:30.104509 kernel: smp: Brought up 1 node, 2 CPUs
Jan 13 21:21:30.104529 kernel: smpboot: Max logical packages: 1
Jan 13 21:21:30.104550 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 13 21:21:30.104571 kernel: devtmpfs: initialized
Jan 13 21:21:30.104597 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:21:30.104618 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jan 13 21:21:30.104639 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:21:30.104660 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 13 21:21:30.104681 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:21:30.104702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:21:30.104723 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:21:30.104744 kernel: audit: type=2000 audit(1736803289.131:1): state=initialized audit_enabled=0 res=1
Jan 13 21:21:30.104765 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:21:30.104791 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:21:30.104812 kernel: cpuidle: using governor menu
Jan 13 21:21:30.104833 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:21:30.104854 kernel: dca service started, version 1.12.1
Jan 13 21:21:30.104874 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:21:30.104896 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:21:30.104916 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:21:30.104937 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:21:30.104966 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:21:30.104992 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:21:30.105013 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:21:30.105034 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:21:30.105055 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:21:30.105076 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:21:30.105096 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jan 13 21:21:30.105117 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:21:30.105138 kernel: ACPI: Interpreter enabled
Jan 13 21:21:30.105173 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:21:30.105200 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:21:30.105221 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:21:30.105242 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 13 21:21:30.105263 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jan 13 21:21:30.105284 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:21:30.105599 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:21:30.106354 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Jan 13 21:21:30.106989 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Jan 13 21:21:30.107025 kernel: PCI host bridge to bus 0000:00
Jan 13 21:21:30.107255 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:21:30.107470 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:21:30.107672 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:21:30.107876 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jan 13 21:21:30.108088 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:21:30.108360 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jan 13 21:21:30.108602 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jan 13 21:21:30.108865 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jan 13 21:21:30.109101 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jan 13 21:21:30.109634 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jan 13 21:21:30.109867 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jan 13 21:21:30.110108 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jan 13 21:21:30.110367 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:21:30.110591 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jan 13 21:21:30.110854 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jan 13 21:21:30.111173 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:21:30.111428 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jan 13 21:21:30.111661 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jan 13 21:21:30.111697 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:21:30.111720 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:21:30.111743 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:21:30.111765 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:21:30.111786 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 13 21:21:30.111808 kernel: iommu: Default domain type: Translated
Jan 13 21:21:30.111830 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:21:30.111852 kernel: efivars: Registered efivars operations
Jan 13 21:21:30.111873 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:21:30.111899 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:21:30.111917 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jan 13 21:21:30.111938 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jan 13 21:21:30.111970 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jan 13 21:21:30.111991 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jan 13 21:21:30.112012 kernel: vgaarb: loaded
Jan 13 21:21:30.112033 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:21:30.112054 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:21:30.112075 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:21:30.112102 kernel: pnp: PnP ACPI init
Jan 13 21:21:30.112122 kernel: pnp: PnP ACPI: found 7 devices
Jan 13 21:21:30.112165 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:21:30.112187 kernel: NET: Registered PF_INET protocol family
Jan 13 21:21:30.112207 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 13 21:21:30.112229 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 13 21:21:30.112249 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:21:30.112270 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:21:30.112290 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 13 21:21:30.112317 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 13 21:21:30.112337 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 21:21:30.112359 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 13 21:21:30.112379 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:21:30.112399 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:21:30.112608 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:21:30.112805 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:21:30.113010 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:21:30.113282 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jan 13 21:21:30.113530 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 13 21:21:30.113560 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:21:30.113582 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 13 21:21:30.113603 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Jan 13 21:21:30.113624 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 13 21:21:30.113645 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jan 13 21:21:30.113665 kernel: clocksource: Switched to clocksource tsc
Jan 13 21:21:30.113693 kernel: Initialise system trusted keyrings
Jan 13 21:21:30.113713 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 13 21:21:30.113734 kernel: Key type asymmetric registered
Jan 13 21:21:30.113754 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:21:30.113774 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:21:30.113794 kernel: io scheduler mq-deadline registered
Jan 13 21:21:30.113815 kernel: io scheduler kyber registered
Jan 13 21:21:30.113835 kernel: io scheduler bfq registered
Jan 13 21:21:30.113856 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:21:30.113882 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 13 21:21:30.114119 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jan 13 21:21:30.116654 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jan 13 21:21:30.117312 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jan 13 21:21:30.117346 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 13 21:21:30.117581 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jan 13 21:21:30.117609 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:21:30.117632 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:21:30.117654 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 13 21:21:30.117684 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jan 13 21:21:30.117705 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jan 13 21:21:30.117937 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jan 13 21:21:30.117974 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:21:30.117995 kernel: i8042: Warning: Keylock active
Jan 13 21:21:30.118016 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:21:30.118038 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:21:30.118617 kernel: rtc_cmos 00:00: RTC can wake from S4
Jan 13 21:21:30.118846 kernel: rtc_cmos 00:00: registered as rtc0
Jan 13 21:21:30.119071 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:21:29 UTC (1736803289)
Jan 13 21:21:30.119318 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jan 13 21:21:30.119347 kernel: intel_pstate: CPU model not supported
Jan 13 21:21:30.119368 kernel: pstore: Using crash dump compression: deflate
Jan 13 21:21:30.119390 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 13 21:21:30.119412 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:21:30.119434 kernel: Segment Routing with IPv6
Jan 13 21:21:30.119463 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:21:30.119485 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:21:30.119507 kernel: Key type dns_resolver registered
Jan 13 21:21:30.119528 kernel: IPI shorthand broadcast: enabled
Jan 13 21:21:30.119550 kernel: sched_clock: Marking stable (851004192, 142085545)->(1025826894, -32737157)
Jan 13 21:21:30.119572 kernel: registered taskstats version 1
Jan 13 21:21:30.119593 kernel: Loading compiled-in X.509 certificates
Jan 13 21:21:30.119615 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:21:30.119637 kernel: Key type .fscrypt registered
Jan 13 21:21:30.119663 kernel: Key type fscrypt-provisioning registered
Jan 13 21:21:30.119685 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:21:30.119706 kernel: ima: No architecture policies found
Jan 13 21:21:30.119728 kernel: clk: Disabling unused clocks
Jan 13 21:21:30.119750 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:21:30.119771 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:21:30.119792 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:21:30.119814 kernel: Run /init as init process
Jan 13 21:21:30.119841 kernel: with arguments:
Jan 13 21:21:30.119862 kernel: /init
Jan 13 21:21:30.119883 kernel: with environment:
Jan 13 21:21:30.119904 kernel: HOME=/
Jan 13 21:21:30.119925 kernel: TERM=linux
Jan 13 21:21:30.119947 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:21:30.119976 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:21:30.120001 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:21:30.120032 systemd[1]: Detected virtualization google.
Jan 13 21:21:30.120056 systemd[1]: Detected architecture x86-64. Jan 13 21:21:30.120078 systemd[1]: Running in initrd. Jan 13 21:21:30.120101 systemd[1]: No hostname configured, using default hostname. Jan 13 21:21:30.120123 systemd[1]: Hostname set to . Jan 13 21:21:30.122335 systemd[1]: Initializing machine ID from random generator. Jan 13 21:21:30.122446 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:21:30.122470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:21:30.122501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:21:30.122525 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:21:30.122548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:21:30.122571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:21:30.122595 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:21:30.122621 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:21:30.122645 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:21:30.122673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:21:30.122695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:21:30.122741 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:21:30.122769 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:21:30.122793 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:21:30.122816 systemd[1]: Reached target timers.target - Timer Units. 
Jan 13 21:21:30.122845 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:21:30.122869 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:21:30.122892 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:21:30.122914 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:21:30.122938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:21:30.122970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:21:30.122994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:21:30.123018 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:21:30.123041 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:21:30.123068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:21:30.123092 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:21:30.123116 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:21:30.123139 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:21:30.123182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:21:30.123205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:21:30.123270 systemd-journald[183]: Collecting audit messages is disabled. Jan 13 21:21:30.123327 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:21:30.123352 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:21:30.123375 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:21:30.123404 systemd-journald[183]: Journal started Jan 13 21:21:30.123448 systemd-journald[183]: Runtime Journal (/run/log/journal/11ac6d9dfbea4ca28b4cef940acdbd3a) is 8.0M, max 148.7M, 140.7M free. 
Jan 13 21:21:30.111274 systemd-modules-load[184]: Inserted module 'overlay'
Jan 13 21:21:30.133291 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:21:30.135394 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:21:30.137675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:21:30.155655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:30.170295 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:21:30.170338 kernel: Bridge firewalling registered
Jan 13 21:21:30.166513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:30.169657 systemd-modules-load[184]: Inserted module 'br_netfilter'
Jan 13 21:21:30.185567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:21:30.186134 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:21:30.186556 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:21:30.194404 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:21:30.196621 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:21:30.220644 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:30.225535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:21:30.233607 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:21:30.250364 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:21:30.257002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:21:30.267758 dracut-cmdline[217]: dracut-dracut-053
Jan 13 21:21:30.272340 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:21:30.322761 systemd-resolved[220]: Positive Trust Anchors:
Jan 13 21:21:30.322781 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:21:30.322855 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:21:30.328668 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 13 21:21:30.331852 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:21:30.338471 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:21:30.384190 kernel: SCSI subsystem initialized
Jan 13 21:21:30.394185 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:21:30.406167 kernel: iscsi: registered transport (tcp)
Jan 13 21:21:30.430192 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:21:30.430271 kernel: QLogic iSCSI HBA Driver
Jan 13 21:21:30.482760 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:21:30.498391 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:21:30.540958 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:21:30.541045 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:21:30.541074 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:21:30.586203 kernel: raid6: avx2x4 gen() 17939 MB/s
Jan 13 21:21:30.603192 kernel: raid6: avx2x2 gen() 17849 MB/s
Jan 13 21:21:30.620688 kernel: raid6: avx2x1 gen() 13995 MB/s
Jan 13 21:21:30.620749 kernel: raid6: using algorithm avx2x4 gen() 17939 MB/s
Jan 13 21:21:30.638663 kernel: raid6: .... xor() 6897 MB/s, rmw enabled
Jan 13 21:21:30.638716 kernel: raid6: using avx2x2 recovery algorithm
Jan 13 21:21:30.662193 kernel: xor: automatically using best checksumming function avx
Jan 13 21:21:30.835199 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:21:30.847846 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:21:30.861411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:21:30.878773 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Jan 13 21:21:30.885647 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:21:30.894879 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:21:30.930937 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 13 21:21:30.968579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:21:30.985391 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:21:31.064432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:21:31.079627 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:21:31.116126 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:21:31.124120 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:21:31.128301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:21:31.133347 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:21:31.146866 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:21:31.186866 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:21:31.206174 kernel: cryptd: max_cpu_qlen set to 1000
Jan 13 21:21:31.216226 kernel: scsi host0: Virtio SCSI HBA
Jan 13 21:21:31.221167 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Jan 13 21:21:31.244433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:21:31.244650 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:31.257539 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:31.301047 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 13 21:21:31.301088 kernel: AES CTR mode by8 optimization enabled
Jan 13 21:21:31.263587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:21:31.263857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:31.266377 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:21:31.291560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:21:31.333192 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Jan 13 21:21:31.356784 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Jan 13 21:21:31.357066 kernel: sd 0:0:1:0: [sda] Write Protect is off
Jan 13 21:21:31.357324 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Jan 13 21:21:31.357565 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 13 21:21:31.357799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:21:31.357836 kernel: GPT:17805311 != 25165823
Jan 13 21:21:31.357860 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:21:31.357900 kernel: GPT:17805311 != 25165823
Jan 13 21:21:31.357924 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:21:31.357950 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:31.357977 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Jan 13 21:21:31.338138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:31.360223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:21:31.399871 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:31.418197 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (459)
Jan 13 21:21:31.425173 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451)
Jan 13 21:21:31.434530 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Jan 13 21:21:31.448993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Jan 13 21:21:31.461346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Jan 13 21:21:31.472658 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Jan 13 21:21:31.472907 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Jan 13 21:21:31.485496 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:21:31.502367 disk-uuid[550]: Primary Header is updated.
Jan 13 21:21:31.502367 disk-uuid[550]: Secondary Entries is updated.
Jan 13 21:21:31.502367 disk-uuid[550]: Secondary Header is updated.
Jan 13 21:21:31.520172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:31.541185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:31.565195 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:32.557545 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 13 21:21:32.557628 disk-uuid[551]: The operation has completed successfully.
Jan 13 21:21:32.636499 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:21:32.636669 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:21:32.667365 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:21:32.692432 sh[568]: Success
Jan 13 21:21:32.717786 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 13 21:21:32.795293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:21:32.802722 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:21:32.830710 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:21:32.870179 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 13 21:21:32.870260 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:32.887220 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:21:32.887321 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:21:32.894053 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:21:33.006184 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 13 21:21:33.012894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:21:33.013903 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:21:33.019352 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:21:33.040387 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:21:33.092474 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:33.092561 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:33.092588 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:21:33.117530 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:21:33.117611 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:21:33.132807 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:21:33.151345 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:33.158025 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:21:33.175435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:21:33.233528 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:21:33.244630 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:21:33.355649 systemd-networkd[750]: lo: Link UP
Jan 13 21:21:33.356201 systemd-networkd[750]: lo: Gained carrier
Jan 13 21:21:33.362936 systemd-networkd[750]: Enumeration completed
Jan 13 21:21:33.363099 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:21:33.364064 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:21:33.377478 ignition[685]: Ignition 2.19.0
Jan 13 21:21:33.364072 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:21:33.377487 ignition[685]: Stage: fetch-offline
Jan 13 21:21:33.367210 systemd-networkd[750]: eth0: Link UP
Jan 13 21:21:33.377528 ignition[685]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:33.367216 systemd-networkd[750]: eth0: Gained carrier
Jan 13 21:21:33.377539 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:33.367232 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:21:33.377648 ignition[685]: parsed url from cmdline: ""
Jan 13 21:21:33.380229 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.40/32, gateway 10.128.0.1 acquired from 169.254.169.254
Jan 13 21:21:33.377654 ignition[685]: no config URL provided
Jan 13 21:21:33.386857 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:21:33.377663 ignition[685]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:21:33.407991 systemd[1]: Reached target network.target - Network.
Jan 13 21:21:33.377673 ignition[685]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:21:33.441346 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 13 21:21:33.377681 ignition[685]: failed to fetch config: resource requires networking
Jan 13 21:21:33.480978 unknown[759]: fetched base config from "system"
Jan 13 21:21:33.377969 ignition[685]: Ignition finished successfully
Jan 13 21:21:33.480994 unknown[759]: fetched base config from "system"
Jan 13 21:21:33.470343 ignition[759]: Ignition 2.19.0
Jan 13 21:21:33.481007 unknown[759]: fetched user config from "gcp"
Jan 13 21:21:33.470352 ignition[759]: Stage: fetch
Jan 13 21:21:33.483940 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 13 21:21:33.470580 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:33.505398 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:21:33.470594 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:33.554806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:21:33.470725 ignition[759]: parsed url from cmdline: ""
Jan 13 21:21:33.580404 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:21:33.470731 ignition[759]: no config URL provided
Jan 13 21:21:33.618450 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:21:33.470749 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:21:33.630502 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:21:33.470760 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:21:33.651336 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:21:33.470782 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Jan 13 21:21:33.668328 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:21:33.474490 ignition[759]: GET result: OK
Jan 13 21:21:33.683320 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:21:33.474567 ignition[759]: parsing config with SHA512: bc43e4d65b793a31e011e9747a379bd249a0b11453d2e3c0e6a8781f9903ed2b5842e5ef5610b70bfdac61afa59fdd75cc663070f9685ad2a447c86ce66d042b
Jan 13 21:21:33.698320 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:21:33.481985 ignition[759]: fetch: fetch complete
Jan 13 21:21:33.717386 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:21:33.481997 ignition[759]: fetch: fetch passed
Jan 13 21:21:33.482078 ignition[759]: Ignition finished successfully
Jan 13 21:21:33.530526 ignition[765]: Ignition 2.19.0
Jan 13 21:21:33.530551 ignition[765]: Stage: kargs
Jan 13 21:21:33.530775 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:33.530792 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:33.531902 ignition[765]: kargs: kargs passed
Jan 13 21:21:33.531957 ignition[765]: Ignition finished successfully
Jan 13 21:21:33.615777 ignition[771]: Ignition 2.19.0
Jan 13 21:21:33.615787 ignition[771]: Stage: disks
Jan 13 21:21:33.616030 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:33.616044 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:33.617288 ignition[771]: disks: disks passed
Jan 13 21:21:33.617345 ignition[771]: Ignition finished successfully
Jan 13 21:21:33.773742 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Jan 13 21:21:33.954340 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:21:33.959342 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:21:34.104199 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 13 21:21:34.105030 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:21:34.119929 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:21:34.125282 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:21:34.153615 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:21:34.173181 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788)
Jan 13 21:21:34.192862 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:34.192944 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:34.192988 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:21:34.202646 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:21:34.239312 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:21:34.239361 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:21:34.202711 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:21:34.202752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:21:34.226408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:21:34.250098 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:21:34.274342 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:21:34.392720 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:21:34.403323 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:21:34.413318 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:21:34.423268 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:21:34.550516 systemd-networkd[750]: eth0: Gained IPv6LL
Jan 13 21:21:34.555613 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:21:34.577291 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:21:34.598184 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:34.610394 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:21:34.621406 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:21:34.665167 ignition[903]: INFO : Ignition 2.19.0
Jan 13 21:21:34.665167 ignition[903]: INFO : Stage: mount
Jan 13 21:21:34.679313 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:34.679313 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:34.679313 ignition[903]: INFO : mount: mount passed
Jan 13 21:21:34.679313 ignition[903]: INFO : Ignition finished successfully
Jan 13 21:21:34.669064 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:21:34.699737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:21:34.725314 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:21:35.111433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:21:35.156178 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (915)
Jan 13 21:21:35.167192 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:35.167289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:35.180306 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:21:35.196450 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:21:35.196538 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:21:35.199541 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:21:35.241495 ignition[932]: INFO : Ignition 2.19.0
Jan 13 21:21:35.241495 ignition[932]: INFO : Stage: files
Jan 13 21:21:35.256527 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:35.256527 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:35.256527 ignition[932]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:21:35.256527 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:21:35.256527 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:21:35.253260 unknown[932]: wrote ssh authorized keys file for user: core
Jan 13 21:21:35.396453 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:21:35.615732 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:21:35.632281 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:21:35.632281 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 21:21:43.379690 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:21:43.521668 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 21:21:43.774354 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 21:21:43.985396 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:21:43.985396 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: files passed
Jan 13 21:21:44.024341 ignition[932]: INFO : Ignition finished successfully
Jan 13 21:21:43.991305 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:21:44.019420 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:21:44.053406 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:21:44.076798 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:21:44.240327 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:21:44.240327 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:21:44.076968 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:21:44.307348 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:21:44.106556 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:21:44.114651 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:21:44.143477 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:21:44.233795 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:21:44.233976 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:21:44.251718 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:21:44.265578 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:21:44.297551 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:21:44.304380 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:21:44.368642 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:21:44.393562 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:21:44.441285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:21:44.452749 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:21:44.474685 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:21:44.485753 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:21:44.485952 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:21:44.523693 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:21:44.533710 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:21:44.550697 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:21:44.566726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:21:44.586701 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:21:44.604731 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:21:44.621729 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:21:44.639847 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 21:21:44.659773 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 21:21:44.678699 systemd[1]: Stopped target swap.target - Swaps. Jan 13 21:21:44.695620 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 21:21:44.695872 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:21:44.729718 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:21:44.749669 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:21:44.774580 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 21:21:44.774794 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:21:44.784840 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 21:21:44.785108 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 21:21:44.830631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 21:21:44.830891 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 21:21:44.843740 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 21:21:44.910339 ignition[984]: INFO : Ignition 2.19.0 Jan 13 21:21:44.910339 ignition[984]: INFO : Stage: umount Jan 13 21:21:44.910339 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:44.910339 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:44.910339 ignition[984]: INFO : umount: umount passed Jan 13 21:21:44.910339 ignition[984]: INFO : Ignition finished successfully Jan 13 21:21:44.843921 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jan 13 21:21:44.868613 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 21:21:44.926491 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 21:21:44.961298 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 21:21:44.961580 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:21:44.979568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 21:21:44.979757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:21:45.010342 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 21:21:45.011510 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 21:21:45.011635 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 21:21:45.028121 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 21:21:45.028283 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 21:21:45.048839 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 21:21:45.048974 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 21:21:45.070563 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 21:21:45.070628 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 21:21:45.088467 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 21:21:45.088561 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 21:21:45.098602 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 21:21:45.098671 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 13 21:21:45.115607 systemd[1]: Stopped target network.target - Network. Jan 13 21:21:45.130543 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 21:21:45.130631 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 13 21:21:45.146605 systemd[1]: Stopped target paths.target - Path Units. Jan 13 21:21:45.164542 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 21:21:45.168278 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:21:45.179547 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 21:21:45.197528 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 21:21:45.223529 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 21:21:45.223598 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:21:45.234563 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 21:21:45.234627 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:21:45.268494 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 21:21:45.268584 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 21:21:45.276581 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 21:21:45.276657 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 21:21:45.310489 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 21:21:45.310569 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 21:21:45.318784 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 21:21:45.324222 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 13 21:21:45.345559 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 21:21:45.355963 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 21:21:45.356093 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 21:21:45.375537 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 21:21:45.375755 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 13 21:21:45.392443 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 21:21:45.392514 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:21:45.414302 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 21:21:45.430525 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 21:21:45.430616 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:21:45.446632 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:21:45.446730 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:21:45.472551 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 21:21:45.472622 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 21:21:45.499488 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 21:21:45.499571 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:21:45.519656 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:21:45.540933 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 21:21:45.942160 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Jan 13 21:21:45.541115 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:21:45.555660 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 21:21:45.555728 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 21:21:45.576568 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 21:21:45.576624 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:21:45.604525 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 21:21:45.604598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 13 21:21:45.642530 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 21:21:45.642630 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 21:21:45.666598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:21:45.666693 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:21:45.709381 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 21:21:45.723303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 21:21:45.723431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:21:45.735396 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 21:21:45.735487 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:21:45.747454 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 21:21:45.747553 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:21:45.767444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:21:45.767545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:45.788952 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 21:21:45.789098 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 21:21:45.808752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 21:21:45.808880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 21:21:45.830656 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 21:21:45.856429 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 21:21:45.896888 systemd[1]: Switching root. 
Jan 13 21:21:46.213304 systemd-journald[183]: Journal stopped Jan 13 21:21:30.100649 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025 Jan 13 21:21:30.100701 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:21:30.100721 kernel: BIOS-provided physical RAM map: Jan 13 21:21:30.100739 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Jan 13 21:21:30.100755 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Jan 13 21:21:30.100771 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Jan 13 21:21:30.100791 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Jan 13 21:21:30.100814 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Jan 13 21:21:30.100831 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Jan 13 21:21:30.100849 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Jan 13 21:21:30.100866 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Jan 13 21:21:30.100883 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Jan 13 21:21:30.100900 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Jan 13 21:21:30.100918 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Jan 13 21:21:30.100944 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Jan 13 21:21:30.100971 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Jan 13 
21:21:30.100990 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Jan 13 21:21:30.101009 kernel: NX (Execute Disable) protection: active Jan 13 21:21:30.101028 kernel: APIC: Static calls initialized Jan 13 21:21:30.101047 kernel: efi: EFI v2.7 by EDK II Jan 13 21:21:30.101066 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 Jan 13 21:21:30.101085 kernel: SMBIOS 2.4 present. Jan 13 21:21:30.101104 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Jan 13 21:21:30.101122 kernel: Hypervisor detected: KVM Jan 13 21:21:30.101167 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 13 21:21:30.101187 kernel: kvm-clock: using sched offset of 12419498753 cycles Jan 13 21:21:30.101203 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 13 21:21:30.101221 kernel: tsc: Detected 2299.998 MHz processor Jan 13 21:21:30.101238 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 13 21:21:30.101256 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 13 21:21:30.101274 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Jan 13 21:21:30.101290 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Jan 13 21:21:30.101313 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 13 21:21:30.101344 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Jan 13 21:21:30.101366 kernel: Using GB pages for direct mapping Jan 13 21:21:30.101385 kernel: Secure boot disabled Jan 13 21:21:30.101401 kernel: ACPI: Early table checksum verification disabled Jan 13 21:21:30.101420 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Jan 13 21:21:30.101439 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Jan 13 21:21:30.101459 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Jan 13 
21:21:30.101486 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Jan 13 21:21:30.101511 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Jan 13 21:21:30.101531 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Jan 13 21:21:30.101551 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Jan 13 21:21:30.101570 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Jan 13 21:21:30.101590 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Jan 13 21:21:30.101610 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Jan 13 21:21:30.101635 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Jan 13 21:21:30.101655 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Jan 13 21:21:30.101674 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Jan 13 21:21:30.101694 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Jan 13 21:21:30.101714 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Jan 13 21:21:30.101734 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Jan 13 21:21:30.101753 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Jan 13 21:21:30.101773 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Jan 13 21:21:30.101792 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Jan 13 21:21:30.101817 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Jan 13 21:21:30.101837 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 13 21:21:30.101857 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 13 21:21:30.101876 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 13 21:21:30.101896 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00100000-0xbfffffff] Jan 13 21:21:30.101916 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Jan 13 21:21:30.101936 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Jan 13 21:21:30.101965 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Jan 13 21:21:30.101985 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Jan 13 21:21:30.102011 kernel: Zone ranges: Jan 13 21:21:30.102031 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 13 21:21:30.102051 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 13 21:21:30.102071 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Jan 13 21:21:30.102091 kernel: Movable zone start for each node Jan 13 21:21:30.102110 kernel: Early memory node ranges Jan 13 21:21:30.102130 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Jan 13 21:21:30.102221 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Jan 13 21:21:30.102242 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Jan 13 21:21:30.102268 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Jan 13 21:21:30.102288 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Jan 13 21:21:30.102308 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Jan 13 21:21:30.102328 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 13 21:21:30.102348 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Jan 13 21:21:30.102368 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Jan 13 21:21:30.102388 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Jan 13 21:21:30.102408 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Jan 13 21:21:30.102428 kernel: ACPI: PM-Timer IO Port: 0xb008 Jan 13 21:21:30.102452 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 13 21:21:30.102472 kernel: IOAPIC[0]: apic_id 0, 
version 17, address 0xfec00000, GSI 0-23 Jan 13 21:21:30.102492 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 13 21:21:30.102511 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 13 21:21:30.102531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 13 21:21:30.102551 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 13 21:21:30.102571 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 13 21:21:30.102591 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 13 21:21:30.102610 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 13 21:21:30.102635 kernel: Booting paravirtualized kernel on KVM Jan 13 21:21:30.102655 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 13 21:21:30.102675 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 13 21:21:30.102695 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 13 21:21:30.102715 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 13 21:21:30.102734 kernel: pcpu-alloc: [0] 0 1 Jan 13 21:21:30.102753 kernel: kvm-guest: PV spinlocks enabled Jan 13 21:21:30.102773 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 13 21:21:30.102796 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:21:30.102821 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jan 13 21:21:30.102841 kernel: random: crng init done Jan 13 21:21:30.102860 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 13 21:21:30.102880 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 13 21:21:30.102899 kernel: Fallback order for Node 0: 0 Jan 13 21:21:30.102919 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Jan 13 21:21:30.102939 kernel: Policy zone: Normal Jan 13 21:21:30.102967 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 13 21:21:30.102992 kernel: software IO TLB: area num 2. Jan 13 21:21:30.103012 kernel: Memory: 7513380K/7860584K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 346944K reserved, 0K cma-reserved) Jan 13 21:21:30.103032 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 13 21:21:30.103052 kernel: Kernel/User page tables isolation: enabled Jan 13 21:21:30.103072 kernel: ftrace: allocating 37918 entries in 149 pages Jan 13 21:21:30.103092 kernel: ftrace: allocated 149 pages with 4 groups Jan 13 21:21:30.103112 kernel: Dynamic Preempt: voluntary Jan 13 21:21:30.103132 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 13 21:21:30.103194 kernel: rcu: RCU event tracing is enabled. Jan 13 21:21:30.103236 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 13 21:21:30.103257 kernel: Trampoline variant of Tasks RCU enabled. Jan 13 21:21:30.103278 kernel: Rude variant of Tasks RCU enabled. Jan 13 21:21:30.103304 kernel: Tracing variant of Tasks RCU enabled. Jan 13 21:21:30.103325 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 13 21:21:30.103346 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 13 21:21:30.103367 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 13 21:21:30.103388 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 13 21:21:30.103409 kernel: Console: colour dummy device 80x25 Jan 13 21:21:30.103435 kernel: printk: console [ttyS0] enabled Jan 13 21:21:30.103462 kernel: ACPI: Core revision 20230628 Jan 13 21:21:30.103483 kernel: APIC: Switch to symmetric I/O mode setup Jan 13 21:21:30.103504 kernel: x2apic enabled Jan 13 21:21:30.103525 kernel: APIC: Switched APIC routing to: physical x2apic Jan 13 21:21:30.103547 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Jan 13 21:21:30.103568 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 21:21:30.103589 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Jan 13 21:21:30.103615 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Jan 13 21:21:30.103636 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Jan 13 21:21:30.103657 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 13 21:21:30.103678 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 13 21:21:30.103699 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 13 21:21:30.103720 kernel: Spectre V2 : Mitigation: IBRS Jan 13 21:21:30.103741 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 13 21:21:30.103762 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 13 21:21:30.103783 kernel: RETBleed: Mitigation: IBRS Jan 13 21:21:30.103809 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 13 21:21:30.103830 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Jan 13 21:21:30.103852 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 13 21:21:30.103872 kernel: MDS: Mitigation: Clear CPU buffers Jan 13 21:21:30.103894 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 
13 21:21:30.103915 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 13 21:21:30.103936 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 13 21:21:30.103965 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 13 21:21:30.103986 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 13 21:21:30.104013 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 13 21:21:30.104034 kernel: Freeing SMP alternatives memory: 32K Jan 13 21:21:30.104055 kernel: pid_max: default: 32768 minimum: 301 Jan 13 21:21:30.104076 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 13 21:21:30.104097 kernel: landlock: Up and running. Jan 13 21:21:30.104117 kernel: SELinux: Initializing. Jan 13 21:21:30.104138 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 21:21:30.104180 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 13 21:21:30.104201 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Jan 13 21:21:30.104228 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:21:30.104250 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:21:30.104271 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 13 21:21:30.104292 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Jan 13 21:21:30.104313 kernel: signal: max sigframe size: 1776 Jan 13 21:21:30.104334 kernel: rcu: Hierarchical SRCU implementation. Jan 13 21:21:30.104355 kernel: rcu: Max phase no-delay instances is 400. Jan 13 21:21:30.104376 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 13 21:21:30.104397 kernel: smp: Bringing up secondary CPUs ... 
Jan 13 21:21:30.104423 kernel: smpboot: x86: Booting SMP configuration: Jan 13 21:21:30.104444 kernel: .... node #0, CPUs: #1 Jan 13 21:21:30.104466 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jan 13 21:21:30.104488 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jan 13 21:21:30.104509 kernel: smp: Brought up 1 node, 2 CPUs Jan 13 21:21:30.104529 kernel: smpboot: Max logical packages: 1 Jan 13 21:21:30.104550 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Jan 13 21:21:30.104571 kernel: devtmpfs: initialized Jan 13 21:21:30.104597 kernel: x86/mm: Memory block size: 128MB Jan 13 21:21:30.104618 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Jan 13 21:21:30.104639 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 13 21:21:30.104660 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 13 21:21:30.104681 kernel: pinctrl core: initialized pinctrl subsystem Jan 13 21:21:30.104702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 13 21:21:30.104723 kernel: audit: initializing netlink subsys (disabled) Jan 13 21:21:30.104744 kernel: audit: type=2000 audit(1736803289.131:1): state=initialized audit_enabled=0 res=1 Jan 13 21:21:30.104765 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 13 21:21:30.104791 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 13 21:21:30.104812 kernel: cpuidle: using governor menu Jan 13 21:21:30.104833 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 13 21:21:30.104854 kernel: dca service started, version 1.12.1 Jan 13 21:21:30.104874 kernel: PCI: Using configuration type 1 for base access Jan 13 
21:21:30.104896 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 13 21:21:30.104916 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 13 21:21:30.104937 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 13 21:21:30.104966 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 13 21:21:30.104992 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 13 21:21:30.105013 kernel: ACPI: Added _OSI(Module Device) Jan 13 21:21:30.105034 kernel: ACPI: Added _OSI(Processor Device) Jan 13 21:21:30.105055 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 13 21:21:30.105076 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 13 21:21:30.105096 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jan 13 21:21:30.105117 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 13 21:21:30.105138 kernel: ACPI: Interpreter enabled Jan 13 21:21:30.105173 kernel: ACPI: PM: (supports S0 S3 S5) Jan 13 21:21:30.105200 kernel: ACPI: Using IOAPIC for interrupt routing Jan 13 21:21:30.105221 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 13 21:21:30.105242 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 13 21:21:30.105263 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jan 13 21:21:30.105284 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 13 21:21:30.105599 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 13 21:21:30.106354 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 13 21:21:30.106989 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 13 21:21:30.107025 kernel: PCI host bridge to bus 0000:00 Jan 13 21:21:30.107255 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Jan 13 21:21:30.107470 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 13 21:21:30.107672 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 13 21:21:30.107876 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Jan 13 21:21:30.108088 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 21:21:30.108360 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 13 21:21:30.108602 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Jan 13 21:21:30.108865 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 13 21:21:30.109101 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jan 13 21:21:30.109634 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Jan 13 21:21:30.109867 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Jan 13 21:21:30.110108 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Jan 13 21:21:30.110367 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 13 21:21:30.110591 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Jan 13 21:21:30.110854 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Jan 13 21:21:30.111173 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Jan 13 21:21:30.111428 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Jan 13 21:21:30.111661 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Jan 13 21:21:30.111697 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 13 21:21:30.111720 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 13 21:21:30.111743 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 13 21:21:30.111765 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 13 21:21:30.111786 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 13 21:21:30.111808 kernel: iommu: Default domain type: Translated Jan 13 21:21:30.111830 kernel: iommu: DMA 
domain TLB invalidation policy: lazy mode Jan 13 21:21:30.111852 kernel: efivars: Registered efivars operations Jan 13 21:21:30.111873 kernel: PCI: Using ACPI for IRQ routing Jan 13 21:21:30.111899 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 13 21:21:30.111917 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Jan 13 21:21:30.111938 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Jan 13 21:21:30.111970 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Jan 13 21:21:30.111991 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Jan 13 21:21:30.112012 kernel: vgaarb: loaded Jan 13 21:21:30.112033 kernel: clocksource: Switched to clocksource kvm-clock Jan 13 21:21:30.112054 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 21:21:30.112075 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 21:21:30.112102 kernel: pnp: PnP ACPI init Jan 13 21:21:30.112122 kernel: pnp: PnP ACPI: found 7 devices Jan 13 21:21:30.112165 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 13 21:21:30.112187 kernel: NET: Registered PF_INET protocol family Jan 13 21:21:30.112207 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 21:21:30.112229 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 13 21:21:30.112249 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 21:21:30.112270 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 13 21:21:30.112290 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 21:21:30.112317 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 13 21:21:30.112337 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 21:21:30.112359 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 13 21:21:30.112379 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 21:21:30.112399 kernel: NET: Registered PF_XDP protocol family Jan 13 21:21:30.112608 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 13 21:21:30.112805 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 13 21:21:30.113010 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 13 21:21:30.113282 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Jan 13 21:21:30.113530 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 13 21:21:30.113560 kernel: PCI: CLS 0 bytes, default 64 Jan 13 21:21:30.113582 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 13 21:21:30.113603 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Jan 13 21:21:30.113624 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 13 21:21:30.113645 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Jan 13 21:21:30.113665 kernel: clocksource: Switched to clocksource tsc Jan 13 21:21:30.113693 kernel: Initialise system trusted keyrings Jan 13 21:21:30.113713 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 13 21:21:30.113734 kernel: Key type asymmetric registered Jan 13 21:21:30.113754 kernel: Asymmetric key parser 'x509' registered Jan 13 21:21:30.113774 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 13 21:21:30.113794 kernel: io scheduler mq-deadline registered Jan 13 21:21:30.113815 kernel: io scheduler kyber registered Jan 13 21:21:30.113835 kernel: io scheduler bfq registered Jan 13 21:21:30.113856 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 13 21:21:30.113882 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 13 21:21:30.114119 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Jan 13 21:21:30.116654 kernel: ACPI: \_SB_.LNKD: 
Enabled at IRQ 10 Jan 13 21:21:30.117312 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Jan 13 21:21:30.117346 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 13 21:21:30.117581 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Jan 13 21:21:30.117609 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 21:21:30.117632 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 13 21:21:30.117654 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 13 21:21:30.117684 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Jan 13 21:21:30.117705 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Jan 13 21:21:30.117937 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Jan 13 21:21:30.117974 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 13 21:21:30.117995 kernel: i8042: Warning: Keylock active Jan 13 21:21:30.118016 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 13 21:21:30.118038 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 13 21:21:30.118617 kernel: rtc_cmos 00:00: RTC can wake from S4 Jan 13 21:21:30.118846 kernel: rtc_cmos 00:00: registered as rtc0 Jan 13 21:21:30.119071 kernel: rtc_cmos 00:00: setting system clock to 2025-01-13T21:21:29 UTC (1736803289) Jan 13 21:21:30.119318 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jan 13 21:21:30.119347 kernel: intel_pstate: CPU model not supported Jan 13 21:21:30.119368 kernel: pstore: Using crash dump compression: deflate Jan 13 21:21:30.119390 kernel: pstore: Registered efi_pstore as persistent store backend Jan 13 21:21:30.119412 kernel: NET: Registered PF_INET6 protocol family Jan 13 21:21:30.119434 kernel: Segment Routing with IPv6 Jan 13 21:21:30.119463 kernel: In-situ OAM (IOAM) with IPv6 Jan 13 21:21:30.119485 kernel: NET: Registered PF_PACKET protocol family Jan 13 
21:21:30.119507 kernel: Key type dns_resolver registered Jan 13 21:21:30.119528 kernel: IPI shorthand broadcast: enabled Jan 13 21:21:30.119550 kernel: sched_clock: Marking stable (851004192, 142085545)->(1025826894, -32737157) Jan 13 21:21:30.119572 kernel: registered taskstats version 1 Jan 13 21:21:30.119593 kernel: Loading compiled-in X.509 certificates Jan 13 21:21:30.119615 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 13 21:21:30.119637 kernel: Key type .fscrypt registered Jan 13 21:21:30.119663 kernel: Key type fscrypt-provisioning registered Jan 13 21:21:30.119685 kernel: ima: Allocated hash algorithm: sha1 Jan 13 21:21:30.119706 kernel: ima: No architecture policies found Jan 13 21:21:30.119728 kernel: clk: Disabling unused clocks Jan 13 21:21:30.119750 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 13 21:21:30.119771 kernel: Write protecting the kernel read-only data: 36864k Jan 13 21:21:30.119792 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 13 21:21:30.119814 kernel: Run /init as init process Jan 13 21:21:30.119841 kernel: with arguments: Jan 13 21:21:30.119862 kernel: /init Jan 13 21:21:30.119883 kernel: with environment: Jan 13 21:21:30.119904 kernel: HOME=/ Jan 13 21:21:30.119925 kernel: TERM=linux Jan 13 21:21:30.119947 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 13 21:21:30.119976 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 13 21:21:30.120001 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 21:21:30.120032 systemd[1]: Detected virtualization google. 
Jan 13 21:21:30.120056 systemd[1]: Detected architecture x86-64. Jan 13 21:21:30.120078 systemd[1]: Running in initrd. Jan 13 21:21:30.120101 systemd[1]: No hostname configured, using default hostname. Jan 13 21:21:30.120123 systemd[1]: Hostname set to . Jan 13 21:21:30.122335 systemd[1]: Initializing machine ID from random generator. Jan 13 21:21:30.122446 systemd[1]: Queued start job for default target initrd.target. Jan 13 21:21:30.122470 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 21:21:30.122501 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 21:21:30.122525 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 21:21:30.122548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 21:21:30.122571 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 21:21:30.122595 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 21:21:30.122621 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 13 21:21:30.122645 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 13 21:21:30.122673 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 21:21:30.122695 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:21:30.122741 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:21:30.122769 systemd[1]: Reached target slices.target - Slice Units. Jan 13 21:21:30.122793 systemd[1]: Reached target swap.target - Swaps. Jan 13 21:21:30.122816 systemd[1]: Reached target timers.target - Timer Units. 
Jan 13 21:21:30.122845 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 21:21:30.122869 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 21:21:30.122892 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 21:21:30.122914 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 13 21:21:30.122938 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 21:21:30.122970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 21:21:30.122994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 21:21:30.123018 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:21:30.123041 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:21:30.123068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:21:30.123092 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:21:30.123116 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:21:30.123139 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:21:30.123182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:21:30.123205 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:21:30.123270 systemd-journald[183]: Collecting audit messages is disabled. Jan 13 21:21:30.123327 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:21:30.123352 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:21:30.123375 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:21:30.123404 systemd-journald[183]: Journal started Jan 13 21:21:30.123448 systemd-journald[183]: Runtime Journal (/run/log/journal/11ac6d9dfbea4ca28b4cef940acdbd3a) is 8.0M, max 148.7M, 140.7M free. 
Jan 13 21:21:30.111274 systemd-modules-load[184]: Inserted module 'overlay' Jan 13 21:21:30.133291 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 21:21:30.135394 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:21:30.137675 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:21:30.155655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:30.170295 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:21:30.170338 kernel: Bridge firewalling registered Jan 13 21:21:30.166513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:21:30.169657 systemd-modules-load[184]: Inserted module 'br_netfilter' Jan 13 21:21:30.185567 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:21:30.186134 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:21:30.186556 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:21:30.194404 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:21:30.196621 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:21:30.220644 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:21:30.225535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:21:30.233607 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:21:30.250364 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 13 21:21:30.257002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:21:30.267758 dracut-cmdline[217]: dracut-dracut-053 Jan 13 21:21:30.272340 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:21:30.322761 systemd-resolved[220]: Positive Trust Anchors: Jan 13 21:21:30.322781 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:21:30.322855 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:21:30.328668 systemd-resolved[220]: Defaulting to hostname 'linux'. Jan 13 21:21:30.331852 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:21:30.338471 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:21:30.384190 kernel: SCSI subsystem initialized Jan 13 21:21:30.394185 kernel: Loading iSCSI transport class v2.0-870. 
Jan 13 21:21:30.406167 kernel: iscsi: registered transport (tcp) Jan 13 21:21:30.430192 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:21:30.430271 kernel: QLogic iSCSI HBA Driver Jan 13 21:21:30.482760 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:21:30.498391 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:21:30.540958 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 13 21:21:30.541045 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:21:30.541074 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:21:30.586203 kernel: raid6: avx2x4 gen() 17939 MB/s Jan 13 21:21:30.603192 kernel: raid6: avx2x2 gen() 17849 MB/s Jan 13 21:21:30.620688 kernel: raid6: avx2x1 gen() 13995 MB/s Jan 13 21:21:30.620749 kernel: raid6: using algorithm avx2x4 gen() 17939 MB/s Jan 13 21:21:30.638663 kernel: raid6: .... xor() 6897 MB/s, rmw enabled Jan 13 21:21:30.638716 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:21:30.662193 kernel: xor: automatically using best checksumming function avx Jan 13 21:21:30.835199 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:21:30.847846 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:21:30.861411 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:21:30.878773 systemd-udevd[401]: Using default interface naming scheme 'v255'. Jan 13 21:21:30.885647 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:21:30.894879 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:21:30.930937 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 13 21:21:30.968579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 13 21:21:30.985391 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:21:31.064432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:21:31.079627 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:21:31.116126 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:21:31.124120 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 21:21:31.128301 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:21:31.133347 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:21:31.146866 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:21:31.186866 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:21:31.206174 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:21:31.216226 kernel: scsi host0: Virtio SCSI HBA Jan 13 21:21:31.221167 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jan 13 21:21:31.244433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:21:31.244650 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:21:31.257539 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:21:31.301047 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:21:31.301088 kernel: AES CTR mode by8 optimization enabled Jan 13 21:21:31.263587 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:21:31.263857 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:31.266377 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:21:31.291560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 21:21:31.333192 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jan 13 21:21:31.356784 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jan 13 21:21:31.357066 kernel: sd 0:0:1:0: [sda] Write Protect is off Jan 13 21:21:31.357324 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jan 13 21:21:31.357565 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 13 21:21:31.357799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:21:31.357836 kernel: GPT:17805311 != 25165823 Jan 13 21:21:31.357860 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:21:31.357900 kernel: GPT:17805311 != 25165823 Jan 13 21:21:31.357924 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:21:31.357950 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:21:31.357977 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jan 13 21:21:31.338138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:21:31.360223 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:21:31.399871 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:21:31.418197 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (459) Jan 13 21:21:31.425173 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451) Jan 13 21:21:31.434530 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Jan 13 21:21:31.448993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Jan 13 21:21:31.461346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. 
Jan 13 21:21:31.472658 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Jan 13 21:21:31.472907 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Jan 13 21:21:31.485496 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:21:31.502367 disk-uuid[550]: Primary Header is updated. Jan 13 21:21:31.502367 disk-uuid[550]: Secondary Entries is updated. Jan 13 21:21:31.502367 disk-uuid[550]: Secondary Header is updated. Jan 13 21:21:31.520172 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:21:31.541185 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:21:31.565195 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:21:32.557545 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 13 21:21:32.557628 disk-uuid[551]: The operation has completed successfully. Jan 13 21:21:32.636499 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:21:32.636669 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:21:32.667365 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:21:32.692432 sh[568]: Success Jan 13 21:21:32.717786 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 13 21:21:32.795293 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:21:32.802722 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:21:32.830710 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 21:21:32.870179 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:21:32.870260 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:21:32.887220 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:21:32.887321 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:21:32.894053 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:21:33.006184 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 13 21:21:33.012894 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:21:33.013903 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:21:33.019352 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:21:33.040387 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:21:33.092474 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:33.092561 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:21:33.092588 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:21:33.117530 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:21:33.117611 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 21:21:33.132807 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:21:33.151345 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:33.158025 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:21:33.175435 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 21:21:33.233528 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:21:33.244630 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:21:33.355649 systemd-networkd[750]: lo: Link UP Jan 13 21:21:33.356201 systemd-networkd[750]: lo: Gained carrier Jan 13 21:21:33.362936 systemd-networkd[750]: Enumeration completed Jan 13 21:21:33.363099 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:21:33.364064 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:21:33.377478 ignition[685]: Ignition 2.19.0 Jan 13 21:21:33.364072 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:21:33.377487 ignition[685]: Stage: fetch-offline Jan 13 21:21:33.367210 systemd-networkd[750]: eth0: Link UP Jan 13 21:21:33.377528 ignition[685]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:33.367216 systemd-networkd[750]: eth0: Gained carrier Jan 13 21:21:33.377539 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:33.367232 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:21:33.377648 ignition[685]: parsed url from cmdline: "" Jan 13 21:21:33.380229 systemd-networkd[750]: eth0: DHCPv4 address 10.128.0.40/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 21:21:33.377654 ignition[685]: no config URL provided Jan 13 21:21:33.386857 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:21:33.377663 ignition[685]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:21:33.407991 systemd[1]: Reached target network.target - Network. 
Jan 13 21:21:33.377673 ignition[685]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:21:33.441346 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 21:21:33.377681 ignition[685]: failed to fetch config: resource requires networking Jan 13 21:21:33.480978 unknown[759]: fetched base config from "system" Jan 13 21:21:33.377969 ignition[685]: Ignition finished successfully Jan 13 21:21:33.480994 unknown[759]: fetched base config from "system" Jan 13 21:21:33.470343 ignition[759]: Ignition 2.19.0 Jan 13 21:21:33.481007 unknown[759]: fetched user config from "gcp" Jan 13 21:21:33.470352 ignition[759]: Stage: fetch Jan 13 21:21:33.483940 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 21:21:33.470580 ignition[759]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:33.505398 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 21:21:33.470594 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:33.554806 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:21:33.470725 ignition[759]: parsed url from cmdline: "" Jan 13 21:21:33.580404 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:21:33.470731 ignition[759]: no config URL provided Jan 13 21:21:33.618450 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:21:33.470749 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:21:33.630502 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:21:33.470760 ignition[759]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:21:33.651336 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:21:33.470782 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jan 13 21:21:33.668328 systemd[1]: Reached target local-fs.target - Local File Systems. 
Jan 13 21:21:33.474490 ignition[759]: GET result: OK Jan 13 21:21:33.683320 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:21:33.474567 ignition[759]: parsing config with SHA512: bc43e4d65b793a31e011e9747a379bd249a0b11453d2e3c0e6a8781f9903ed2b5842e5ef5610b70bfdac61afa59fdd75cc663070f9685ad2a447c86ce66d042b Jan 13 21:21:33.698320 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:21:33.481985 ignition[759]: fetch: fetch complete Jan 13 21:21:33.717386 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:21:33.481997 ignition[759]: fetch: fetch passed Jan 13 21:21:33.482078 ignition[759]: Ignition finished successfully Jan 13 21:21:33.530526 ignition[765]: Ignition 2.19.0 Jan 13 21:21:33.530551 ignition[765]: Stage: kargs Jan 13 21:21:33.530775 ignition[765]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:33.530792 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:33.531902 ignition[765]: kargs: kargs passed Jan 13 21:21:33.531957 ignition[765]: Ignition finished successfully Jan 13 21:21:33.615777 ignition[771]: Ignition 2.19.0 Jan 13 21:21:33.615787 ignition[771]: Stage: disks Jan 13 21:21:33.616030 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:33.616044 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:33.617288 ignition[771]: disks: disks passed Jan 13 21:21:33.617345 ignition[771]: Ignition finished successfully Jan 13 21:21:33.773742 systemd-fsck[780]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 13 21:21:33.954340 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:21:33.959342 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 21:21:34.104199 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. 
Jan 13 21:21:34.105030 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:21:34.119929 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:21:34.125282 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:21:34.153615 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:21:34.173181 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (788) Jan 13 21:21:34.192862 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:34.192944 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:21:34.192988 kernel: BTRFS info (device sda6): using free space tree Jan 13 21:21:34.202646 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:21:34.239312 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 13 21:21:34.239361 kernel: BTRFS info (device sda6): auto enabling async discard Jan 13 21:21:34.202711 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:21:34.202752 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:21:34.226408 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:21:34.250098 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:21:34.274342 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 21:21:34.392720 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:21:34.403323 initrd-setup-root[822]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:21:34.413318 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:21:34.423268 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:21:34.550516 systemd-networkd[750]: eth0: Gained IPv6LL Jan 13 21:21:34.555613 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:21:34.577291 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:21:34.598184 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:21:34.610394 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:21:34.621406 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:21:34.665167 ignition[903]: INFO : Ignition 2.19.0 Jan 13 21:21:34.665167 ignition[903]: INFO : Stage: mount Jan 13 21:21:34.679313 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:21:34.679313 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jan 13 21:21:34.679313 ignition[903]: INFO : mount: mount passed Jan 13 21:21:34.679313 ignition[903]: INFO : Ignition finished successfully Jan 13 21:21:34.669064 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:21:34.699737 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:21:34.725314 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:21:35.111433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 21:21:35.156178 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (915)
Jan 13 21:21:35.167192 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 13 21:21:35.167289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 13 21:21:35.180306 kernel: BTRFS info (device sda6): using free space tree
Jan 13 21:21:35.196450 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 13 21:21:35.196538 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 13 21:21:35.199541 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:21:35.241495 ignition[932]: INFO : Ignition 2.19.0
Jan 13 21:21:35.241495 ignition[932]: INFO : Stage: files
Jan 13 21:21:35.256527 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:35.256527 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:35.256527 ignition[932]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:21:35.256527 ignition[932]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:21:35.256527 ignition[932]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:21:35.256527 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 13 21:21:35.253260 unknown[932]: wrote ssh authorized keys file for user: core
Jan 13 21:21:35.396453 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 21:21:35.615732 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 13 21:21:35.632281 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:21:35.632281 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jan 13 21:21:43.379690 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 13 21:21:43.521668 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:21:43.537333 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Jan 13 21:21:43.774354 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 21:21:43.985396 ignition[932]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Jan 13 21:21:43.985396 ignition[932]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:21:44.024341 ignition[932]: INFO : files: files passed
Jan 13 21:21:44.024341 ignition[932]: INFO : Ignition finished successfully
Jan 13 21:21:43.991305 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:21:44.019420 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:21:44.053406 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:21:44.076798 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:21:44.240327 initrd-setup-root-after-ignition[959]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:21:44.240327 initrd-setup-root-after-ignition[959]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:21:44.076968 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:21:44.307348 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:21:44.106556 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:21:44.114651 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:21:44.143477 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:21:44.233795 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:21:44.233976 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:21:44.251718 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:21:44.265578 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:21:44.297551 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:21:44.304380 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:21:44.368642 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:21:44.393562 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:21:44.441285 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:21:44.452749 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:21:44.474685 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:21:44.485753 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:21:44.485952 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:21:44.523693 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:21:44.533710 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:21:44.550697 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:21:44.566726 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:21:44.586701 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:21:44.604731 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:21:44.621729 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:21:44.639847 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:21:44.659773 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:21:44.678699 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:21:44.695620 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:21:44.695872 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:21:44.729718 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:21:44.749669 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:21:44.774580 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:21:44.774794 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:21:44.784840 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:21:44.785108 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:21:44.830631 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:21:44.830891 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:21:44.843740 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:21:44.910339 ignition[984]: INFO : Ignition 2.19.0
Jan 13 21:21:44.910339 ignition[984]: INFO : Stage: umount
Jan 13 21:21:44.910339 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:21:44.910339 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Jan 13 21:21:44.910339 ignition[984]: INFO : umount: umount passed
Jan 13 21:21:44.910339 ignition[984]: INFO : Ignition finished successfully
Jan 13 21:21:44.843921 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:21:44.868613 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:21:44.926491 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:21:44.961298 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:21:44.961580 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:21:44.979568 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:21:44.979757 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:21:45.010342 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:21:45.011510 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:21:45.011635 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:21:45.028121 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:21:45.028283 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:21:45.048839 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:21:45.048974 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:21:45.070563 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:21:45.070628 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:21:45.088467 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:21:45.088561 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:21:45.098602 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 13 21:21:45.098671 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 13 21:21:45.115607 systemd[1]: Stopped target network.target - Network.
Jan 13 21:21:45.130543 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:21:45.130631 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:21:45.146605 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:21:45.164542 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:21:45.168278 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:21:45.179547 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:21:45.197528 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:21:45.223529 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:21:45.223598 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:21:45.234563 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:21:45.234627 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:21:45.268494 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:21:45.268584 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:21:45.276581 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:21:45.276657 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:21:45.310489 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:21:45.310569 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:21:45.318784 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:21:45.324222 systemd-networkd[750]: eth0: DHCPv6 lease lost
Jan 13 21:21:45.345559 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:21:45.355963 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:21:45.356093 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:21:45.375537 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:21:45.375755 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:21:45.392443 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:21:45.392514 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:21:45.414302 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:21:45.430525 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:21:45.430616 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:21:45.446632 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:21:45.446730 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:21:45.472551 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:21:45.472622 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:21:45.499488 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:21:45.499571 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:21:45.519656 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:21:45.540933 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:21:45.942160 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:21:45.541115 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:21:45.555660 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:21:45.555728 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:21:45.576568 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:21:45.576624 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:21:45.604525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:21:45.604598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:21:45.642530 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:21:45.642630 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:21:45.666598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:21:45.666693 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:21:45.709381 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:21:45.723303 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:21:45.723431 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:21:45.735396 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:21:45.735487 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:21:45.747454 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:21:45.747553 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:21:45.767444 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:21:45.767545 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:21:45.788952 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:21:45.789098 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:21:45.808752 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:21:45.808880 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:21:45.830656 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:21:45.856429 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:21:45.896888 systemd[1]: Switching root.
Jan 13 21:21:46.213304 systemd-journald[183]: Journal stopped
Jan 13 21:21:48.633822 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:21:48.633864 kernel: SELinux: policy capability open_perms=1
Jan 13 21:21:48.633878 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:21:48.633889 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:21:48.633900 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:21:48.633910 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:21:48.633923 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:21:48.633937 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:21:48.633949 kernel: audit: type=1403 audit(1736803306.469:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:21:48.633963 systemd[1]: Successfully loaded SELinux policy in 94.728ms.
Jan 13 21:21:48.633977 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.171ms.
Jan 13 21:21:48.633991 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:21:48.634003 systemd[1]: Detected virtualization google.
Jan 13 21:21:48.634015 systemd[1]: Detected architecture x86-64.
Jan 13 21:21:48.634032 systemd[1]: Detected first boot.
Jan 13 21:21:48.634045 systemd[1]: Initializing machine ID from random generator.
Jan 13 21:21:48.634058 zram_generator::config[1025]: No configuration found.
Jan 13 21:21:48.634072 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:21:48.634084 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:21:48.634100 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:21:48.634119 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:21:48.634134 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:21:48.634170 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:21:48.634187 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:21:48.634201 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:21:48.634216 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:21:48.634234 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:21:48.634247 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:21:48.634260 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:21:48.634273 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:21:48.634287 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:21:48.634300 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:21:48.634312 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:21:48.634326 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:21:48.634342 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:21:48.634355 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:21:48.634368 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:21:48.634381 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:21:48.634394 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:21:48.634407 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:21:48.634424 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:21:48.634438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:21:48.634451 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:21:48.634470 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:21:48.634484 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:21:48.634497 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:21:48.634512 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:21:48.634525 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:21:48.634538 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:21:48.634552 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:21:48.634569 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:21:48.634583 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:21:48.634597 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:21:48.634611 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:21:48.634625 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:21:48.634641 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:21:48.634655 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:21:48.634668 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:21:48.634682 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:21:48.634696 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:21:48.634716 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:21:48.634730 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:21:48.634743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:21:48.634760 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:21:48.634775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:21:48.634788 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:21:48.634802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:21:48.634817 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:21:48.634830 kernel: fuse: init (API version 7.39)
Jan 13 21:21:48.634843 kernel: ACPI: bus type drm_connector registered
Jan 13 21:21:48.634855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:21:48.634872 kernel: loop: module loaded
Jan 13 21:21:48.634885 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:21:48.634899 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:21:48.634913 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:21:48.634927 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:21:48.634940 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:21:48.634953 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:21:48.634967 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:21:48.634981 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:21:48.635024 systemd-journald[1112]: Collecting audit messages is disabled.
Jan 13 21:21:48.635063 systemd-journald[1112]: Journal started
Jan 13 21:21:48.635113 systemd-journald[1112]: Runtime Journal (/run/log/journal/4d51abe00c92415b8eeff0a79e11978e) is 8.0M, max 148.7M, 140.7M free.
Jan 13 21:21:47.385661 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:21:47.415633 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 13 21:21:47.416264 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:21:48.654200 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:21:48.683212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:21:48.705802 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:21:48.705906 systemd[1]: Stopped verity-setup.service.
Jan 13 21:21:48.732181 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:21:48.741172 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:21:48.752854 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:21:48.764663 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:21:48.774576 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:21:48.785568 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:21:48.795493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:21:48.805505 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:21:48.815690 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:21:48.827743 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:21:48.839751 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:21:48.839994 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:21:48.851688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:21:48.851916 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:21:48.863673 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:21:48.863927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:21:48.874706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:21:48.874955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:21:48.886764 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:21:48.887011 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:21:48.897780 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:21:48.898033 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:21:48.908729 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:21:48.918723 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:21:48.930747 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:21:48.942742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:21:48.968295 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:21:48.990429 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:21:49.016379 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:21:49.026338 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:21:49.026412 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:21:49.037748 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:21:49.061515 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:21:49.073830 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:21:49.083525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:21:49.091654 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:21:49.109519 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:21:49.120402 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:21:49.129434 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:21:49.140554 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:21:49.149421 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:21:49.163253 systemd-journald[1112]: Time spent on flushing to /var/log/journal/4d51abe00c92415b8eeff0a79e11978e is 83.651ms for 934 entries.
Jan 13 21:21:49.163253 systemd-journald[1112]: System Journal (/var/log/journal/4d51abe00c92415b8eeff0a79e11978e) is 8.0M, max 584.8M, 576.8M free.
Jan 13 21:21:49.299700 systemd-journald[1112]: Received client request to flush runtime journal. Jan 13 21:21:49.299774 kernel: loop0: detected capacity change from 0 to 140768 Jan 13 21:21:49.177028 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 21:21:49.196431 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:21:49.212408 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 21:21:49.226992 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 21:21:49.238498 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 21:21:49.255754 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 21:21:49.267791 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 21:21:49.303612 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 21:21:49.316784 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:21:49.329835 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 21:21:49.354574 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 21:21:49.364170 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 21:21:49.365076 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Jan 13 21:21:49.365111 systemd-tmpfiles[1144]: ACLs are not supported, ignoring. Jan 13 21:21:49.373194 udevadm[1146]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 13 21:21:49.385647 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jan 13 21:21:49.403901 kernel: loop1: detected capacity change from 0 to 142488 Jan 13 21:21:49.409944 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 21:21:49.421687 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 21:21:49.422806 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 21:21:49.511786 kernel: loop2: detected capacity change from 0 to 205544 Jan 13 21:21:49.548943 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 21:21:49.573348 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:21:49.615180 kernel: loop3: detected capacity change from 0 to 54824 Jan 13 21:21:49.638782 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 13 21:21:49.639576 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jan 13 21:21:49.651472 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:21:49.705177 kernel: loop4: detected capacity change from 0 to 140768 Jan 13 21:21:49.772663 kernel: loop5: detected capacity change from 0 to 142488 Jan 13 21:21:49.824179 kernel: loop6: detected capacity change from 0 to 205544 Jan 13 21:21:49.874325 kernel: loop7: detected capacity change from 0 to 54824 Jan 13 21:21:49.899180 (sd-merge)[1171]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Jan 13 21:21:49.900612 (sd-merge)[1171]: Merged extensions into '/usr'. Jan 13 21:21:49.912879 systemd[1]: Reloading requested from client PID 1143 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 21:21:49.912901 systemd[1]: Reloading... Jan 13 21:21:50.068405 zram_generator::config[1195]: No configuration found. Jan 13 21:21:50.332833 ldconfig[1138]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 13 21:21:50.355361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:50.469352 systemd[1]: Reloading finished in 555 ms. Jan 13 21:21:50.498897 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 21:21:50.508872 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 21:21:50.536458 systemd[1]: Starting ensure-sysext.service... Jan 13 21:21:50.554451 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:21:50.571267 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)... Jan 13 21:21:50.571295 systemd[1]: Reloading... Jan 13 21:21:50.604849 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 21:21:50.607289 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 13 21:21:50.608864 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 21:21:50.609457 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 13 21:21:50.609599 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Jan 13 21:21:50.620859 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:21:50.622218 systemd-tmpfiles[1238]: Skipping /boot Jan 13 21:21:50.663941 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 21:21:50.663966 systemd-tmpfiles[1238]: Skipping /boot Jan 13 21:21:50.741760 zram_generator::config[1264]: No configuration found. 
Jan 13 21:21:50.872754 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:21:50.938213 systemd[1]: Reloading finished in 366 ms. Jan 13 21:21:50.960127 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 21:21:50.977880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:21:51.001657 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:21:51.018608 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 21:21:51.035736 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 21:21:51.055307 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:21:51.073345 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:21:51.093379 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 21:21:51.100392 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:51.100771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:51.111415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 21:21:51.127562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:21:51.138498 augenrules[1329]: No rules Jan 13 21:21:51.144321 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:21:51.154459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 21:21:51.171756 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 21:21:51.181286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:51.184864 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:21:51.189407 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Jan 13 21:21:51.197235 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 21:21:51.209739 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:21:51.210233 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:21:51.222107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:21:51.223328 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:21:51.235857 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 21:21:51.247581 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 21:21:51.259187 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:21:51.259433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:21:51.269786 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:21:51.314245 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 21:21:51.351458 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:51.351895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 21:21:51.360973 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 13 21:21:51.379873 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 21:21:51.400888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 21:21:51.419567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 21:21:51.424532 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 13 21:21:51.436432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 21:21:51.450631 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 21:21:51.461817 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 21:21:51.482580 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 21:21:51.492306 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 21:21:51.492558 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 13 21:21:51.497245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 21:21:51.498482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 21:21:51.501467 systemd-resolved[1319]: Positive Trust Anchors: Jan 13 21:21:51.501515 systemd-resolved[1319]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:21:51.501584 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:21:51.510249 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 21:21:51.510502 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 21:21:51.517800 systemd-resolved[1319]: Defaulting to hostname 'linux'. Jan 13 21:21:51.521413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 21:21:51.522569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 21:21:51.535686 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:21:51.547050 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 21:21:51.548477 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 21:21:51.565803 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 21:21:51.599728 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 13 21:21:51.599929 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jan 13 21:21:51.631931 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 13 21:21:51.631984 kernel: ACPI: button: Power Button [PWRF] Jan 13 21:21:51.632373 systemd[1]: Finished ensure-sysext.service. 
Jan 13 21:21:51.648199 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Jan 13 21:21:51.649779 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 13 21:21:51.656326 kernel: ACPI: button: Sleep Button [SLPF] Jan 13 21:21:51.689104 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 13 21:21:51.699381 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:21:51.721397 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Jan 13 21:21:51.731318 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 21:21:51.731449 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 21:21:51.758175 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1343) Jan 13 21:21:51.820734 kernel: EDAC MC: Ver: 3.0.0 Jan 13 21:21:51.827745 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Jan 13 21:21:51.837228 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 21:21:51.851806 systemd-networkd[1374]: lo: Link UP Jan 13 21:21:51.852388 systemd-networkd[1374]: lo: Gained carrier Jan 13 21:21:51.857754 systemd-networkd[1374]: Enumeration completed Jan 13 21:21:51.857916 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:21:51.861030 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:21:51.861049 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 13 21:21:51.861841 systemd-networkd[1374]: eth0: Link UP Jan 13 21:21:51.861859 systemd-networkd[1374]: eth0: Gained carrier Jan 13 21:21:51.861885 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:21:51.868473 systemd[1]: Reached target network.target - Network. Jan 13 21:21:51.872253 systemd-networkd[1374]: eth0: DHCPv4 address 10.128.0.40/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jan 13 21:21:51.886981 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 21:21:51.910093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Jan 13 21:21:51.935675 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 21:21:51.954337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:21:51.962681 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 21:21:51.977045 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 21:21:51.997428 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 21:21:52.015865 lvm[1416]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:21:52.049667 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 21:21:52.050349 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 21:21:52.055742 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 21:21:52.072401 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 13 21:21:52.098709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 13 21:21:52.110784 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 13 21:21:52.123386 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:21:52.133537 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 21:21:52.145462 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 21:21:52.156614 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 21:21:52.166608 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 21:21:52.178363 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 21:21:52.189380 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 21:21:52.189443 systemd[1]: Reached target paths.target - Path Units. Jan 13 21:21:52.198310 systemd[1]: Reached target timers.target - Timer Units. Jan 13 21:21:52.207192 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 21:21:52.219001 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 21:21:52.231669 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 21:21:52.242334 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 21:21:52.252505 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:21:52.262344 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:21:52.271419 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:21:52.271480 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 21:21:52.283390 systemd[1]: Starting containerd.service - containerd container runtime... 
Jan 13 21:21:52.298427 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 21:21:52.324501 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 21:21:52.359311 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 21:21:52.380439 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 13 21:21:52.390369 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 21:21:52.395649 jq[1428]: false Jan 13 21:21:52.400441 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 21:21:52.403784 coreos-metadata[1426]: Jan 13 21:21:52.403 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Jan 13 21:21:52.406135 coreos-metadata[1426]: Jan 13 21:21:52.405 INFO Fetch successful Jan 13 21:21:52.406135 coreos-metadata[1426]: Jan 13 21:21:52.406 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Jan 13 21:21:52.407127 coreos-metadata[1426]: Jan 13 21:21:52.406 INFO Fetch successful Jan 13 21:21:52.407127 coreos-metadata[1426]: Jan 13 21:21:52.407 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Jan 13 21:21:52.407896 coreos-metadata[1426]: Jan 13 21:21:52.407 INFO Fetch successful Jan 13 21:21:52.407896 coreos-metadata[1426]: Jan 13 21:21:52.407 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Jan 13 21:21:52.408721 coreos-metadata[1426]: Jan 13 21:21:52.408 INFO Fetch successful Jan 13 21:21:52.416547 systemd[1]: Started ntpd.service - Network Time Service. Jan 13 21:21:52.434154 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 13 21:21:52.452401 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 21:21:52.470181 extend-filesystems[1431]: Found loop4 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found loop5 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found loop6 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found loop7 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found sda Jan 13 21:21:52.470181 extend-filesystems[1431]: Found sda1 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found sda2 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found sda3 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found usr Jan 13 21:21:52.470181 extend-filesystems[1431]: Found sda4 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found sda6 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found sda7 Jan 13 21:21:52.470181 extend-filesystems[1431]: Found sda9 Jan 13 21:21:52.470181 extend-filesystems[1431]: Checking size of /dev/sda9 Jan 13 21:21:52.598427 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Jan 13 21:21:52.598475 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Jan 13 21:21:52.598494 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1358) Jan 13 21:21:52.478853 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: ---------------------------------------------------- Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: corporation. Support and training for ntp-4 are Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: available at https://www.nwtime.org/support Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: ---------------------------------------------------- Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: proto: precision = 0.086 usec (-23) Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: basedate set to 2025-01-01 Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: gps base set to 2025-01-05 (week 2348) Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: Listen normally on 3 eth0 10.128.0.40:123 Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: Listen normally on 4 lo [::1]:123 Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:28%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:28%2#123 Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:28%2 Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: Listening on routing socket on fd #21 for interface updates Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:21:52.598598 ntpd[1433]: 13 Jan 21:21:52 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:21:52.487495 
dbus-daemon[1427]: [system] SELinux support is enabled Jan 13 21:21:52.600083 extend-filesystems[1431]: Resized partition /dev/sda9 Jan 13 21:21:52.497620 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 21:21:52.493112 dbus-daemon[1427]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1374 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 13 21:21:52.614529 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Jan 13 21:21:52.614529 extend-filesystems[1451]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 13 21:21:52.614529 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 13 21:21:52.614529 extend-filesystems[1451]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Jan 13 21:21:52.512247 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Jan 13 21:21:52.502345 ntpd[1433]: ntpd 4.2.8p17@1.4004-o Mon Jan 13 19:01:38 UTC 2025 (1): Starting Jan 13 21:21:52.677891 extend-filesystems[1431]: Resized filesystem in /dev/sda9 Jan 13 21:21:52.514390 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 21:21:52.502415 ntpd[1433]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 13 21:21:52.516518 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 21:21:52.502433 ntpd[1433]: ---------------------------------------------------- Jan 13 21:21:52.692643 update_engine[1452]: I20250113 21:21:52.683112 1452 main.cc:92] Flatcar Update Engine starting Jan 13 21:21:52.629328 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Jan 13 21:21:52.502450 ntpd[1433]: ntp-4 is maintained by Network Time Foundation, Jan 13 21:21:52.658653 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 13 21:21:52.694963 update_engine[1452]: I20250113 21:21:52.694604 1452 update_check_scheduler.cc:74] Next update check in 10m7s Jan 13 21:21:52.502464 ntpd[1433]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 13 21:21:52.671263 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Jan 13 21:21:52.502478 ntpd[1433]: corporation. Support and training for ntp-4 are Jan 13 21:21:52.671293 systemd-logind[1447]: Watching system buttons on /dev/input/event3 (Sleep Button) Jan 13 21:21:52.502493 ntpd[1433]: available at https://www.nwtime.org/support Jan 13 21:21:52.671322 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 13 21:21:52.502509 ntpd[1433]: ---------------------------------------------------- Jan 13 21:21:52.672363 systemd-logind[1447]: New seat seat0. 
Jan 13 21:21:52.506558 ntpd[1433]: proto: precision = 0.086 usec (-23) Jan 13 21:21:52.509135 ntpd[1433]: basedate set to 2025-01-01 Jan 13 21:21:52.509189 ntpd[1433]: gps base set to 2025-01-05 (week 2348) Jan 13 21:21:52.516534 ntpd[1433]: Listen and drop on 0 v6wildcard [::]:123 Jan 13 21:21:52.516603 ntpd[1433]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 13 21:21:52.516890 ntpd[1433]: Listen normally on 2 lo 127.0.0.1:123 Jan 13 21:21:52.516951 ntpd[1433]: Listen normally on 3 eth0 10.128.0.40:123 Jan 13 21:21:52.517010 ntpd[1433]: Listen normally on 4 lo [::1]:123 Jan 13 21:21:52.517075 ntpd[1433]: bind(21) AF_INET6 fe80::4001:aff:fe80:28%2#123 flags 0x11 failed: Cannot assign requested address Jan 13 21:21:52.517107 ntpd[1433]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:28%2#123 Jan 13 21:21:52.517129 ntpd[1433]: failed to init interface for address fe80::4001:aff:fe80:28%2 Jan 13 21:21:52.517222 ntpd[1433]: Listening on routing socket on fd #21 for interface updates Jan 13 21:21:52.518786 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:21:52.518821 ntpd[1433]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 13 21:21:52.706106 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 21:21:52.717753 jq[1460]: true Jan 13 21:21:52.731310 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 21:21:52.731598 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 21:21:52.732039 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:21:52.732354 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:21:52.742938 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 21:21:52.743192 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 21:21:52.762869 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 13 21:21:52.763103 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 21:21:52.804688 (ntainerd)[1466]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 13 21:21:52.822299 jq[1465]: true Jan 13 21:21:52.842953 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 21:21:52.864366 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 13 21:21:52.891598 systemd[1]: Started update-engine.service - Update Engine. Jan 13 21:21:52.908677 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 21:21:52.917688 tar[1463]: linux-amd64/helm Jan 13 21:21:52.919406 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 21:21:52.919687 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 21:21:52.919922 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 21:21:52.942482 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 13 21:21:52.953335 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 21:21:52.953611 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 21:21:52.977342 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 21:21:52.992176 bash[1495]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:21:52.998688 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 13 21:21:53.024188 systemd[1]: Starting sshkeys.service... Jan 13 21:21:53.105341 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 21:21:53.125121 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 21:21:53.279175 coreos-metadata[1500]: Jan 13 21:21:53.278 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Jan 13 21:21:53.288305 coreos-metadata[1500]: Jan 13 21:21:53.287 INFO Fetch failed with 404: resource not found Jan 13 21:21:53.288305 coreos-metadata[1500]: Jan 13 21:21:53.287 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Jan 13 21:21:53.288937 coreos-metadata[1500]: Jan 13 21:21:53.288 INFO Fetch successful Jan 13 21:21:53.288937 coreos-metadata[1500]: Jan 13 21:21:53.288 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Jan 13 21:21:53.302560 coreos-metadata[1500]: Jan 13 21:21:53.301 INFO Fetch failed with 404: resource not found Jan 13 21:21:53.302560 coreos-metadata[1500]: Jan 13 21:21:53.301 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Jan 13 21:21:53.302560 coreos-metadata[1500]: Jan 13 21:21:53.302 INFO Fetch failed with 404: resource not found Jan 13 21:21:53.302560 coreos-metadata[1500]: Jan 13 21:21:53.302 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Jan 13 21:21:53.306118 coreos-metadata[1500]: Jan 13 21:21:53.305 INFO Fetch successful Jan 13 21:21:53.313068 unknown[1500]: wrote ssh authorized keys file for user: core Jan 13 21:21:53.367709 systemd-networkd[1374]: eth0: Gained IPv6LL Jan 13 21:21:53.382654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Jan 13 21:21:53.383609 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 13 21:21:53.384776 dbus-daemon[1427]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1496 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 13 21:21:53.409570 update-ssh-keys[1510]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:21:53.425264 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 13 21:21:53.437616 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 13 21:21:53.450270 systemd[1]: Finished sshkeys.service.
Jan 13 21:21:53.467633 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:21:53.489348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:21:53.505539 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:21:53.517374 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Jan 13 21:21:53.535803 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 13 21:21:53.561170 init.sh[1519]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Jan 13 21:21:53.561170 init.sh[1519]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Jan 13 21:21:53.561170 init.sh[1519]: + /usr/bin/google_instance_setup
Jan 13 21:21:53.569989 locksmithd[1497]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:21:53.608803 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:21:53.620473 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:21:53.655645 polkitd[1521]: Started polkitd version 121
Jan 13 21:21:53.681482 containerd[1466]: time="2025-01-13T21:21:53.678524341Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:21:53.678928 polkitd[1521]: Loading rules from directory /etc/polkit-1/rules.d
Jan 13 21:21:53.679039 polkitd[1521]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 13 21:21:53.683604 polkitd[1521]: Finished loading, compiling and executing 2 rules
Jan 13 21:21:53.687594 dbus-daemon[1427]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 13 21:21:53.687843 systemd[1]: Started polkit.service - Authorization Manager.
Jan 13 21:21:53.688388 polkitd[1521]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 13 21:21:53.734400 systemd-hostnamed[1496]: Hostname set to (transient)
Jan 13 21:21:53.737992 systemd-resolved[1319]: System hostname changed to 'ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal'.
Jan 13 21:21:53.768301 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:21:53.789731 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:21:53.806608 systemd[1]: Started sshd@0-10.128.0.40:22-147.75.109.163:45682.service - OpenSSH per-connection server daemon (147.75.109.163:45682).
Jan 13 21:21:53.836867 containerd[1466]: time="2025-01-13T21:21:53.832660464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:21:53.846834 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:21:53.852333 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:21:53.857262 containerd[1466]: time="2025-01-13T21:21:53.857182105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:21:53.857466 containerd[1466]: time="2025-01-13T21:21:53.857437883Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:21:53.857593 containerd[1466]: time="2025-01-13T21:21:53.857572015Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:21:53.859247 containerd[1466]: time="2025-01-13T21:21:53.859132308Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:21:53.859632 containerd[1466]: time="2025-01-13T21:21:53.859602431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:21:53.860428 containerd[1466]: time="2025-01-13T21:21:53.860324202Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:21:53.860428 containerd[1466]: time="2025-01-13T21:21:53.860379686Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:21:53.866181 containerd[1466]: time="2025-01-13T21:21:53.861608274Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:21:53.866181 containerd[1466]: time="2025-01-13T21:21:53.861644897Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:21:53.866181 containerd[1466]: time="2025-01-13T21:21:53.861670158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:21:53.866181 containerd[1466]: time="2025-01-13T21:21:53.861688978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:21:53.866181 containerd[1466]: time="2025-01-13T21:21:53.861813380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:21:53.866181 containerd[1466]: time="2025-01-13T21:21:53.862102912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:21:53.868118 containerd[1466]: time="2025-01-13T21:21:53.868054669Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:21:53.868358 containerd[1466]: time="2025-01-13T21:21:53.868325680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:21:53.869463 containerd[1466]: time="2025-01-13T21:21:53.869404042Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:21:53.871132 containerd[1466]: time="2025-01-13T21:21:53.871030430Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:21:53.871683 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.887285063Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.887387176Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.887414712Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.887493597Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.887535595Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.887750513Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.888216792Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.888453922Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.888488464Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.888514931Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.888540194Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.888562071Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.888584835Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:21:53.889179 containerd[1466]: time="2025-01-13T21:21:53.888612347Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888641775Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888667701Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888692577Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888720216Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888797543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888828942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888855554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888898467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888923555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888949522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888973419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.888999854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.889037553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.889915 containerd[1466]: time="2025-01-13T21:21:53.889067096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.892687 containerd[1466]: time="2025-01-13T21:21:53.889090615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.892687 containerd[1466]: time="2025-01-13T21:21:53.889110835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.892687 containerd[1466]: time="2025-01-13T21:21:53.889132635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893497847Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893610422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893660415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893683154Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893787219Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893892486Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893913841Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893954146Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893973070Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.893993643Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.894029041Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:21:53.896514 containerd[1466]: time="2025-01-13T21:21:53.894047326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:21:53.898217 containerd[1466]: time="2025-01-13T21:21:53.896471165Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:21:53.898217 containerd[1466]: time="2025-01-13T21:21:53.897267958Z" level=info msg="Connect containerd service"
Jan 13 21:21:53.900601 containerd[1466]: time="2025-01-13T21:21:53.898585813Z" level=info msg="using legacy CRI server"
Jan 13 21:21:53.900601 containerd[1466]: time="2025-01-13T21:21:53.898613231Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:21:53.900601 containerd[1466]: time="2025-01-13T21:21:53.900276215Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:21:53.907199 containerd[1466]: time="2025-01-13T21:21:53.906586624Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:21:53.907199 containerd[1466]: time="2025-01-13T21:21:53.906724839Z" level=info msg="Start subscribing containerd event"
Jan 13 21:21:53.907199 containerd[1466]: time="2025-01-13T21:21:53.906795505Z" level=info msg="Start recovering state"
Jan 13 21:21:53.907199 containerd[1466]: time="2025-01-13T21:21:53.906893995Z" level=info msg="Start event monitor"
Jan 13 21:21:53.907199 containerd[1466]: time="2025-01-13T21:21:53.906923164Z" level=info msg="Start snapshots syncer"
Jan 13 21:21:53.907199 containerd[1466]: time="2025-01-13T21:21:53.906940251Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:21:53.907199 containerd[1466]: time="2025-01-13T21:21:53.906953017Z" level=info msg="Start streaming server"
Jan 13 21:21:53.914643 containerd[1466]: time="2025-01-13T21:21:53.911346090Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:21:53.914643 containerd[1466]: time="2025-01-13T21:21:53.912321682Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:21:53.914643 containerd[1466]: time="2025-01-13T21:21:53.912955959Z" level=info msg="containerd successfully booted in 0.239108s"
Jan 13 21:21:53.912814 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:21:53.966037 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:21:53.988228 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:21:54.003631 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 13 21:21:54.013656 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:21:54.250501 sshd[1549]: Accepted publickey for core from 147.75.109.163 port 45682 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:21:54.258107 sshd[1549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:54.286857 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:21:54.308778 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:21:54.327510 systemd-logind[1447]: New session 1 of user core.
Jan 13 21:21:54.362660 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:21:54.385599 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:21:54.427669 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:21:54.512581 tar[1463]: linux-amd64/LICENSE
Jan 13 21:21:54.513174 tar[1463]: linux-amd64/README.md
Jan 13 21:21:54.555679 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 21:21:54.673687 systemd[1564]: Queued start job for default target default.target.
Jan 13 21:21:54.680643 systemd[1564]: Created slice app.slice - User Application Slice.
Jan 13 21:21:54.680952 systemd[1564]: Reached target paths.target - Paths.
Jan 13 21:21:54.680970 systemd[1564]: Reached target timers.target - Timers.
Jan 13 21:21:54.683637 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:21:54.704258 instance-setup[1524]: INFO Running google_set_multiqueue.
Jan 13 21:21:54.716317 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:21:54.719521 systemd[1564]: Reached target sockets.target - Sockets.
Jan 13 21:21:54.719561 systemd[1564]: Reached target basic.target - Basic System.
Jan 13 21:21:54.719635 systemd[1564]: Reached target default.target - Main User Target.
Jan 13 21:21:54.719687 systemd[1564]: Startup finished in 273ms.
Jan 13 21:21:54.720215 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:21:54.729838 instance-setup[1524]: INFO Set channels for eth0 to 2.
Jan 13 21:21:54.738603 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:21:54.739117 instance-setup[1524]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Jan 13 21:21:54.741376 instance-setup[1524]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Jan 13 21:21:54.741787 instance-setup[1524]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Jan 13 21:21:54.743921 instance-setup[1524]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Jan 13 21:21:54.744717 instance-setup[1524]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Jan 13 21:21:54.747176 instance-setup[1524]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Jan 13 21:21:54.747410 instance-setup[1524]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Jan 13 21:21:54.750311 instance-setup[1524]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Jan 13 21:21:54.765961 instance-setup[1524]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jan 13 21:21:54.771830 instance-setup[1524]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type
Jan 13 21:21:54.773927 instance-setup[1524]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Jan 13 21:21:54.773988 instance-setup[1524]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Jan 13 21:21:54.798011 init.sh[1519]: + /usr/bin/google_metadata_script_runner --script-type startup
Jan 13 21:21:54.997549 systemd[1]: Started sshd@1-10.128.0.40:22-147.75.109.163:45686.service - OpenSSH per-connection server daemon (147.75.109.163:45686).
Jan 13 21:21:55.002305 startup-script[1604]: INFO Starting startup scripts.
Jan 13 21:21:55.021599 startup-script[1604]: INFO No startup scripts found in metadata.
Jan 13 21:21:55.021688 startup-script[1604]: INFO Finished running startup scripts.
Jan 13 21:21:55.082032 init.sh[1519]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Jan 13 21:21:55.082032 init.sh[1519]: + daemon_pids=()
Jan 13 21:21:55.083594 init.sh[1519]: + for d in accounts clock_skew network
Jan 13 21:21:55.083594 init.sh[1519]: + daemon_pids+=($!)
Jan 13 21:21:55.083594 init.sh[1519]: + for d in accounts clock_skew network
Jan 13 21:21:55.083594 init.sh[1519]: + daemon_pids+=($!)
Jan 13 21:21:55.083594 init.sh[1519]: + for d in accounts clock_skew network
Jan 13 21:21:55.083855 init.sh[1611]: + /usr/bin/google_accounts_daemon
Jan 13 21:21:55.085183 init.sh[1612]: + /usr/bin/google_clock_skew_daemon
Jan 13 21:21:55.086363 init.sh[1519]: + daemon_pids+=($!)
Jan 13 21:21:55.086363 init.sh[1519]: + NOTIFY_SOCKET=/run/systemd/notify
Jan 13 21:21:55.086363 init.sh[1519]: + /usr/bin/systemd-notify --ready
Jan 13 21:21:55.086493 init.sh[1613]: + /usr/bin/google_network_daemon
Jan 13 21:21:55.110879 systemd[1]: Started oem-gce.service - GCE Linux Agent.
Jan 13 21:21:55.128591 init.sh[1519]: + wait -n 1611 1612 1613
Jan 13 21:21:55.374339 sshd[1609]: Accepted publickey for core from 147.75.109.163 port 45686 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:21:55.375308 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:55.392229 systemd-logind[1447]: New session 2 of user core.
Jan 13 21:21:55.394369 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:21:55.477610 google-networking[1613]: INFO Starting Google Networking daemon.
Jan 13 21:21:55.499112 google-clock-skew[1612]: INFO Starting Google Clock Skew daemon.
Jan 13 21:21:55.502995 ntpd[1433]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:28%2]:123
Jan 13 21:21:55.503614 ntpd[1433]: 13 Jan 21:21:55 ntpd[1433]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:28%2]:123
Jan 13 21:21:55.511456 google-clock-skew[1612]: INFO Clock drift token has changed: 0.
Jan 13 21:21:55.568808 groupadd[1624]: group added to /etc/group: name=google-sudoers, GID=1000
Jan 13 21:21:55.572602 groupadd[1624]: group added to /etc/gshadow: name=google-sudoers
Jan 13 21:21:55.600424 sshd[1609]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:55.607882 systemd[1]: sshd@1-10.128.0.40:22-147.75.109.163:45686.service: Deactivated successfully.
Jan 13 21:21:55.609294 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:21:55.613228 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:21:55.615393 systemd-logind[1447]: Removed session 2.
Jan 13 21:21:56.000990 systemd-resolved[1319]: Clock change detected. Flushing caches.
Jan 13 21:21:56.003500 google-clock-skew[1612]: INFO Synced system time with hardware clock.
Jan 13 21:21:56.012319 groupadd[1624]: new group: name=google-sudoers, GID=1000
Jan 13 21:21:56.020384 systemd[1]: Started sshd@2-10.128.0.40:22-147.75.109.163:45692.service - OpenSSH per-connection server daemon (147.75.109.163:45692).
Jan 13 21:21:56.067775 google-accounts[1611]: INFO Starting Google Accounts daemon.
Jan 13 21:21:56.083131 google-accounts[1611]: WARNING OS Login not installed.
Jan 13 21:21:56.084951 google-accounts[1611]: INFO Creating a new user account for 0.
Jan 13 21:21:56.091458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:21:56.094983 init.sh[1642]: useradd: invalid user name '0': use --badname to ignore
Jan 13 21:21:56.095394 google-accounts[1611]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Jan 13 21:21:56.107841 (kubelet)[1644]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:21:56.110719 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:21:56.120935 systemd[1]: Startup finished in 1.027s (kernel) + 16.687s (initrd) + 9.372s (userspace) = 27.087s.
Jan 13 21:21:56.334302 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 45692 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:21:56.336661 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:21:56.343862 systemd-logind[1447]: New session 3 of user core.
Jan 13 21:21:56.352448 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:21:56.551501 sshd[1633]: pam_unix(sshd:session): session closed for user core
Jan 13 21:21:56.556015 systemd[1]: sshd@2-10.128.0.40:22-147.75.109.163:45692.service: Deactivated successfully.
Jan 13 21:21:56.559610 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 21:21:56.561669 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit.
Jan 13 21:21:56.564225 systemd-logind[1447]: Removed session 3.
Jan 13 21:21:57.016677 kubelet[1644]: E0113 21:21:57.016599 1644 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:21:57.019700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:21:57.019956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:21:57.020504 systemd[1]: kubelet.service: Consumed 1.244s CPU time.
Jan 13 21:22:06.611662 systemd[1]: Started sshd@3-10.128.0.40:22-147.75.109.163:55510.service - OpenSSH per-connection server daemon (147.75.109.163:55510).
Jan 13 21:22:06.898849 sshd[1661]: Accepted publickey for core from 147.75.109.163 port 55510 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:22:06.900721 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:06.907143 systemd-logind[1447]: New session 4 of user core.
Jan 13 21:22:06.912455 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:22:07.065764 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:22:07.075103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:07.122504 sshd[1661]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:07.130822 systemd[1]: sshd@3-10.128.0.40:22-147.75.109.163:55510.service: Deactivated successfully.
Jan 13 21:22:07.133057 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit.
Jan 13 21:22:07.134507 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 21:22:07.137075 systemd-logind[1447]: Removed session 4.
Jan 13 21:22:07.182355 systemd[1]: Started sshd@4-10.128.0.40:22-147.75.109.163:55514.service - OpenSSH per-connection server daemon (147.75.109.163:55514).
Jan 13 21:22:07.378541 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:07.390911 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:22:07.447587 kubelet[1678]: E0113 21:22:07.447172 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:22:07.451818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:22:07.452074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:22:07.469340 sshd[1671]: Accepted publickey for core from 147.75.109.163 port 55514 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:22:07.471182 sshd[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:07.477636 systemd-logind[1447]: New session 5 of user core.
Jan 13 21:22:07.491547 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:22:07.679265 sshd[1671]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:07.684576 systemd[1]: sshd@4-10.128.0.40:22-147.75.109.163:55514.service: Deactivated successfully.
Jan 13 21:22:07.686849 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:22:07.687945 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:22:07.689592 systemd-logind[1447]: Removed session 5.
Jan 13 21:22:07.735648 systemd[1]: Started sshd@5-10.128.0.40:22-147.75.109.163:60056.service - OpenSSH per-connection server daemon (147.75.109.163:60056).
Jan 13 21:22:08.019109 sshd[1691]: Accepted publickey for core from 147.75.109.163 port 60056 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:22:08.021129 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:08.026958 systemd-logind[1447]: New session 6 of user core.
Jan 13 21:22:08.038556 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:22:08.232519 sshd[1691]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:08.237269 systemd[1]: sshd@5-10.128.0.40:22-147.75.109.163:60056.service: Deactivated successfully.
Jan 13 21:22:08.239647 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 21:22:08.241409 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit.
Jan 13 21:22:08.242856 systemd-logind[1447]: Removed session 6.
Jan 13 21:22:08.284084 systemd[1]: Started sshd@6-10.128.0.40:22-147.75.109.163:60060.service - OpenSSH per-connection server daemon (147.75.109.163:60060).
Jan 13 21:22:08.580105 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 60060 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:22:08.581905 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:08.587005 systemd-logind[1447]: New session 7 of user core.
Jan 13 21:22:08.597467 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 21:22:08.774072 sudo[1701]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 21:22:08.774593 sudo[1701]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:22:08.793089 sudo[1701]: pam_unix(sudo:session): session closed for user root
Jan 13 21:22:08.836386 sshd[1698]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:08.842147 systemd[1]: sshd@6-10.128.0.40:22-147.75.109.163:60060.service: Deactivated successfully.
Jan 13 21:22:08.844796 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 21:22:08.846908 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit.
Jan 13 21:22:08.849075 systemd-logind[1447]: Removed session 7.
Jan 13 21:22:08.891620 systemd[1]: Started sshd@7-10.128.0.40:22-147.75.109.163:60068.service - OpenSSH per-connection server daemon (147.75.109.163:60068).
Jan 13 21:22:09.184624 sshd[1706]: Accepted publickey for core from 147.75.109.163 port 60068 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:22:09.186143 sshd[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:09.192400 systemd-logind[1447]: New session 8 of user core.
Jan 13 21:22:09.199446 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:22:09.362867 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 21:22:09.363393 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:22:09.368501 sudo[1710]: pam_unix(sudo:session): session closed for user root
Jan 13 21:22:09.381942 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 13 21:22:09.382447 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:22:09.398598 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 13 21:22:09.402537 auditctl[1713]: No rules
Jan 13 21:22:09.403042 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 21:22:09.403339 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 13 21:22:09.414113 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:22:09.446252 augenrules[1731]: No rules
Jan 13 21:22:09.446977 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:22:09.448532 sudo[1709]: pam_unix(sudo:session): session closed for user root
Jan 13 21:22:09.492083 sshd[1706]: pam_unix(sshd:session): session closed for user core
Jan 13 21:22:09.496482 systemd[1]: sshd@7-10.128.0.40:22-147.75.109.163:60068.service: Deactivated successfully.
Jan 13 21:22:09.498734 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:22:09.500543 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:22:09.502062 systemd-logind[1447]: Removed session 8.
Jan 13 21:22:09.550023 systemd[1]: Started sshd@8-10.128.0.40:22-147.75.109.163:60080.service - OpenSSH per-connection server daemon (147.75.109.163:60080).
Jan 13 21:22:09.829840 sshd[1739]: Accepted publickey for core from 147.75.109.163 port 60080 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI
Jan 13 21:22:09.831728 sshd[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:22:09.838284 systemd-logind[1447]: New session 9 of user core.
Jan 13 21:22:09.847437 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 21:22:10.008861 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 21:22:10.009386 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:22:10.464618 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 21:22:10.467989 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 21:22:10.928581 dockerd[1758]: time="2025-01-13T21:22:10.928510042Z" level=info msg="Starting up"
Jan 13 21:22:11.053384 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1212034950-merged.mount: Deactivated successfully.
Jan 13 21:22:11.092988 systemd[1]: var-lib-docker-metacopy\x2dcheck1325280621-merged.mount: Deactivated successfully.
Jan 13 21:22:11.114709 dockerd[1758]: time="2025-01-13T21:22:11.114395929Z" level=info msg="Loading containers: start."
Jan 13 21:22:11.273337 kernel: Initializing XFRM netlink socket
Jan 13 21:22:11.376843 systemd-networkd[1374]: docker0: Link UP
Jan 13 21:22:11.397524 dockerd[1758]: time="2025-01-13T21:22:11.397454088Z" level=info msg="Loading containers: done."
Jan 13 21:22:11.424709 dockerd[1758]: time="2025-01-13T21:22:11.424646243Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 21:22:11.425023 dockerd[1758]: time="2025-01-13T21:22:11.424804580Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 13 21:22:11.425023 dockerd[1758]: time="2025-01-13T21:22:11.424969136Z" level=info msg="Daemon has completed initialization"
Jan 13 21:22:11.465688 dockerd[1758]: time="2025-01-13T21:22:11.465532203Z" level=info msg="API listen on /run/docker.sock"
Jan 13 21:22:11.466123 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 21:22:12.047396 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1117862164-merged.mount: Deactivated successfully.
Jan 13 21:22:12.391230 containerd[1466]: time="2025-01-13T21:22:12.391152347Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Jan 13 21:22:12.909699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532127819.mount: Deactivated successfully.
Jan 13 21:22:14.410990 containerd[1466]: time="2025-01-13T21:22:14.410905064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:14.412706 containerd[1466]: time="2025-01-13T21:22:14.412632302Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=27982111"
Jan 13 21:22:14.415001 containerd[1466]: time="2025-01-13T21:22:14.414416823Z" level=info msg="ImageCreate event name:\"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:14.418805 containerd[1466]: time="2025-01-13T21:22:14.418745688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:14.420384 containerd[1466]: time="2025-01-13T21:22:14.420335749Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"27972283\" in 2.029127889s"
Jan 13 21:22:14.420587 containerd[1466]: time="2025-01-13T21:22:14.420560195Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Jan 13 21:22:14.424913 containerd[1466]: time="2025-01-13T21:22:14.424861006Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Jan 13 21:22:15.818492 containerd[1466]: time="2025-01-13T21:22:15.818417280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:15.820050 containerd[1466]: time="2025-01-13T21:22:15.819971081Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=24704091"
Jan 13 21:22:15.822036 containerd[1466]: time="2025-01-13T21:22:15.821956932Z" level=info msg="ImageCreate event name:\"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:15.829817 containerd[1466]: time="2025-01-13T21:22:15.829191223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:15.830821 containerd[1466]: time="2025-01-13T21:22:15.830766810Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"26147269\" in 1.405685088s"
Jan 13 21:22:15.830961 containerd[1466]: time="2025-01-13T21:22:15.830826337Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Jan 13 21:22:15.831503 containerd[1466]: time="2025-01-13T21:22:15.831457010Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Jan 13 21:22:17.032540 containerd[1466]: time="2025-01-13T21:22:17.032462221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:17.033992 containerd[1466]: time="2025-01-13T21:22:17.033912877Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=18653983"
Jan 13 21:22:17.035750 containerd[1466]: time="2025-01-13T21:22:17.035682149Z" level=info msg="ImageCreate event name:\"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:17.041256 containerd[1466]: time="2025-01-13T21:22:17.040118146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:17.042961 containerd[1466]: time="2025-01-13T21:22:17.041710456Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"20097197\" in 1.210209967s"
Jan 13 21:22:17.042961 containerd[1466]: time="2025-01-13T21:22:17.041852295Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Jan 13 21:22:17.042961 containerd[1466]: time="2025-01-13T21:22:17.042522339Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 21:22:17.565763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 21:22:17.577574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:17.999719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:18.009839 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:22:18.097592 kubelet[1969]: E0113 21:22:18.097535 1969 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:22:18.101791 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:22:18.102081 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:22:18.537134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1853105706.mount: Deactivated successfully.
Jan 13 21:22:19.174175 containerd[1466]: time="2025-01-13T21:22:19.174099483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:19.175750 containerd[1466]: time="2025-01-13T21:22:19.175672244Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=30232138"
Jan 13 21:22:19.177569 containerd[1466]: time="2025-01-13T21:22:19.177490783Z" level=info msg="ImageCreate event name:\"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:19.183336 containerd[1466]: time="2025-01-13T21:22:19.181768108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:19.183336 containerd[1466]: time="2025-01-13T21:22:19.182751994Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"30229262\" in 2.140190332s"
Jan 13 21:22:19.183336 containerd[1466]: time="2025-01-13T21:22:19.182799524Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Jan 13 21:22:19.183820 containerd[1466]: time="2025-01-13T21:22:19.183774910Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 21:22:19.634044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929088323.mount: Deactivated successfully.
Jan 13 21:22:20.730052 containerd[1466]: time="2025-01-13T21:22:20.729996653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:20.731899 containerd[1466]: time="2025-01-13T21:22:20.731818187Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419"
Jan 13 21:22:20.733189 containerd[1466]: time="2025-01-13T21:22:20.733147965Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:20.738672 containerd[1466]: time="2025-01-13T21:22:20.738281811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:20.740063 containerd[1466]: time="2025-01-13T21:22:20.740015265Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.55619667s"
Jan 13 21:22:20.740184 containerd[1466]: time="2025-01-13T21:22:20.740069726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 21:22:20.741072 containerd[1466]: time="2025-01-13T21:22:20.740818542Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 13 21:22:21.207070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272169051.mount: Deactivated successfully.
Jan 13 21:22:21.213460 containerd[1466]: time="2025-01-13T21:22:21.213395376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:21.214637 containerd[1466]: time="2025-01-13T21:22:21.214540174Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Jan 13 21:22:21.216641 containerd[1466]: time="2025-01-13T21:22:21.216544764Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:21.221906 containerd[1466]: time="2025-01-13T21:22:21.220628334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:21.221906 containerd[1466]: time="2025-01-13T21:22:21.221739504Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 480.876758ms"
Jan 13 21:22:21.221906 containerd[1466]: time="2025-01-13T21:22:21.221783820Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Jan 13 21:22:21.222891 containerd[1466]: time="2025-01-13T21:22:21.222837569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 13 21:22:21.655642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3611294513.mount: Deactivated successfully.
Jan 13 21:22:23.822436 containerd[1466]: time="2025-01-13T21:22:23.822361001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:23.824165 containerd[1466]: time="2025-01-13T21:22:23.824067987Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56786556"
Jan 13 21:22:23.825871 containerd[1466]: time="2025-01-13T21:22:23.825796439Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:23.835330 containerd[1466]: time="2025-01-13T21:22:23.835248792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:23.838708 containerd[1466]: time="2025-01-13T21:22:23.836881214Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.613997885s"
Jan 13 21:22:23.838708 containerd[1466]: time="2025-01-13T21:22:23.836934956Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 13 21:22:24.135511 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 13 21:22:27.854368 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:27.860599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:27.904693 systemd[1]: Reloading requested from client PID 2111 ('systemctl') (unit session-9.scope)...
Jan 13 21:22:27.904716 systemd[1]: Reloading...
Jan 13 21:22:28.047325 zram_generator::config[2147]: No configuration found.
Jan 13 21:22:28.226935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:22:28.342972 systemd[1]: Reloading finished in 437 ms.
Jan 13 21:22:28.403674 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 21:22:28.404525 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 21:22:28.404959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:28.414927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:22:28.797357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:22:28.809845 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:22:28.865950 kubelet[2199]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:22:28.865950 kubelet[2199]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:22:28.865950 kubelet[2199]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:22:28.866569 kubelet[2199]: I0113 21:22:28.866045 2199 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:22:29.736374 kubelet[2199]: I0113 21:22:29.736315 2199 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 21:22:29.736374 kubelet[2199]: I0113 21:22:29.736353 2199 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:22:29.736761 kubelet[2199]: I0113 21:22:29.736722 2199 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 21:22:29.773581 kubelet[2199]: E0113 21:22:29.773521 2199 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError"
Jan 13 21:22:29.777127 kubelet[2199]: I0113 21:22:29.776939 2199 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:22:29.795692 kubelet[2199]: E0113 21:22:29.795628 2199 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 21:22:29.795692 kubelet[2199]: I0113 21:22:29.795679 2199 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 21:22:29.801280 kubelet[2199]: I0113 21:22:29.801251 2199 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:22:29.803707 kubelet[2199]: I0113 21:22:29.803661 2199 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 21:22:29.804042 kubelet[2199]: I0113 21:22:29.803988 2199 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:22:29.804349 kubelet[2199]: I0113 21:22:29.804034 2199 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 21:22:29.804563 kubelet[2199]: I0113 21:22:29.804362 2199 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:22:29.804563 kubelet[2199]: I0113 21:22:29.804379 2199 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 21:22:29.804563 kubelet[2199]: I0113 21:22:29.804515 2199 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:22:29.808390 kubelet[2199]: I0113 21:22:29.808348 2199 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 21:22:29.808390 kubelet[2199]: I0113 21:22:29.808384 2199 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:22:29.808529 kubelet[2199]: I0113 21:22:29.808436 2199 kubelet.go:314] "Adding apiserver pod source"
Jan 13 21:22:29.808529 kubelet[2199]: I0113 21:22:29.808459 2199 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:22:29.815223 kubelet[2199]: W0113 21:22:29.813755 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.40:6443: connect: connection refused
Jan 13 21:22:29.815223 kubelet[2199]: E0113 21:22:29.813845 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError"
Jan 13 21:22:29.815223 kubelet[2199]: W0113 21:22:29.815189 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.40:6443: connect: connection refused
Jan 13 21:22:29.815426 kubelet[2199]: E0113 21:22:29.815250 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError"
Jan 13 21:22:29.815917 kubelet[2199]: I0113 21:22:29.815879 2199 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:22:29.818722 kubelet[2199]: I0113 21:22:29.818676 2199 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 21:22:29.820317 kubelet[2199]: W0113 21:22:29.820269 2199 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 21:22:29.824252 kubelet[2199]: I0113 21:22:29.823843 2199 server.go:1269] "Started kubelet"
Jan 13 21:22:29.826334 kubelet[2199]: I0113 21:22:29.826278 2199 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 21:22:29.827727 kubelet[2199]: I0113 21:22:29.827700 2199 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 21:22:29.831745 kubelet[2199]: I0113 21:22:29.830726 2199 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 21:22:29.831745 kubelet[2199]: I0113 21:22:29.831276 2199 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 21:22:29.831745 kubelet[2199]: I0113 21:22:29.831567 2199 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 21:22:29.838060 kubelet[2199]: E0113 21:22:29.834091 2199 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.40:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal.181a5d71d6f4f017 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,UID:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:22:29.823811607 +0000 UTC m=+1.008349205,LastTimestamp:2025-01-13 21:22:29.823811607 +0000 UTC m=+1.008349205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,}"
Jan 13 21:22:29.838413 kubelet[2199]: I0113 21:22:29.838279 2199 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 21:22:29.842958 kubelet[2199]: I0113 21:22:29.841178 2199 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 21:22:29.842958 kubelet[2199]: E0113 21:22:29.841478 2199 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" not found"
Jan 13 21:22:29.842958 kubelet[2199]: E0113 21:22:29.842280 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.40:6443: connect: connection refused" interval="200ms"
Jan 13 21:22:29.842958 kubelet[2199]: I0113 21:22:29.842347 2199 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 21:22:29.842958 kubelet[2199]: W0113 21:22:29.842765 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.40:6443: connect: connection refused
Jan 13 21:22:29.842958 kubelet[2199]: E0113 21:22:29.842833 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError"
Jan 13 21:22:29.842958 kubelet[2199]: I0113 21:22:29.842913 2199 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 21:22:29.848943 kubelet[2199]: E0113 21:22:29.848732 2199 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 21:22:29.849386 kubelet[2199]: I0113 21:22:29.849358 2199 factory.go:221] Registration of the containerd container factory successfully
Jan 13 21:22:29.849386 kubelet[2199]: I0113 21:22:29.849382 2199 factory.go:221] Registration of the systemd container factory successfully
Jan 13 21:22:29.849531 kubelet[2199]: I0113 21:22:29.849460 2199 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 21:22:29.874853 kubelet[2199]: I0113 21:22:29.874781 2199 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 21:22:29.876998 kubelet[2199]: I0113 21:22:29.876947 2199 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 21:22:29.876998 kubelet[2199]: I0113 21:22:29.876994 2199 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 21:22:29.877188 kubelet[2199]: I0113 21:22:29.877020 2199 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 21:22:29.877188 kubelet[2199]: E0113 21:22:29.877088 2199 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 21:22:29.884157 kubelet[2199]: W0113 21:22:29.884106 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.40:6443: connect: connection refused
Jan 13 21:22:29.884299 kubelet[2199]: E0113 21:22:29.884157 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.128.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:29.886815 kubelet[2199]: I0113 21:22:29.886786 2199 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:22:29.886815 kubelet[2199]: I0113 21:22:29.886811 2199 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:22:29.886988 kubelet[2199]: I0113 21:22:29.886835 2199 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:22:29.890375 kubelet[2199]: I0113 21:22:29.890337 2199 policy_none.go:49] "None policy: Start" Jan 13 21:22:29.891360 kubelet[2199]: I0113 21:22:29.891287 2199 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:22:29.891360 kubelet[2199]: I0113 21:22:29.891321 2199 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:22:29.899252 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:22:29.914526 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:22:29.930087 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 21:22:29.932273 kubelet[2199]: I0113 21:22:29.931765 2199 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:22:29.932273 kubelet[2199]: I0113 21:22:29.932031 2199 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:22:29.932273 kubelet[2199]: I0113 21:22:29.932050 2199 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:22:29.934299 kubelet[2199]: I0113 21:22:29.934264 2199 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:22:29.935559 kubelet[2199]: E0113 21:22:29.935510 2199 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" not found" Jan 13 21:22:30.001570 systemd[1]: Created slice kubepods-burstable-pod3b45228d0ed350bc8b99738859cba43f.slice - libcontainer container kubepods-burstable-pod3b45228d0ed350bc8b99738859cba43f.slice. Jan 13 21:22:30.016561 systemd[1]: Created slice kubepods-burstable-pod3d2a9f5b3721cf00128d9b04d92a9a2b.slice - libcontainer container kubepods-burstable-pod3d2a9f5b3721cf00128d9b04d92a9a2b.slice. Jan 13 21:22:30.031735 systemd[1]: Created slice kubepods-burstable-pod27f0df2c4074aec29e15420d492076da.slice - libcontainer container kubepods-burstable-pod27f0df2c4074aec29e15420d492076da.slice. 
Jan 13 21:22:30.036107 kubelet[2199]: I0113 21:22:30.036059 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.036599 kubelet[2199]: E0113 21:22:30.036537 2199 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.40:6443/api/v1/nodes\": dial tcp 10.128.0.40:6443: connect: connection refused" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.043282 kubelet[2199]: I0113 21:22:30.043247 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-ca-certs\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.043586 kubelet[2199]: I0113 21:22:30.043543 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.043683 kubelet[2199]: I0113 21:22:30.043589 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 
21:22:30.043683 kubelet[2199]: I0113 21:22:30.043621 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27f0df2c4074aec29e15420d492076da-kubeconfig\") pod \"kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"27f0df2c4074aec29e15420d492076da\") " pod="kube-system/kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.043683 kubelet[2199]: I0113 21:22:30.043648 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.043848 kubelet[2199]: I0113 21:22:30.043683 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.043848 kubelet[2199]: I0113 21:22:30.043729 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b45228d0ed350bc8b99738859cba43f-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3b45228d0ed350bc8b99738859cba43f\") " pod="kube-system/kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 
21:22:30.043848 kubelet[2199]: I0113 21:22:30.043758 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b45228d0ed350bc8b99738859cba43f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3b45228d0ed350bc8b99738859cba43f\") " pod="kube-system/kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.043848 kubelet[2199]: I0113 21:22:30.043810 2199 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b45228d0ed350bc8b99738859cba43f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3b45228d0ed350bc8b99738859cba43f\") " pod="kube-system/kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.044004 kubelet[2199]: E0113 21:22:30.043335 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.40:6443: connect: connection refused" interval="400ms" Jan 13 21:22:30.242261 kubelet[2199]: I0113 21:22:30.242186 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.242707 kubelet[2199]: E0113 21:22:30.242657 2199 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.40:6443/api/v1/nodes\": dial tcp 10.128.0.40:6443: connect: connection refused" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.313491 containerd[1466]: time="2025-01-13T21:22:30.313354735Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,Uid:3b45228d0ed350bc8b99738859cba43f,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:30.330307 containerd[1466]: time="2025-01-13T21:22:30.330243429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,Uid:3d2a9f5b3721cf00128d9b04d92a9a2b,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:30.336043 containerd[1466]: time="2025-01-13T21:22:30.336001658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,Uid:27f0df2c4074aec29e15420d492076da,Namespace:kube-system,Attempt:0,}" Jan 13 21:22:30.445144 kubelet[2199]: E0113 21:22:30.445063 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.40:6443: connect: connection refused" interval="800ms" Jan 13 21:22:30.647373 kubelet[2199]: I0113 21:22:30.647319 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.647801 kubelet[2199]: E0113 21:22:30.647751 2199 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.40:6443/api/v1/nodes\": dial tcp 10.128.0.40:6443: connect: connection refused" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:30.684725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410652881.mount: Deactivated successfully. 
Jan 13 21:22:30.693138 containerd[1466]: time="2025-01-13T21:22:30.693072217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:30.694378 containerd[1466]: time="2025-01-13T21:22:30.694306942Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=313954" Jan 13 21:22:30.695977 containerd[1466]: time="2025-01-13T21:22:30.695921915Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:30.697658 containerd[1466]: time="2025-01-13T21:22:30.697552257Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:30.698933 containerd[1466]: time="2025-01-13T21:22:30.698769668Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:22:30.698933 containerd[1466]: time="2025-01-13T21:22:30.698876974Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:30.700451 containerd[1466]: time="2025-01-13T21:22:30.700404967Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:22:30.702324 containerd[1466]: time="2025-01-13T21:22:30.702263615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:22:30.704323 
containerd[1466]: time="2025-01-13T21:22:30.704285143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 390.82259ms" Jan 13 21:22:30.707094 containerd[1466]: time="2025-01-13T21:22:30.706591369Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 370.507473ms" Jan 13 21:22:30.720241 containerd[1466]: time="2025-01-13T21:22:30.719864362Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 389.528736ms" Jan 13 21:22:30.743335 kubelet[2199]: W0113 21:22:30.742458 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.40:6443: connect: connection refused Jan 13 21:22:30.743335 kubelet[2199]: E0113 21:22:30.742563 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:30.922258 containerd[1466]: 
time="2025-01-13T21:22:30.921413702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:30.927532 containerd[1466]: time="2025-01-13T21:22:30.927434344Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:30.927977 containerd[1466]: time="2025-01-13T21:22:30.927924841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.930257 containerd[1466]: time="2025-01-13T21:22:30.929819678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:30.930257 containerd[1466]: time="2025-01-13T21:22:30.929911949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:30.930257 containerd[1466]: time="2025-01-13T21:22:30.929940247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.930257 containerd[1466]: time="2025-01-13T21:22:30.930083169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.930692 containerd[1466]: time="2025-01-13T21:22:30.929409014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.935257 containerd[1466]: time="2025-01-13T21:22:30.934230316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:22:30.935257 containerd[1466]: time="2025-01-13T21:22:30.934311352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:22:30.935257 containerd[1466]: time="2025-01-13T21:22:30.934350960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.935257 containerd[1466]: time="2025-01-13T21:22:30.934499247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:22:30.975439 systemd[1]: Started cri-containerd-5dcf25472982245e503459394dd1ec83ed0850137a0c30cc3714ef15acd477c4.scope - libcontainer container 5dcf25472982245e503459394dd1ec83ed0850137a0c30cc3714ef15acd477c4. Jan 13 21:22:30.991482 systemd[1]: Started cri-containerd-8bdb54468b0946d86445f6d9ff176c553e008009245a3476f33a708a761b8ed7.scope - libcontainer container 8bdb54468b0946d86445f6d9ff176c553e008009245a3476f33a708a761b8ed7. Jan 13 21:22:30.995416 systemd[1]: Started cri-containerd-d789f0de5921529640148c564916913ade4cd0a650e7ecf403afec6d9f408bfa.scope - libcontainer container d789f0de5921529640148c564916913ade4cd0a650e7ecf403afec6d9f408bfa. 
Jan 13 21:22:31.067940 kubelet[2199]: W0113 21:22:31.067864 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.40:6443: connect: connection refused Jan 13 21:22:31.069013 kubelet[2199]: E0113 21:22:31.068058 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:31.081391 containerd[1466]: time="2025-01-13T21:22:31.081281899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,Uid:3b45228d0ed350bc8b99738859cba43f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bdb54468b0946d86445f6d9ff176c553e008009245a3476f33a708a761b8ed7\"" Jan 13 21:22:31.087318 kubelet[2199]: E0113 21:22:31.087150 2199 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-21291" Jan 13 21:22:31.093213 containerd[1466]: time="2025-01-13T21:22:31.092988610Z" level=info msg="CreateContainer within sandbox \"8bdb54468b0946d86445f6d9ff176c553e008009245a3476f33a708a761b8ed7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:22:31.121182 containerd[1466]: time="2025-01-13T21:22:31.120935697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,Uid:3d2a9f5b3721cf00128d9b04d92a9a2b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"5dcf25472982245e503459394dd1ec83ed0850137a0c30cc3714ef15acd477c4\"" Jan 13 21:22:31.123861 kubelet[2199]: E0113 21:22:31.123820 2199 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flat" Jan 13 21:22:31.126304 containerd[1466]: time="2025-01-13T21:22:31.126062578Z" level=info msg="CreateContainer within sandbox \"5dcf25472982245e503459394dd1ec83ed0850137a0c30cc3714ef15acd477c4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:22:31.131099 containerd[1466]: time="2025-01-13T21:22:31.131054793Z" level=info msg="CreateContainer within sandbox \"8bdb54468b0946d86445f6d9ff176c553e008009245a3476f33a708a761b8ed7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7d4d21df0c7a3cb7be43bc20a96aa2b2312837243f269694d57f53a45a6701b2\"" Jan 13 21:22:31.132844 containerd[1466]: time="2025-01-13T21:22:31.132112969Z" level=info msg="StartContainer for \"7d4d21df0c7a3cb7be43bc20a96aa2b2312837243f269694d57f53a45a6701b2\"" Jan 13 21:22:31.143726 containerd[1466]: time="2025-01-13T21:22:31.143679599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,Uid:27f0df2c4074aec29e15420d492076da,Namespace:kube-system,Attempt:0,} returns sandbox id \"d789f0de5921529640148c564916913ade4cd0a650e7ecf403afec6d9f408bfa\"" Jan 13 21:22:31.146881 kubelet[2199]: E0113 21:22:31.146842 2199 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-21291" Jan 13 21:22:31.150113 containerd[1466]: time="2025-01-13T21:22:31.150064178Z" level=info msg="CreateContainer within 
sandbox \"5dcf25472982245e503459394dd1ec83ed0850137a0c30cc3714ef15acd477c4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"263340ae21c87cad7d9c46f342bc4872058b661d2750218db188a8949f0c19a2\"" Jan 13 21:22:31.150930 containerd[1466]: time="2025-01-13T21:22:31.150769984Z" level=info msg="StartContainer for \"263340ae21c87cad7d9c46f342bc4872058b661d2750218db188a8949f0c19a2\"" Jan 13 21:22:31.150930 containerd[1466]: time="2025-01-13T21:22:31.150811221Z" level=info msg="CreateContainer within sandbox \"d789f0de5921529640148c564916913ade4cd0a650e7ecf403afec6d9f408bfa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:22:31.177830 containerd[1466]: time="2025-01-13T21:22:31.177659298Z" level=info msg="CreateContainer within sandbox \"d789f0de5921529640148c564916913ade4cd0a650e7ecf403afec6d9f408bfa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"71bfcdebf9c8c903cc343af229473bff212c0db0c4019274d5d2fd497978969f\"" Jan 13 21:22:31.181480 containerd[1466]: time="2025-01-13T21:22:31.181363520Z" level=info msg="StartContainer for \"71bfcdebf9c8c903cc343af229473bff212c0db0c4019274d5d2fd497978969f\"" Jan 13 21:22:31.197476 systemd[1]: Started cri-containerd-7d4d21df0c7a3cb7be43bc20a96aa2b2312837243f269694d57f53a45a6701b2.scope - libcontainer container 7d4d21df0c7a3cb7be43bc20a96aa2b2312837243f269694d57f53a45a6701b2. Jan 13 21:22:31.218373 systemd[1]: Started cri-containerd-263340ae21c87cad7d9c46f342bc4872058b661d2750218db188a8949f0c19a2.scope - libcontainer container 263340ae21c87cad7d9c46f342bc4872058b661d2750218db188a8949f0c19a2. 
Jan 13 21:22:31.248726 kubelet[2199]: E0113 21:22:31.248659 2199 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.40:6443: connect: connection refused" interval="1.6s" Jan 13 21:22:31.264408 systemd[1]: Started cri-containerd-71bfcdebf9c8c903cc343af229473bff212c0db0c4019274d5d2fd497978969f.scope - libcontainer container 71bfcdebf9c8c903cc343af229473bff212c0db0c4019274d5d2fd497978969f. Jan 13 21:22:31.322226 containerd[1466]: time="2025-01-13T21:22:31.322044075Z" level=info msg="StartContainer for \"7d4d21df0c7a3cb7be43bc20a96aa2b2312837243f269694d57f53a45a6701b2\" returns successfully" Jan 13 21:22:31.338988 containerd[1466]: time="2025-01-13T21:22:31.338920093Z" level=info msg="StartContainer for \"263340ae21c87cad7d9c46f342bc4872058b661d2750218db188a8949f0c19a2\" returns successfully" Jan 13 21:22:31.345103 kubelet[2199]: W0113 21:22:31.344954 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.40:6443: connect: connection refused Jan 13 21:22:31.345103 kubelet[2199]: E0113 21:22:31.345048 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:31.388289 kubelet[2199]: W0113 21:22:31.387611 2199 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.128.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.40:6443: connect: connection refused Jan 13 21:22:31.388710 kubelet[2199]: E0113 21:22:31.388382 2199 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.40:6443: connect: connection refused" logger="UnhandledError" Jan 13 21:22:31.391390 containerd[1466]: time="2025-01-13T21:22:31.391330224Z" level=info msg="StartContainer for \"71bfcdebf9c8c903cc343af229473bff212c0db0c4019274d5d2fd497978969f\" returns successfully" Jan 13 21:22:31.453678 kubelet[2199]: I0113 21:22:31.453530 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:31.454452 kubelet[2199]: E0113 21:22:31.454403 2199 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.40:6443/api/v1/nodes\": dial tcp 10.128.0.40:6443: connect: connection refused" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:33.059743 kubelet[2199]: I0113 21:22:33.059694 2199 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:34.570446 kubelet[2199]: E0113 21:22:34.570046 2199 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" not found" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:34.599507 kubelet[2199]: E0113 21:22:34.598476 2199 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal.181a5d71d6f4f017 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,UID:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:22:29.823811607 +0000 UTC m=+1.008349205,LastTimestamp:2025-01-13 21:22:29.823811607 +0000 UTC m=+1.008349205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,}" Jan 13 21:22:34.658192 kubelet[2199]: E0113 21:22:34.657786 2199 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal.181a5d71d870917b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,UID:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:22:29.848691067 +0000 UTC m=+1.033228667,LastTimestamp:2025-01-13 21:22:29.848691067 +0000 UTC m=+1.033228667,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,}" Jan 13 21:22:34.674278 kubelet[2199]: I0113 21:22:34.671989 2199 kubelet_node_status.go:75] "Successfully registered node" 
node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:34.722907 kubelet[2199]: E0113 21:22:34.722734 2199 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal.181a5d71daa53dc6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,UID:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:22:29.885697478 +0000 UTC m=+1.070235078,LastTimestamp:2025-01-13 21:22:29.885697478 +0000 UTC m=+1.070235078,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,}" Jan 13 21:22:34.776760 kubelet[2199]: E0113 21:22:34.776617 2199 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal.181a5d71daa55672 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,UID:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,},FirstTimestamp:2025-01-13 21:22:29.885703794 +0000 UTC m=+1.070241388,LastTimestamp:2025-01-13 21:22:29.885703794 +0000 UTC m=+1.070241388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal,}" Jan 13 21:22:34.818417 kubelet[2199]: I0113 21:22:34.818366 2199 apiserver.go:52] "Watching apiserver" Jan 13 21:22:34.843537 kubelet[2199]: I0113 21:22:34.843367 2199 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:22:35.115827 kubelet[2199]: E0113 21:22:35.115430 2199 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:36.783736 systemd[1]: Reloading requested from client PID 2475 ('systemctl') (unit session-9.scope)... Jan 13 21:22:36.783758 systemd[1]: Reloading... Jan 13 21:22:36.910431 zram_generator::config[2511]: No configuration found. Jan 13 21:22:37.080384 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:22:37.203509 systemd[1]: Reloading finished in 419 ms. Jan 13 21:22:37.261858 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:37.285879 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:22:37.286507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:22:37.286764 systemd[1]: kubelet.service: Consumed 1.513s CPU time, 118.6M memory peak, 0B memory swap peak. Jan 13 21:22:37.295652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:22:37.560375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:22:37.576830 (kubelet)[2563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:22:37.658644 kubelet[2563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:22:37.660257 kubelet[2563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:22:37.660257 kubelet[2563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:22:37.660257 kubelet[2563]: I0113 21:22:37.659379 2563 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:22:37.669737 kubelet[2563]: I0113 21:22:37.669677 2563 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 13 21:22:37.669737 kubelet[2563]: I0113 21:22:37.669709 2563 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:22:37.670527 kubelet[2563]: I0113 21:22:37.670066 2563 server.go:929] "Client rotation is on, will bootstrap in background" Jan 13 21:22:37.677108 kubelet[2563]: I0113 21:22:37.676413 2563 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 13 21:22:37.680720 kubelet[2563]: I0113 21:22:37.680688 2563 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:22:37.685541 kubelet[2563]: E0113 21:22:37.685483 2563 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 13 21:22:37.685541 kubelet[2563]: I0113 21:22:37.685528 2563 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 13 21:22:37.689004 kubelet[2563]: I0113 21:22:37.688968 2563 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:22:37.689255 kubelet[2563]: I0113 21:22:37.689130 2563 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 13 21:22:37.689405 kubelet[2563]: I0113 21:22:37.689359 2563 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:22:37.689692 kubelet[2563]: I0113 21:22:37.689405 2563 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 21:22:37.689692 kubelet[2563]: I0113 21:22:37.689688 2563 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:22:37.689958 kubelet[2563]: I0113 21:22:37.689705 2563 container_manager_linux.go:300] "Creating device plugin manager" Jan 13 21:22:37.689958 kubelet[2563]: I0113 21:22:37.689770 2563 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:22:37.689958 kubelet[2563]: I0113 
21:22:37.689944 2563 kubelet.go:408] "Attempting to sync node with API server" Jan 13 21:22:37.690319 kubelet[2563]: I0113 21:22:37.689962 2563 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:22:37.690319 kubelet[2563]: I0113 21:22:37.690005 2563 kubelet.go:314] "Adding apiserver pod source" Jan 13 21:22:37.690319 kubelet[2563]: I0113 21:22:37.690027 2563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:22:37.695222 kubelet[2563]: I0113 21:22:37.693566 2563 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:22:37.695222 kubelet[2563]: I0113 21:22:37.694227 2563 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:22:37.695222 kubelet[2563]: I0113 21:22:37.694775 2563 server.go:1269] "Started kubelet" Jan 13 21:22:37.703677 kubelet[2563]: I0113 21:22:37.703640 2563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:22:37.712120 kubelet[2563]: I0113 21:22:37.712069 2563 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:22:37.717691 kubelet[2563]: I0113 21:22:37.717663 2563 server.go:460] "Adding debug handlers to kubelet server" Jan 13 21:22:37.719733 kubelet[2563]: I0113 21:22:37.719664 2563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:22:37.720126 kubelet[2563]: I0113 21:22:37.720094 2563 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:22:37.721517 kubelet[2563]: I0113 21:22:37.721489 2563 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 21:22:37.724694 kubelet[2563]: I0113 21:22:37.724675 2563 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 13 
21:22:37.725912 kubelet[2563]: E0113 21:22:37.725885 2563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" not found" Jan 13 21:22:37.729759 kubelet[2563]: I0113 21:22:37.729739 2563 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 13 21:22:37.730095 kubelet[2563]: I0113 21:22:37.730078 2563 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:22:37.736713 kubelet[2563]: I0113 21:22:37.736681 2563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:22:37.743933 kubelet[2563]: I0113 21:22:37.743881 2563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:22:37.744233 kubelet[2563]: I0113 21:22:37.744187 2563 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:22:37.744405 kubelet[2563]: I0113 21:22:37.744367 2563 kubelet.go:2321] "Starting kubelet main sync loop" Jan 13 21:22:37.744640 kubelet[2563]: E0113 21:22:37.744613 2563 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:22:37.754196 kubelet[2563]: I0113 21:22:37.754161 2563 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:22:37.756130 kubelet[2563]: I0113 21:22:37.754441 2563 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:22:37.756130 kubelet[2563]: I0113 21:22:37.754548 2563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:22:37.761080 kubelet[2563]: E0113 21:22:37.760805 2563 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:22:37.813616 sudo[2593]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jan 13 21:22:37.815968 sudo[2593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jan 13 21:22:37.845522 kubelet[2563]: E0113 21:22:37.845477 2563 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:22:37.878498 kubelet[2563]: I0113 21:22:37.878459 2563 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:22:37.878498 kubelet[2563]: I0113 21:22:37.878490 2563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:22:37.878733 kubelet[2563]: I0113 21:22:37.878516 2563 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:22:37.878798 kubelet[2563]: I0113 21:22:37.878776 2563 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:22:37.878853 kubelet[2563]: I0113 21:22:37.878795 2563 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:22:37.878905 kubelet[2563]: I0113 21:22:37.878867 2563 policy_none.go:49] "None policy: Start" Jan 13 21:22:37.880997 kubelet[2563]: I0113 21:22:37.880965 2563 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:22:37.880997 kubelet[2563]: I0113 21:22:37.881005 2563 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:22:37.881715 kubelet[2563]: I0113 21:22:37.881307 2563 state_mem.go:75] "Updated machine memory state" Jan 13 21:22:37.890988 kubelet[2563]: I0113 21:22:37.890958 2563 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:22:37.893518 kubelet[2563]: I0113 21:22:37.891821 2563 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 21:22:37.893518 kubelet[2563]: I0113 21:22:37.891844 2563 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:22:37.896212 kubelet[2563]: I0113 21:22:37.896162 2563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:22:38.020081 kubelet[2563]: I0113 21:22:38.020030 2563 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.036237 update_engine[1452]: I20250113 21:22:38.033885 1452 update_attempter.cc:509] Updating boot flags... Jan 13 21:22:38.036789 kubelet[2563]: I0113 21:22:38.034802 2563 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.039579 kubelet[2563]: I0113 21:22:38.038063 2563 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.078917 kubelet[2563]: W0113 21:22:38.077736 2563 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:22:38.084505 kubelet[2563]: W0113 21:22:38.081723 2563 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:22:38.091961 kubelet[2563]: W0113 21:22:38.091916 2563 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jan 13 21:22:38.134818 kubelet[2563]: I0113 21:22:38.134350 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27f0df2c4074aec29e15420d492076da-kubeconfig\") pod 
\"kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"27f0df2c4074aec29e15420d492076da\") " pod="kube-system/kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.134818 kubelet[2563]: I0113 21:22:38.134404 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b45228d0ed350bc8b99738859cba43f-ca-certs\") pod \"kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3b45228d0ed350bc8b99738859cba43f\") " pod="kube-system/kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.134818 kubelet[2563]: I0113 21:22:38.134439 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b45228d0ed350bc8b99738859cba43f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3b45228d0ed350bc8b99738859cba43f\") " pod="kube-system/kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.134818 kubelet[2563]: I0113 21:22:38.134470 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b45228d0ed350bc8b99738859cba43f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3b45228d0ed350bc8b99738859cba43f\") " pod="kube-system/kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.136299 kubelet[2563]: I0113 21:22:38.134508 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-ca-certs\") pod 
\"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.136299 kubelet[2563]: I0113 21:22:38.134536 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.136299 kubelet[2563]: I0113 21:22:38.134565 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.136299 kubelet[2563]: I0113 21:22:38.134595 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.136557 kubelet[2563]: I0113 21:22:38.134627 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3d2a9f5b3721cf00128d9b04d92a9a2b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" (UID: \"3d2a9f5b3721cf00128d9b04d92a9a2b\") " pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" Jan 13 21:22:38.166247 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2607) Jan 13 21:22:38.382806 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2611) Jan 13 21:22:38.691417 kubelet[2563]: I0113 21:22:38.691371 2563 apiserver.go:52] "Watching apiserver" Jan 13 21:22:38.730600 kubelet[2563]: I0113 21:22:38.730549 2563 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 13 21:22:38.792068 kubelet[2563]: I0113 21:22:38.791749 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" podStartSLOduration=0.791690154 podStartE2EDuration="791.690154ms" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:38.790599146 +0000 UTC m=+1.207751024" watchObservedRunningTime="2025-01-13 21:22:38.791690154 +0000 UTC m=+1.208842014" Jan 13 21:22:38.793267 kubelet[2563]: I0113 21:22:38.792761 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" podStartSLOduration=0.792742156 podStartE2EDuration="792.742156ms" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:38.775464892 +0000 UTC m=+1.192616760" 
watchObservedRunningTime="2025-01-13 21:22:38.792742156 +0000 UTC m=+1.209894032" Jan 13 21:22:38.813054 kubelet[2563]: I0113 21:22:38.810816 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" podStartSLOduration=0.810792775 podStartE2EDuration="810.792775ms" podCreationTimestamp="2025-01-13 21:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:38.810373268 +0000 UTC m=+1.227525144" watchObservedRunningTime="2025-01-13 21:22:38.810792775 +0000 UTC m=+1.227944637" Jan 13 21:22:38.875005 sudo[2593]: pam_unix(sudo:session): session closed for user root Jan 13 21:22:41.059382 sudo[1742]: pam_unix(sudo:session): session closed for user root Jan 13 21:22:41.102796 sshd[1739]: pam_unix(sshd:session): session closed for user core Jan 13 21:22:41.107749 systemd[1]: sshd@8-10.128.0.40:22-147.75.109.163:60080.service: Deactivated successfully. Jan 13 21:22:41.110778 systemd[1]: session-9.scope: Deactivated successfully. Jan 13 21:22:41.111566 systemd[1]: session-9.scope: Consumed 7.232s CPU time, 157.8M memory peak, 0B memory swap peak. Jan 13 21:22:41.114485 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 13 21:22:41.117276 systemd-logind[1447]: Removed session 9. Jan 13 21:22:41.863368 kubelet[2563]: I0113 21:22:41.863326 2563 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:22:41.864381 containerd[1466]: time="2025-01-13T21:22:41.864304930Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:22:41.866290 kubelet[2563]: I0113 21:22:41.864917 2563 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:22:42.704238 systemd[1]: Created slice kubepods-besteffort-pod4ed18b2b_1d62_43fd_94fd_fb506f7c0f82.slice - libcontainer container kubepods-besteffort-pod4ed18b2b_1d62_43fd_94fd_fb506f7c0f82.slice. Jan 13 21:22:42.750449 systemd[1]: Created slice kubepods-burstable-pod8256be8e_e45b_4cb4_a574_7d75fb60126d.slice - libcontainer container kubepods-burstable-pod8256be8e_e45b_4cb4_a574_7d75fb60126d.slice. Jan 13 21:22:42.767033 kubelet[2563]: I0113 21:22:42.766995 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-hostproc\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.767240 kubelet[2563]: I0113 21:22:42.767044 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-lib-modules\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.767240 kubelet[2563]: I0113 21:22:42.767096 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-xtables-lock\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.767240 kubelet[2563]: I0113 21:22:42.767120 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-host-proc-sys-net\") pod \"cilium-6psvv\" (UID: 
\"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.768633 kubelet[2563]: I0113 21:22:42.768575 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ed18b2b-1d62-43fd-94fd-fb506f7c0f82-lib-modules\") pod \"kube-proxy-k6mcj\" (UID: \"4ed18b2b-1d62-43fd-94fd-fb506f7c0f82\") " pod="kube-system/kube-proxy-k6mcj" Jan 13 21:22:42.768770 kubelet[2563]: I0113 21:22:42.768660 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-run\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.768770 kubelet[2563]: I0113 21:22:42.768712 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ed18b2b-1d62-43fd-94fd-fb506f7c0f82-xtables-lock\") pod \"kube-proxy-k6mcj\" (UID: \"4ed18b2b-1d62-43fd-94fd-fb506f7c0f82\") " pod="kube-system/kube-proxy-k6mcj" Jan 13 21:22:42.768770 kubelet[2563]: I0113 21:22:42.768738 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-cgroup\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.768952 kubelet[2563]: I0113 21:22:42.768784 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cni-path\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.768952 kubelet[2563]: I0113 21:22:42.768814 2563 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8256be8e-e45b-4cb4-a574-7d75fb60126d-clustermesh-secrets\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.768952 kubelet[2563]: I0113 21:22:42.768842 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4ed18b2b-1d62-43fd-94fd-fb506f7c0f82-kube-proxy\") pod \"kube-proxy-k6mcj\" (UID: \"4ed18b2b-1d62-43fd-94fd-fb506f7c0f82\") " pod="kube-system/kube-proxy-k6mcj" Jan 13 21:22:42.768952 kubelet[2563]: I0113 21:22:42.768907 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rg8q\" (UniqueName: \"kubernetes.io/projected/4ed18b2b-1d62-43fd-94fd-fb506f7c0f82-kube-api-access-4rg8q\") pod \"kube-proxy-k6mcj\" (UID: \"4ed18b2b-1d62-43fd-94fd-fb506f7c0f82\") " pod="kube-system/kube-proxy-k6mcj" Jan 13 21:22:42.769167 kubelet[2563]: I0113 21:22:42.768957 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpjkd\" (UniqueName: \"kubernetes.io/projected/8256be8e-e45b-4cb4-a574-7d75fb60126d-kube-api-access-bpjkd\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.769167 kubelet[2563]: I0113 21:22:42.768985 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-etc-cni-netd\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.769167 kubelet[2563]: I0113 21:22:42.769011 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8256be8e-e45b-4cb4-a574-7d75fb60126d-hubble-tls\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.769167 kubelet[2563]: I0113 21:22:42.769060 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-host-proc-sys-kernel\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.769167 kubelet[2563]: I0113 21:22:42.769086 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-config-path\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:42.769167 kubelet[2563]: I0113 21:22:42.769139 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-bpf-maps\") pod \"cilium-6psvv\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " pod="kube-system/cilium-6psvv" Jan 13 21:22:43.008063 systemd[1]: Created slice kubepods-besteffort-podf0ab9c5b_b148_465d_b766_874fa90fc856.slice - libcontainer container kubepods-besteffort-podf0ab9c5b_b148_465d_b766_874fa90fc856.slice. 
Jan 13 21:22:43.015541 containerd[1466]: time="2025-01-13T21:22:43.015310765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k6mcj,Uid:4ed18b2b-1d62-43fd-94fd-fb506f7c0f82,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:43.062233 containerd[1466]: time="2025-01-13T21:22:43.059266576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6psvv,Uid:8256be8e-e45b-4cb4-a574-7d75fb60126d,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:43.072596 kubelet[2563]: I0113 21:22:43.072463 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gj7l\" (UniqueName: \"kubernetes.io/projected/f0ab9c5b-b148-465d-b766-874fa90fc856-kube-api-access-6gj7l\") pod \"cilium-operator-5d85765b45-hk2zc\" (UID: \"f0ab9c5b-b148-465d-b766-874fa90fc856\") " pod="kube-system/cilium-operator-5d85765b45-hk2zc"
Jan 13 21:22:43.072596 kubelet[2563]: I0113 21:22:43.072545 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0ab9c5b-b148-465d-b766-874fa90fc856-cilium-config-path\") pod \"cilium-operator-5d85765b45-hk2zc\" (UID: \"f0ab9c5b-b148-465d-b766-874fa90fc856\") " pod="kube-system/cilium-operator-5d85765b45-hk2zc"
Jan 13 21:22:43.106138 containerd[1466]: time="2025-01-13T21:22:43.104347017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:22:43.106138 containerd[1466]: time="2025-01-13T21:22:43.104425647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:22:43.106138 containerd[1466]: time="2025-01-13T21:22:43.104449086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:22:43.106138 containerd[1466]: time="2025-01-13T21:22:43.104570691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:22:43.115146 containerd[1466]: time="2025-01-13T21:22:43.114439232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:22:43.115146 containerd[1466]: time="2025-01-13T21:22:43.114527352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:22:43.115146 containerd[1466]: time="2025-01-13T21:22:43.114564127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:22:43.115146 containerd[1466]: time="2025-01-13T21:22:43.114686771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:22:43.140949 systemd[1]: Started cri-containerd-1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524.scope - libcontainer container 1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524.
Jan 13 21:22:43.147399 systemd[1]: Started cri-containerd-4b896f7f823c92ed6e28d2b5cb44313a8cf2256fbf479043b0de60695de7db7e.scope - libcontainer container 4b896f7f823c92ed6e28d2b5cb44313a8cf2256fbf479043b0de60695de7db7e.
Jan 13 21:22:43.198768 containerd[1466]: time="2025-01-13T21:22:43.198574923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6psvv,Uid:8256be8e-e45b-4cb4-a574-7d75fb60126d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\""
Jan 13 21:22:43.203079 containerd[1466]: time="2025-01-13T21:22:43.203022355Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 13 21:22:43.219602 containerd[1466]: time="2025-01-13T21:22:43.219555698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k6mcj,Uid:4ed18b2b-1d62-43fd-94fd-fb506f7c0f82,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b896f7f823c92ed6e28d2b5cb44313a8cf2256fbf479043b0de60695de7db7e\""
Jan 13 21:22:43.225036 containerd[1466]: time="2025-01-13T21:22:43.224989374Z" level=info msg="CreateContainer within sandbox \"4b896f7f823c92ed6e28d2b5cb44313a8cf2256fbf479043b0de60695de7db7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 21:22:43.253615 containerd[1466]: time="2025-01-13T21:22:43.253554937Z" level=info msg="CreateContainer within sandbox \"4b896f7f823c92ed6e28d2b5cb44313a8cf2256fbf479043b0de60695de7db7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"18a05bc91b1d0402e29078841f053c0f4f8a18524be7fbf96c49c74121a47cc5\""
Jan 13 21:22:43.254857 containerd[1466]: time="2025-01-13T21:22:43.254709816Z" level=info msg="StartContainer for \"18a05bc91b1d0402e29078841f053c0f4f8a18524be7fbf96c49c74121a47cc5\""
Jan 13 21:22:43.298850 systemd[1]: Started cri-containerd-18a05bc91b1d0402e29078841f053c0f4f8a18524be7fbf96c49c74121a47cc5.scope - libcontainer container 18a05bc91b1d0402e29078841f053c0f4f8a18524be7fbf96c49c74121a47cc5.
Jan 13 21:22:43.318111 containerd[1466]: time="2025-01-13T21:22:43.317873566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hk2zc,Uid:f0ab9c5b-b148-465d-b766-874fa90fc856,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:43.350507 containerd[1466]: time="2025-01-13T21:22:43.350180579Z" level=info msg="StartContainer for \"18a05bc91b1d0402e29078841f053c0f4f8a18524be7fbf96c49c74121a47cc5\" returns successfully"
Jan 13 21:22:43.382232 containerd[1466]: time="2025-01-13T21:22:43.380832180Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:22:43.382232 containerd[1466]: time="2025-01-13T21:22:43.380904671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:22:43.382232 containerd[1466]: time="2025-01-13T21:22:43.380952280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:22:43.382232 containerd[1466]: time="2025-01-13T21:22:43.381102054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:22:43.417178 systemd[1]: Started cri-containerd-dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce.scope - libcontainer container dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce.
Jan 13 21:22:43.497776 containerd[1466]: time="2025-01-13T21:22:43.497451839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-hk2zc,Uid:f0ab9c5b-b148-465d-b766-874fa90fc856,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\""
Jan 13 21:22:44.890222 kubelet[2563]: I0113 21:22:44.890126 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k6mcj" podStartSLOduration=2.890100921 podStartE2EDuration="2.890100921s" podCreationTimestamp="2025-01-13 21:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:22:43.846274407 +0000 UTC m=+6.263426283" watchObservedRunningTime="2025-01-13 21:22:44.890100921 +0000 UTC m=+7.307252796"
Jan 13 21:22:48.754092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1139571290.mount: Deactivated successfully.
Jan 13 21:22:51.532665 containerd[1466]: time="2025-01-13T21:22:51.532599968Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:51.534538 containerd[1466]: time="2025-01-13T21:22:51.534467382Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166735343"
Jan 13 21:22:51.535450 containerd[1466]: time="2025-01-13T21:22:51.535382601Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:51.537617 containerd[1466]: time="2025-01-13T21:22:51.537552358Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.334356802s"
Jan 13 21:22:51.537812 containerd[1466]: time="2025-01-13T21:22:51.537779447Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jan 13 21:22:51.539664 containerd[1466]: time="2025-01-13T21:22:51.539628235Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 21:22:51.541849 containerd[1466]: time="2025-01-13T21:22:51.541803041Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:22:51.563815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3305369677.mount: Deactivated successfully.
Jan 13 21:22:51.568025 containerd[1466]: time="2025-01-13T21:22:51.567974566Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\""
Jan 13 21:22:51.569020 containerd[1466]: time="2025-01-13T21:22:51.568961678Z" level=info msg="StartContainer for \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\""
Jan 13 21:22:51.615471 systemd[1]: Started cri-containerd-2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544.scope - libcontainer container 2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544.
Jan 13 21:22:51.658570 containerd[1466]: time="2025-01-13T21:22:51.658179626Z" level=info msg="StartContainer for \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\" returns successfully"
Jan 13 21:22:51.672689 systemd[1]: cri-containerd-2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544.scope: Deactivated successfully.
Jan 13 21:22:52.555668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544-rootfs.mount: Deactivated successfully.
Jan 13 21:22:53.516973 containerd[1466]: time="2025-01-13T21:22:53.516883781Z" level=info msg="shim disconnected" id=2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544 namespace=k8s.io
Jan 13 21:22:53.516973 containerd[1466]: time="2025-01-13T21:22:53.516970514Z" level=warning msg="cleaning up after shim disconnected" id=2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544 namespace=k8s.io
Jan 13 21:22:53.518363 containerd[1466]: time="2025-01-13T21:22:53.516984849Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:22:53.861570 containerd[1466]: time="2025-01-13T21:22:53.861502458Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:22:53.906643 containerd[1466]: time="2025-01-13T21:22:53.906570789Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\""
Jan 13 21:22:53.909239 containerd[1466]: time="2025-01-13T21:22:53.908413707Z" level=info msg="StartContainer for \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\""
Jan 13 21:22:53.987681 systemd[1]: Started cri-containerd-23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c.scope - libcontainer container 23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c.
Jan 13 21:22:54.055239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3660083324.mount: Deactivated successfully.
Jan 13 21:22:54.073335 containerd[1466]: time="2025-01-13T21:22:54.073284802Z" level=info msg="StartContainer for \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\" returns successfully"
Jan 13 21:22:54.094740 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:22:54.095195 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:22:54.096402 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:22:54.105279 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:22:54.107671 systemd[1]: cri-containerd-23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c.scope: Deactivated successfully.
Jan 13 21:22:54.159258 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:22:54.163432 containerd[1466]: time="2025-01-13T21:22:54.163191317Z" level=info msg="shim disconnected" id=23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c namespace=k8s.io
Jan 13 21:22:54.163432 containerd[1466]: time="2025-01-13T21:22:54.163315587Z" level=warning msg="cleaning up after shim disconnected" id=23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c namespace=k8s.io
Jan 13 21:22:54.163432 containerd[1466]: time="2025-01-13T21:22:54.163351252Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:22:54.188922 containerd[1466]: time="2025-01-13T21:22:54.188786239Z" level=warning msg="cleanup warnings time=\"2025-01-13T21:22:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 13 21:22:54.826776 containerd[1466]: time="2025-01-13T21:22:54.826704190Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:54.828313 containerd[1466]: time="2025-01-13T21:22:54.828217705Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18907197"
Jan 13 21:22:54.829898 containerd[1466]: time="2025-01-13T21:22:54.829827295Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:22:54.831967 containerd[1466]: time="2025-01-13T21:22:54.831795881Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.291907032s"
Jan 13 21:22:54.831967 containerd[1466]: time="2025-01-13T21:22:54.831848727Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jan 13 21:22:54.835828 containerd[1466]: time="2025-01-13T21:22:54.835669464Z" level=info msg="CreateContainer within sandbox \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:22:54.858340 containerd[1466]: time="2025-01-13T21:22:54.858276056Z" level=info msg="CreateContainer within sandbox \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\""
Jan 13 21:22:54.860320 containerd[1466]: time="2025-01-13T21:22:54.859922540Z" level=info msg="StartContainer for \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\""
Jan 13 21:22:54.876228 containerd[1466]: time="2025-01-13T21:22:54.875192427Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:22:54.899527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c-rootfs.mount: Deactivated successfully.
Jan 13 21:22:54.932039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2371571481.mount: Deactivated successfully.
Jan 13 21:22:54.951039 containerd[1466]: time="2025-01-13T21:22:54.950702409Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\""
Jan 13 21:22:54.953177 containerd[1466]: time="2025-01-13T21:22:54.953137906Z" level=info msg="StartContainer for \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\""
Jan 13 21:22:54.959036 systemd[1]: Started cri-containerd-76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6.scope - libcontainer container 76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6.
Jan 13 21:22:55.005465 systemd[1]: Started cri-containerd-87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3.scope - libcontainer container 87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3.
Jan 13 21:22:55.035837 containerd[1466]: time="2025-01-13T21:22:55.035506410Z" level=info msg="StartContainer for \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\" returns successfully"
Jan 13 21:22:55.070956 containerd[1466]: time="2025-01-13T21:22:55.070899152Z" level=info msg="StartContainer for \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\" returns successfully"
Jan 13 21:22:55.075281 systemd[1]: cri-containerd-87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3.scope: Deactivated successfully.
Jan 13 21:22:55.289887 containerd[1466]: time="2025-01-13T21:22:55.289421053Z" level=info msg="shim disconnected" id=87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3 namespace=k8s.io
Jan 13 21:22:55.289887 containerd[1466]: time="2025-01-13T21:22:55.289499538Z" level=warning msg="cleaning up after shim disconnected" id=87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3 namespace=k8s.io
Jan 13 21:22:55.289887 containerd[1466]: time="2025-01-13T21:22:55.289524235Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:22:55.892404 containerd[1466]: time="2025-01-13T21:22:55.892352079Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:22:55.921231 containerd[1466]: time="2025-01-13T21:22:55.919550169Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\""
Jan 13 21:22:55.921231 containerd[1466]: time="2025-01-13T21:22:55.920384672Z" level=info msg="StartContainer for \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\""
Jan 13 21:22:56.001494 systemd[1]: Started cri-containerd-7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4.scope - libcontainer container 7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4.
Jan 13 21:22:56.065243 containerd[1466]: time="2025-01-13T21:22:56.064538940Z" level=info msg="StartContainer for \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\" returns successfully"
Jan 13 21:22:56.070280 systemd[1]: cri-containerd-7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4.scope: Deactivated successfully.
Jan 13 21:22:56.153120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4-rootfs.mount: Deactivated successfully.
Jan 13 21:22:56.158163 containerd[1466]: time="2025-01-13T21:22:56.157823320Z" level=info msg="shim disconnected" id=7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4 namespace=k8s.io
Jan 13 21:22:56.158163 containerd[1466]: time="2025-01-13T21:22:56.157895724Z" level=warning msg="cleaning up after shim disconnected" id=7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4 namespace=k8s.io
Jan 13 21:22:56.158163 containerd[1466]: time="2025-01-13T21:22:56.157910745Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:22:56.255093 kubelet[2563]: I0113 21:22:56.254996 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-hk2zc" podStartSLOduration=2.92153815 podStartE2EDuration="14.254968135s" podCreationTimestamp="2025-01-13 21:22:42 +0000 UTC" firstStartedPulling="2025-01-13 21:22:43.499824794 +0000 UTC m=+5.916976643" lastFinishedPulling="2025-01-13 21:22:54.833254752 +0000 UTC m=+17.250406628" observedRunningTime="2025-01-13 21:22:56.096575357 +0000 UTC m=+18.513727230" watchObservedRunningTime="2025-01-13 21:22:56.254968135 +0000 UTC m=+18.672120004"
Jan 13 21:22:56.897695 containerd[1466]: time="2025-01-13T21:22:56.896812604Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:22:56.925663 containerd[1466]: time="2025-01-13T21:22:56.922960734Z" level=info msg="CreateContainer within sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\""
Jan 13 21:22:56.925663 containerd[1466]: time="2025-01-13T21:22:56.923848847Z" level=info msg="StartContainer for \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\""
Jan 13 21:22:56.978888 systemd[1]: Started cri-containerd-658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a.scope - libcontainer container 658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a.
Jan 13 21:22:57.022058 containerd[1466]: time="2025-01-13T21:22:57.021881482Z" level=info msg="StartContainer for \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\" returns successfully"
Jan 13 21:22:57.171770 kubelet[2563]: I0113 21:22:57.170602 2563 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 13 21:22:57.260108 kubelet[2563]: W0113 21:22:57.260048 2563 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal' and this object
Jan 13 21:22:57.264437 kubelet[2563]: E0113 21:22:57.263495 2563 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal' and this object" logger="UnhandledError"
Jan 13 21:22:57.281233 kubelet[2563]: I0113 21:22:57.277755 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aad7917b-6e41-430a-8002-7023bcb852bf-config-volume\") pod \"coredns-6f6b679f8f-7c67n\" (UID: \"aad7917b-6e41-430a-8002-7023bcb852bf\") " pod="kube-system/coredns-6f6b679f8f-7c67n"
Jan 13 21:22:57.284118 kubelet[2563]: I0113 21:22:57.282481 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc-config-volume\") pod \"coredns-6f6b679f8f-fgpcm\" (UID: \"69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc\") " pod="kube-system/coredns-6f6b679f8f-fgpcm"
Jan 13 21:22:57.282832 systemd[1]: Created slice kubepods-burstable-podaad7917b_6e41_430a_8002_7023bcb852bf.slice - libcontainer container kubepods-burstable-podaad7917b_6e41_430a_8002_7023bcb852bf.slice.
Jan 13 21:22:57.288285 kubelet[2563]: I0113 21:22:57.286538 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc9c4\" (UniqueName: \"kubernetes.io/projected/aad7917b-6e41-430a-8002-7023bcb852bf-kube-api-access-hc9c4\") pod \"coredns-6f6b679f8f-7c67n\" (UID: \"aad7917b-6e41-430a-8002-7023bcb852bf\") " pod="kube-system/coredns-6f6b679f8f-7c67n"
Jan 13 21:22:57.291374 kubelet[2563]: I0113 21:22:57.290311 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx5p9\" (UniqueName: \"kubernetes.io/projected/69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc-kube-api-access-rx5p9\") pod \"coredns-6f6b679f8f-fgpcm\" (UID: \"69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc\") " pod="kube-system/coredns-6f6b679f8f-fgpcm"
Jan 13 21:22:57.296723 systemd[1]: Created slice kubepods-burstable-pod69b4e1e2_68f7_4b21_ae1c_a80eb24c85fc.slice - libcontainer container kubepods-burstable-pod69b4e1e2_68f7_4b21_ae1c_a80eb24c85fc.slice.
Jan 13 21:22:57.957864 kubelet[2563]: I0113 21:22:57.955032 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6psvv" podStartSLOduration=7.618435022 podStartE2EDuration="15.955010795s" podCreationTimestamp="2025-01-13 21:22:42 +0000 UTC" firstStartedPulling="2025-01-13 21:22:43.202392452 +0000 UTC m=+5.619544311" lastFinishedPulling="2025-01-13 21:22:51.538968217 +0000 UTC m=+13.956120084" observedRunningTime="2025-01-13 21:22:57.95165926 +0000 UTC m=+20.368811202" watchObservedRunningTime="2025-01-13 21:22:57.955010795 +0000 UTC m=+20.372162668"
Jan 13 21:22:58.393563 kubelet[2563]: E0113 21:22:58.393510 2563 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:22:58.394145 kubelet[2563]: E0113 21:22:58.393642 2563 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/aad7917b-6e41-430a-8002-7023bcb852bf-config-volume podName:aad7917b-6e41-430a-8002-7023bcb852bf nodeName:}" failed. No retries permitted until 2025-01-13 21:22:58.893613712 +0000 UTC m=+21.310765584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aad7917b-6e41-430a-8002-7023bcb852bf-config-volume") pod "coredns-6f6b679f8f-7c67n" (UID: "aad7917b-6e41-430a-8002-7023bcb852bf") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:22:58.394145 kubelet[2563]: E0113 21:22:58.393509 2563 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:22:58.394145 kubelet[2563]: E0113 21:22:58.393994 2563 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc-config-volume podName:69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc nodeName:}" failed. No retries permitted until 2025-01-13 21:22:58.893969542 +0000 UTC m=+21.311121409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc-config-volume") pod "coredns-6f6b679f8f-fgpcm" (UID: "69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc") : failed to sync configmap cache: timed out waiting for the condition
Jan 13 21:22:59.093065 containerd[1466]: time="2025-01-13T21:22:59.092993741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7c67n,Uid:aad7917b-6e41-430a-8002-7023bcb852bf,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:59.102125 containerd[1466]: time="2025-01-13T21:22:59.102062832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgpcm,Uid:69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc,Namespace:kube-system,Attempt:0,}"
Jan 13 21:22:59.527905 systemd-networkd[1374]: cilium_host: Link UP
Jan 13 21:22:59.531561 systemd-networkd[1374]: cilium_net: Link UP
Jan 13 21:22:59.531883 systemd-networkd[1374]: cilium_net: Gained carrier
Jan 13 21:22:59.532176 systemd-networkd[1374]: cilium_host: Gained carrier
Jan 13 21:22:59.682060 systemd-networkd[1374]: cilium_vxlan: Link UP
Jan 13 21:22:59.682072 systemd-networkd[1374]: cilium_vxlan: Gained carrier
Jan 13 21:22:59.968242 kernel: NET: Registered PF_ALG protocol family
Jan 13 21:23:00.161400 systemd-networkd[1374]: cilium_net: Gained IPv6LL
Jan 13 21:23:00.481732 systemd-networkd[1374]: cilium_host: Gained IPv6LL
Jan 13 21:23:00.850089 systemd-networkd[1374]: lxc_health: Link UP
Jan 13 21:23:00.859426 systemd-networkd[1374]: lxc_health: Gained carrier
Jan 13 21:23:01.188280 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL
Jan 13 21:23:01.188741 systemd-networkd[1374]: lxc884142ae5b92: Link UP
Jan 13 21:23:01.200492 kernel: eth0: renamed from tmp6e342
Jan 13 21:23:01.211816 systemd-networkd[1374]: lxc00f3971aea67: Link UP
Jan 13 21:23:01.221686 kernel: eth0: renamed from tmp4b846
Jan 13 21:23:01.234967 systemd-networkd[1374]: lxc00f3971aea67: Gained carrier
Jan 13 21:23:01.240264 systemd-networkd[1374]: lxc884142ae5b92: Gained carrier
Jan 13 21:23:02.275516 systemd-networkd[1374]: lxc884142ae5b92: Gained IPv6LL
Jan 13 21:23:02.337568 systemd-networkd[1374]: lxc00f3971aea67: Gained IPv6LL
Jan 13 21:23:02.658007 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Jan 13 21:23:04.866318 ntpd[1433]: Listen normally on 7 cilium_host 192.168.0.41:123
Jan 13 21:23:04.867591 ntpd[1433]: 13 Jan 21:23:04 ntpd[1433]: Listen normally on 7 cilium_host 192.168.0.41:123
Jan 13 21:23:04.867591 ntpd[1433]: 13 Jan 21:23:04 ntpd[1433]: Listen normally on 8 cilium_net [fe80::2482:34ff:fe64:f781%4]:123
Jan 13 21:23:04.867591 ntpd[1433]: 13 Jan 21:23:04 ntpd[1433]: Listen normally on 9 cilium_host [fe80::14a2:a1ff:fe0c:bac1%5]:123
Jan 13 21:23:04.867591 ntpd[1433]: 13 Jan 21:23:04 ntpd[1433]: Listen normally on 10 cilium_vxlan [fe80::a09e:7dff:fede:a200%6]:123
Jan 13 21:23:04.867591 ntpd[1433]: 13 Jan 21:23:04 ntpd[1433]: Listen normally on 11 lxc_health [fe80::2408:b3ff:fe17:9163%8]:123
Jan 13 21:23:04.867591 ntpd[1433]: 13 Jan 21:23:04 ntpd[1433]: Listen normally on 12 lxc884142ae5b92 [fe80::68c5:e3ff:fee5:77d0%10]:123
Jan 13 21:23:04.867591 ntpd[1433]: 13 Jan 21:23:04 ntpd[1433]: Listen normally on 13 lxc00f3971aea67 [fe80::4023:e2ff:fecd:6a91%12]:123
Jan 13 21:23:04.867108 ntpd[1433]: Listen normally on 8 cilium_net [fe80::2482:34ff:fe64:f781%4]:123
Jan 13 21:23:04.867228 ntpd[1433]: Listen normally on 9 cilium_host [fe80::14a2:a1ff:fe0c:bac1%5]:123
Jan 13 21:23:04.867301 ntpd[1433]: Listen normally on 10 cilium_vxlan [fe80::a09e:7dff:fede:a200%6]:123
Jan 13 21:23:04.867376 ntpd[1433]: Listen normally on 11 lxc_health [fe80::2408:b3ff:fe17:9163%8]:123
Jan 13 21:23:04.867436 ntpd[1433]: Listen normally on 12 lxc884142ae5b92 [fe80::68c5:e3ff:fee5:77d0%10]:123
Jan 13 21:23:04.867491 ntpd[1433]: Listen normally on 13 lxc00f3971aea67 [fe80::4023:e2ff:fecd:6a91%12]:123
Jan 13 21:23:06.355722 containerd[1466]: time="2025-01-13T21:23:06.354324553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:06.355722 containerd[1466]: time="2025-01-13T21:23:06.354392463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:06.355722 containerd[1466]: time="2025-01-13T21:23:06.354442251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:06.355722 containerd[1466]: time="2025-01-13T21:23:06.354618878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:06.356722 containerd[1466]: time="2025-01-13T21:23:06.353566000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:23:06.356722 containerd[1466]: time="2025-01-13T21:23:06.353769964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:23:06.362617 containerd[1466]: time="2025-01-13T21:23:06.358851311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:06.374298 containerd[1466]: time="2025-01-13T21:23:06.371669876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:23:06.433485 systemd[1]: Started cri-containerd-4b846bea1c35b5986e06fcc1cd32783323a1661b59437da94c84d38e892923a8.scope - libcontainer container 4b846bea1c35b5986e06fcc1cd32783323a1661b59437da94c84d38e892923a8.
Jan 13 21:23:06.441852 systemd[1]: Started cri-containerd-6e342b23635faae7060a7b64aa49c22cc1b221d7799f4f4aeef242abbe6390bf.scope - libcontainer container 6e342b23635faae7060a7b64aa49c22cc1b221d7799f4f4aeef242abbe6390bf. Jan 13 21:23:06.578301 containerd[1466]: time="2025-01-13T21:23:06.578183812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgpcm,Uid:69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b846bea1c35b5986e06fcc1cd32783323a1661b59437da94c84d38e892923a8\"" Jan 13 21:23:06.582837 containerd[1466]: time="2025-01-13T21:23:06.582660351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7c67n,Uid:aad7917b-6e41-430a-8002-7023bcb852bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e342b23635faae7060a7b64aa49c22cc1b221d7799f4f4aeef242abbe6390bf\"" Jan 13 21:23:06.592138 containerd[1466]: time="2025-01-13T21:23:06.592071234Z" level=info msg="CreateContainer within sandbox \"4b846bea1c35b5986e06fcc1cd32783323a1661b59437da94c84d38e892923a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:23:06.593962 containerd[1466]: time="2025-01-13T21:23:06.593915130Z" level=info msg="CreateContainer within sandbox \"6e342b23635faae7060a7b64aa49c22cc1b221d7799f4f4aeef242abbe6390bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:23:06.634977 containerd[1466]: time="2025-01-13T21:23:06.634493456Z" level=info msg="CreateContainer within sandbox \"4b846bea1c35b5986e06fcc1cd32783323a1661b59437da94c84d38e892923a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4bdab0d1d10c37fd2139b28096b5bb1e4e9fe75d8927db6e405895f52b2a056a\"" Jan 13 21:23:06.637821 containerd[1466]: time="2025-01-13T21:23:06.637730262Z" level=info msg="StartContainer for \"4bdab0d1d10c37fd2139b28096b5bb1e4e9fe75d8927db6e405895f52b2a056a\"" Jan 13 21:23:06.641704 containerd[1466]: time="2025-01-13T21:23:06.641619675Z" level=info 
msg="CreateContainer within sandbox \"6e342b23635faae7060a7b64aa49c22cc1b221d7799f4f4aeef242abbe6390bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be3a06b68f61c383f7838a107ef7e3f6fa35f1b4a095afe23797630737208cc8\"" Jan 13 21:23:06.645496 containerd[1466]: time="2025-01-13T21:23:06.643405938Z" level=info msg="StartContainer for \"be3a06b68f61c383f7838a107ef7e3f6fa35f1b4a095afe23797630737208cc8\"" Jan 13 21:23:06.701473 systemd[1]: Started cri-containerd-4bdab0d1d10c37fd2139b28096b5bb1e4e9fe75d8927db6e405895f52b2a056a.scope - libcontainer container 4bdab0d1d10c37fd2139b28096b5bb1e4e9fe75d8927db6e405895f52b2a056a. Jan 13 21:23:06.703301 systemd[1]: Started cri-containerd-be3a06b68f61c383f7838a107ef7e3f6fa35f1b4a095afe23797630737208cc8.scope - libcontainer container be3a06b68f61c383f7838a107ef7e3f6fa35f1b4a095afe23797630737208cc8. Jan 13 21:23:06.767086 containerd[1466]: time="2025-01-13T21:23:06.766958583Z" level=info msg="StartContainer for \"4bdab0d1d10c37fd2139b28096b5bb1e4e9fe75d8927db6e405895f52b2a056a\" returns successfully" Jan 13 21:23:06.769913 containerd[1466]: time="2025-01-13T21:23:06.769787857Z" level=info msg="StartContainer for \"be3a06b68f61c383f7838a107ef7e3f6fa35f1b4a095afe23797630737208cc8\" returns successfully" Jan 13 21:23:06.964281 kubelet[2563]: I0113 21:23:06.964036 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fgpcm" podStartSLOduration=23.964010748 podStartE2EDuration="23.964010748s" podCreationTimestamp="2025-01-13 21:22:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:06.96236734 +0000 UTC m=+29.379519214" watchObservedRunningTime="2025-01-13 21:23:06.964010748 +0000 UTC m=+29.381162623" Jan 13 21:23:07.013248 kubelet[2563]: I0113 21:23:07.011861 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-6f6b679f8f-7c67n" podStartSLOduration=25.011831853 podStartE2EDuration="25.011831853s" podCreationTimestamp="2025-01-13 21:22:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:23:06.982835436 +0000 UTC m=+29.399987310" watchObservedRunningTime="2025-01-13 21:23:07.011831853 +0000 UTC m=+29.428983728" Jan 13 21:23:07.366420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578023650.mount: Deactivated successfully. Jan 13 21:23:31.538698 systemd[1]: Started sshd@9-10.128.0.40:22-147.75.109.163:34086.service - OpenSSH per-connection server daemon (147.75.109.163:34086). Jan 13 21:23:31.829519 sshd[3955]: Accepted publickey for core from 147.75.109.163 port 34086 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:23:31.831531 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:31.838309 systemd-logind[1447]: New session 10 of user core. Jan 13 21:23:31.842482 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 13 21:23:32.144912 sshd[3955]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:32.149737 systemd[1]: sshd@9-10.128.0.40:22-147.75.109.163:34086.service: Deactivated successfully. Jan 13 21:23:32.153569 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 21:23:32.156377 systemd-logind[1447]: Session 10 logged out. Waiting for processes to exit. Jan 13 21:23:32.158148 systemd-logind[1447]: Removed session 10. Jan 13 21:23:37.201643 systemd[1]: Started sshd@10-10.128.0.40:22-147.75.109.163:34090.service - OpenSSH per-connection server daemon (147.75.109.163:34090). 
Jan 13 21:23:37.500408 sshd[3971]: Accepted publickey for core from 147.75.109.163 port 34090 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:23:37.502302 sshd[3971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:37.508278 systemd-logind[1447]: New session 11 of user core. Jan 13 21:23:37.514435 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 13 21:23:37.809678 sshd[3971]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:37.814773 systemd[1]: sshd@10-10.128.0.40:22-147.75.109.163:34090.service: Deactivated successfully. Jan 13 21:23:37.817724 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 21:23:37.819867 systemd-logind[1447]: Session 11 logged out. Waiting for processes to exit. Jan 13 21:23:37.822765 systemd-logind[1447]: Removed session 11. Jan 13 21:23:42.866635 systemd[1]: Started sshd@11-10.128.0.40:22-147.75.109.163:40894.service - OpenSSH per-connection server daemon (147.75.109.163:40894). Jan 13 21:23:43.162365 sshd[3987]: Accepted publickey for core from 147.75.109.163 port 40894 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:23:43.164634 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:43.171557 systemd-logind[1447]: New session 12 of user core. Jan 13 21:23:43.178503 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 21:23:43.459715 sshd[3987]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:43.464657 systemd[1]: sshd@11-10.128.0.40:22-147.75.109.163:40894.service: Deactivated successfully. Jan 13 21:23:43.467878 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 21:23:43.470460 systemd-logind[1447]: Session 12 logged out. Waiting for processes to exit. Jan 13 21:23:43.472446 systemd-logind[1447]: Removed session 12. 
Jan 13 21:23:48.516648 systemd[1]: Started sshd@12-10.128.0.40:22-147.75.109.163:35214.service - OpenSSH per-connection server daemon (147.75.109.163:35214). Jan 13 21:23:48.818692 sshd[4003]: Accepted publickey for core from 147.75.109.163 port 35214 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:23:48.820711 sshd[4003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:48.827891 systemd-logind[1447]: New session 13 of user core. Jan 13 21:23:48.834524 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 13 21:23:49.113595 sshd[4003]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:49.119596 systemd[1]: sshd@12-10.128.0.40:22-147.75.109.163:35214.service: Deactivated successfully. Jan 13 21:23:49.122752 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 21:23:49.123913 systemd-logind[1447]: Session 13 logged out. Waiting for processes to exit. Jan 13 21:23:49.125751 systemd-logind[1447]: Removed session 13. Jan 13 21:23:49.166592 systemd[1]: Started sshd@13-10.128.0.40:22-147.75.109.163:35220.service - OpenSSH per-connection server daemon (147.75.109.163:35220). Jan 13 21:23:49.459057 sshd[4017]: Accepted publickey for core from 147.75.109.163 port 35220 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:23:49.461061 sshd[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:49.468920 systemd-logind[1447]: New session 14 of user core. Jan 13 21:23:49.473470 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 21:23:49.801386 sshd[4017]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:49.806470 systemd[1]: sshd@13-10.128.0.40:22-147.75.109.163:35220.service: Deactivated successfully. Jan 13 21:23:49.809313 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 21:23:49.811819 systemd-logind[1447]: Session 14 logged out. Waiting for processes to exit. 
Jan 13 21:23:49.813689 systemd-logind[1447]: Removed session 14. Jan 13 21:23:49.860600 systemd[1]: Started sshd@14-10.128.0.40:22-147.75.109.163:35234.service - OpenSSH per-connection server daemon (147.75.109.163:35234). Jan 13 21:23:50.151963 sshd[4028]: Accepted publickey for core from 147.75.109.163 port 35234 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:23:50.153975 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:50.160737 systemd-logind[1447]: New session 15 of user core. Jan 13 21:23:50.170519 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 21:23:50.445466 sshd[4028]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:50.451138 systemd[1]: sshd@14-10.128.0.40:22-147.75.109.163:35234.service: Deactivated successfully. Jan 13 21:23:50.454312 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 21:23:50.456558 systemd-logind[1447]: Session 15 logged out. Waiting for processes to exit. Jan 13 21:23:50.458390 systemd-logind[1447]: Removed session 15. Jan 13 21:23:55.501670 systemd[1]: Started sshd@15-10.128.0.40:22-147.75.109.163:35246.service - OpenSSH per-connection server daemon (147.75.109.163:35246). Jan 13 21:23:55.785152 sshd[4041]: Accepted publickey for core from 147.75.109.163 port 35246 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:23:55.787270 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:23:55.793766 systemd-logind[1447]: New session 16 of user core. Jan 13 21:23:55.801552 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 21:23:56.083073 sshd[4041]: pam_unix(sshd:session): session closed for user core Jan 13 21:23:56.089920 systemd[1]: sshd@15-10.128.0.40:22-147.75.109.163:35246.service: Deactivated successfully. Jan 13 21:23:56.092942 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 13 21:23:56.095171 systemd-logind[1447]: Session 16 logged out. Waiting for processes to exit. Jan 13 21:23:56.097081 systemd-logind[1447]: Removed session 16. Jan 13 21:24:01.136600 systemd[1]: Started sshd@16-10.128.0.40:22-147.75.109.163:56122.service - OpenSSH per-connection server daemon (147.75.109.163:56122). Jan 13 21:24:01.429901 sshd[4054]: Accepted publickey for core from 147.75.109.163 port 56122 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:01.431813 sshd[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:01.438069 systemd-logind[1447]: New session 17 of user core. Jan 13 21:24:01.440502 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 21:24:01.717354 sshd[4054]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:01.722276 systemd[1]: sshd@16-10.128.0.40:22-147.75.109.163:56122.service: Deactivated successfully. Jan 13 21:24:01.725434 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 21:24:01.727398 systemd-logind[1447]: Session 17 logged out. Waiting for processes to exit. Jan 13 21:24:01.729702 systemd-logind[1447]: Removed session 17. Jan 13 21:24:01.770621 systemd[1]: Started sshd@17-10.128.0.40:22-147.75.109.163:56128.service - OpenSSH per-connection server daemon (147.75.109.163:56128). Jan 13 21:24:02.061219 sshd[4067]: Accepted publickey for core from 147.75.109.163 port 56128 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:02.063148 sshd[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:02.068909 systemd-logind[1447]: New session 18 of user core. Jan 13 21:24:02.075409 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 13 21:24:02.418551 sshd[4067]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:02.422975 systemd[1]: sshd@17-10.128.0.40:22-147.75.109.163:56128.service: Deactivated successfully. 
Jan 13 21:24:02.425748 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 21:24:02.427865 systemd-logind[1447]: Session 18 logged out. Waiting for processes to exit. Jan 13 21:24:02.429736 systemd-logind[1447]: Removed session 18. Jan 13 21:24:02.475831 systemd[1]: Started sshd@18-10.128.0.40:22-147.75.109.163:56138.service - OpenSSH per-connection server daemon (147.75.109.163:56138). Jan 13 21:24:02.757871 sshd[4078]: Accepted publickey for core from 147.75.109.163 port 56138 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:02.759813 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:02.766297 systemd-logind[1447]: New session 19 of user core. Jan 13 21:24:02.774419 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 13 21:24:04.555906 sshd[4078]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:04.562802 systemd[1]: sshd@18-10.128.0.40:22-147.75.109.163:56138.service: Deactivated successfully. Jan 13 21:24:04.568092 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 21:24:04.570813 systemd-logind[1447]: Session 19 logged out. Waiting for processes to exit. Jan 13 21:24:04.572267 systemd-logind[1447]: Removed session 19. Jan 13 21:24:04.615074 systemd[1]: Started sshd@19-10.128.0.40:22-147.75.109.163:56146.service - OpenSSH per-connection server daemon (147.75.109.163:56146). Jan 13 21:24:04.907039 sshd[4096]: Accepted publickey for core from 147.75.109.163 port 56146 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:04.909072 sshd[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:04.917048 systemd-logind[1447]: New session 20 of user core. Jan 13 21:24:04.925461 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 13 21:24:05.340728 sshd[4096]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:05.346128 systemd[1]: sshd@19-10.128.0.40:22-147.75.109.163:56146.service: Deactivated successfully. Jan 13 21:24:05.349146 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 21:24:05.350402 systemd-logind[1447]: Session 20 logged out. Waiting for processes to exit. Jan 13 21:24:05.352293 systemd-logind[1447]: Removed session 20. Jan 13 21:24:05.399656 systemd[1]: Started sshd@20-10.128.0.40:22-147.75.109.163:56158.service - OpenSSH per-connection server daemon (147.75.109.163:56158). Jan 13 21:24:05.681921 sshd[4107]: Accepted publickey for core from 147.75.109.163 port 56158 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:05.684055 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:05.690819 systemd-logind[1447]: New session 21 of user core. Jan 13 21:24:05.701472 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 13 21:24:05.970252 sshd[4107]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:05.976102 systemd[1]: sshd@20-10.128.0.40:22-147.75.109.163:56158.service: Deactivated successfully. Jan 13 21:24:05.979953 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 21:24:05.981111 systemd-logind[1447]: Session 21 logged out. Waiting for processes to exit. Jan 13 21:24:05.982794 systemd-logind[1447]: Removed session 21. Jan 13 21:24:11.028613 systemd[1]: Started sshd@21-10.128.0.40:22-147.75.109.163:33280.service - OpenSSH per-connection server daemon (147.75.109.163:33280). Jan 13 21:24:11.315632 sshd[4119]: Accepted publickey for core from 147.75.109.163 port 33280 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:11.317645 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:11.324055 systemd-logind[1447]: New session 22 of user core. 
Jan 13 21:24:11.331419 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 13 21:24:11.601113 sshd[4119]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:11.606815 systemd[1]: sshd@21-10.128.0.40:22-147.75.109.163:33280.service: Deactivated successfully. Jan 13 21:24:11.610335 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 21:24:11.611448 systemd-logind[1447]: Session 22 logged out. Waiting for processes to exit. Jan 13 21:24:11.612993 systemd-logind[1447]: Removed session 22. Jan 13 21:24:16.658628 systemd[1]: Started sshd@22-10.128.0.40:22-147.75.109.163:33294.service - OpenSSH per-connection server daemon (147.75.109.163:33294). Jan 13 21:24:16.942531 sshd[4136]: Accepted publickey for core from 147.75.109.163 port 33294 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:16.944501 sshd[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:16.951038 systemd-logind[1447]: New session 23 of user core. Jan 13 21:24:16.959495 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 13 21:24:17.226094 sshd[4136]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:17.232244 systemd[1]: sshd@22-10.128.0.40:22-147.75.109.163:33294.service: Deactivated successfully. Jan 13 21:24:17.235365 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 21:24:17.236501 systemd-logind[1447]: Session 23 logged out. Waiting for processes to exit. Jan 13 21:24:17.238058 systemd-logind[1447]: Removed session 23. Jan 13 21:24:22.279996 systemd[1]: Started sshd@23-10.128.0.40:22-147.75.109.163:57236.service - OpenSSH per-connection server daemon (147.75.109.163:57236). 
Jan 13 21:24:22.574597 sshd[4148]: Accepted publickey for core from 147.75.109.163 port 57236 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:22.576542 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:22.583043 systemd-logind[1447]: New session 24 of user core. Jan 13 21:24:22.591464 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 21:24:22.860880 sshd[4148]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:22.865941 systemd[1]: sshd@23-10.128.0.40:22-147.75.109.163:57236.service: Deactivated successfully. Jan 13 21:24:22.868989 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 21:24:22.871219 systemd-logind[1447]: Session 24 logged out. Waiting for processes to exit. Jan 13 21:24:22.873005 systemd-logind[1447]: Removed session 24. Jan 13 21:24:26.472645 systemd[1]: Started sshd@24-10.128.0.40:22-14.199.52.62:36526.service - OpenSSH per-connection server daemon (14.199.52.62:36526). Jan 13 21:24:27.453006 sshd[4161]: Connection closed by authenticating user root 14.199.52.62 port 36526 [preauth] Jan 13 21:24:27.456454 systemd[1]: sshd@24-10.128.0.40:22-14.199.52.62:36526.service: Deactivated successfully. Jan 13 21:24:27.917506 systemd[1]: Started sshd@25-10.128.0.40:22-147.75.109.163:42528.service - OpenSSH per-connection server daemon (147.75.109.163:42528). Jan 13 21:24:28.202811 sshd[4166]: Accepted publickey for core from 147.75.109.163 port 42528 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:28.204824 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:28.210726 systemd-logind[1447]: New session 25 of user core. Jan 13 21:24:28.216428 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 21:24:28.484263 sshd[4166]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:28.489214 systemd[1]: sshd@25-10.128.0.40:22-147.75.109.163:42528.service: Deactivated successfully. Jan 13 21:24:28.491736 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 21:24:28.493530 systemd-logind[1447]: Session 25 logged out. Waiting for processes to exit. Jan 13 21:24:28.494972 systemd-logind[1447]: Removed session 25. Jan 13 21:24:28.541604 systemd[1]: Started sshd@26-10.128.0.40:22-147.75.109.163:42534.service - OpenSSH per-connection server daemon (147.75.109.163:42534). Jan 13 21:24:28.833100 sshd[4179]: Accepted publickey for core from 147.75.109.163 port 42534 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:28.835043 sshd[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:28.843359 systemd-logind[1447]: New session 26 of user core. Jan 13 21:24:28.844805 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 13 21:24:30.689836 containerd[1466]: time="2025-01-13T21:24:30.689733696Z" level=info msg="StopContainer for \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\" with timeout 30 (s)" Jan 13 21:24:30.692674 containerd[1466]: time="2025-01-13T21:24:30.690832873Z" level=info msg="Stop container \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\" with signal terminated" Jan 13 21:24:30.725595 containerd[1466]: time="2025-01-13T21:24:30.725535619Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:24:30.728976 systemd[1]: cri-containerd-76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6.scope: Deactivated successfully. 
Jan 13 21:24:30.742516 containerd[1466]: time="2025-01-13T21:24:30.742467816Z" level=info msg="StopContainer for \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\" with timeout 2 (s)" Jan 13 21:24:30.743518 containerd[1466]: time="2025-01-13T21:24:30.743474747Z" level=info msg="Stop container \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\" with signal terminated" Jan 13 21:24:30.761533 systemd-networkd[1374]: lxc_health: Link DOWN Jan 13 21:24:30.761547 systemd-networkd[1374]: lxc_health: Lost carrier Jan 13 21:24:30.788809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6-rootfs.mount: Deactivated successfully. Jan 13 21:24:30.792741 systemd[1]: cri-containerd-658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a.scope: Deactivated successfully. Jan 13 21:24:30.793852 systemd[1]: cri-containerd-658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a.scope: Consumed 9.714s CPU time. Jan 13 21:24:30.814929 containerd[1466]: time="2025-01-13T21:24:30.814365334Z" level=info msg="shim disconnected" id=76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6 namespace=k8s.io Jan 13 21:24:30.814929 containerd[1466]: time="2025-01-13T21:24:30.814439285Z" level=warning msg="cleaning up after shim disconnected" id=76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6 namespace=k8s.io Jan 13 21:24:30.814929 containerd[1466]: time="2025-01-13T21:24:30.814454967Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:30.825848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a-rootfs.mount: Deactivated successfully. 
Jan 13 21:24:30.835428 containerd[1466]: time="2025-01-13T21:24:30.835076553Z" level=info msg="shim disconnected" id=658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a namespace=k8s.io Jan 13 21:24:30.835428 containerd[1466]: time="2025-01-13T21:24:30.835190016Z" level=warning msg="cleaning up after shim disconnected" id=658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a namespace=k8s.io Jan 13 21:24:30.835428 containerd[1466]: time="2025-01-13T21:24:30.835222202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:30.856404 containerd[1466]: time="2025-01-13T21:24:30.856341629Z" level=info msg="StopContainer for \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\" returns successfully" Jan 13 21:24:30.857222 containerd[1466]: time="2025-01-13T21:24:30.857155777Z" level=info msg="StopPodSandbox for \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\"" Jan 13 21:24:30.857764 containerd[1466]: time="2025-01-13T21:24:30.857441451Z" level=info msg="Container to stop \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:30.863336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce-shm.mount: Deactivated successfully. Jan 13 21:24:30.873464 containerd[1466]: time="2025-01-13T21:24:30.873416900Z" level=info msg="StopContainer for \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\" returns successfully" Jan 13 21:24:30.875055 systemd[1]: cri-containerd-dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce.scope: Deactivated successfully. 
Jan 13 21:24:30.881219 containerd[1466]: time="2025-01-13T21:24:30.881150377Z" level=info msg="StopPodSandbox for \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\"" Jan 13 21:24:30.881503 containerd[1466]: time="2025-01-13T21:24:30.881450972Z" level=info msg="Container to stop \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:30.881627 containerd[1466]: time="2025-01-13T21:24:30.881595965Z" level=info msg="Container to stop \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:30.881716 containerd[1466]: time="2025-01-13T21:24:30.881698439Z" level=info msg="Container to stop \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:30.881800 containerd[1466]: time="2025-01-13T21:24:30.881783670Z" level=info msg="Container to stop \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:30.881906 containerd[1466]: time="2025-01-13T21:24:30.881881571Z" level=info msg="Container to stop \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:24:30.888571 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524-shm.mount: Deactivated successfully. Jan 13 21:24:30.904071 systemd[1]: cri-containerd-1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524.scope: Deactivated successfully. 
Jan 13 21:24:30.944330 containerd[1466]: time="2025-01-13T21:24:30.943914327Z" level=info msg="shim disconnected" id=dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce namespace=k8s.io Jan 13 21:24:30.944330 containerd[1466]: time="2025-01-13T21:24:30.944119351Z" level=warning msg="cleaning up after shim disconnected" id=dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce namespace=k8s.io Jan 13 21:24:30.944330 containerd[1466]: time="2025-01-13T21:24:30.944267009Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:30.953770 containerd[1466]: time="2025-01-13T21:24:30.953182524Z" level=info msg="shim disconnected" id=1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524 namespace=k8s.io Jan 13 21:24:30.953770 containerd[1466]: time="2025-01-13T21:24:30.953548963Z" level=warning msg="cleaning up after shim disconnected" id=1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524 namespace=k8s.io Jan 13 21:24:30.953770 containerd[1466]: time="2025-01-13T21:24:30.953573711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:30.977853 containerd[1466]: time="2025-01-13T21:24:30.977798635Z" level=info msg="TearDown network for sandbox \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\" successfully" Jan 13 21:24:30.977853 containerd[1466]: time="2025-01-13T21:24:30.977848207Z" level=info msg="StopPodSandbox for \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\" returns successfully" Jan 13 21:24:30.983657 containerd[1466]: time="2025-01-13T21:24:30.983160888Z" level=info msg="TearDown network for sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" successfully" Jan 13 21:24:30.983657 containerd[1466]: time="2025-01-13T21:24:30.983223065Z" level=info msg="StopPodSandbox for \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" returns successfully" Jan 13 21:24:31.112223 kubelet[2563]: I0113 21:24:31.112140 2563 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0ab9c5b-b148-465d-b766-874fa90fc856-cilium-config-path\") pod \"f0ab9c5b-b148-465d-b766-874fa90fc856\" (UID: \"f0ab9c5b-b148-465d-b766-874fa90fc856\") " Jan 13 21:24:31.112223 kubelet[2563]: I0113 21:24:31.112224 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-etc-cni-netd\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.112917 kubelet[2563]: I0113 21:24:31.112259 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gj7l\" (UniqueName: \"kubernetes.io/projected/f0ab9c5b-b148-465d-b766-874fa90fc856-kube-api-access-6gj7l\") pod \"f0ab9c5b-b148-465d-b766-874fa90fc856\" (UID: \"f0ab9c5b-b148-465d-b766-874fa90fc856\") " Jan 13 21:24:31.112917 kubelet[2563]: I0113 21:24:31.112293 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-lib-modules\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.112917 kubelet[2563]: I0113 21:24:31.112317 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-cgroup\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.112917 kubelet[2563]: I0113 21:24:31.112341 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-bpf-maps\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: 
\"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.112917 kubelet[2563]: I0113 21:24:31.112364 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-run\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.112917 kubelet[2563]: I0113 21:24:31.112388 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-host-proc-sys-net\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115445 kubelet[2563]: I0113 21:24:31.112418 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cni-path\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115445 kubelet[2563]: I0113 21:24:31.112445 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-hostproc\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115445 kubelet[2563]: I0113 21:24:31.112472 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8256be8e-e45b-4cb4-a574-7d75fb60126d-clustermesh-secrets\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115445 kubelet[2563]: I0113 21:24:31.112499 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-config-path\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115445 kubelet[2563]: I0113 21:24:31.112527 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpjkd\" (UniqueName: \"kubernetes.io/projected/8256be8e-e45b-4cb4-a574-7d75fb60126d-kube-api-access-bpjkd\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115445 kubelet[2563]: I0113 21:24:31.112555 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-xtables-lock\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115779 kubelet[2563]: I0113 21:24:31.112585 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8256be8e-e45b-4cb4-a574-7d75fb60126d-hubble-tls\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115779 kubelet[2563]: I0113 21:24:31.112612 2563 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-host-proc-sys-kernel\") pod \"8256be8e-e45b-4cb4-a574-7d75fb60126d\" (UID: \"8256be8e-e45b-4cb4-a574-7d75fb60126d\") " Jan 13 21:24:31.115779 kubelet[2563]: I0113 21:24:31.112711 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.115779 kubelet[2563]: I0113 21:24:31.113280 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.115779 kubelet[2563]: I0113 21:24:31.113330 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.116237 kubelet[2563]: I0113 21:24:31.116170 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0ab9c5b-b148-465d-b766-874fa90fc856-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f0ab9c5b-b148-465d-b766-874fa90fc856" (UID: "f0ab9c5b-b148-465d-b766-874fa90fc856"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:24:31.116330 kubelet[2563]: I0113 21:24:31.116286 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cni-path" (OuterVolumeSpecName: "cni-path") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.116330 kubelet[2563]: I0113 21:24:31.116318 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-hostproc" (OuterVolumeSpecName: "hostproc") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.116547 kubelet[2563]: I0113 21:24:31.116517 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.116547 kubelet[2563]: I0113 21:24:31.116563 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.116547 kubelet[2563]: I0113 21:24:31.116588 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.116547 kubelet[2563]: I0113 21:24:31.116610 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.116984 kubelet[2563]: I0113 21:24:31.116955 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:24:31.120012 kubelet[2563]: I0113 21:24:31.119883 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0ab9c5b-b148-465d-b766-874fa90fc856-kube-api-access-6gj7l" (OuterVolumeSpecName: "kube-api-access-6gj7l") pod "f0ab9c5b-b148-465d-b766-874fa90fc856" (UID: "f0ab9c5b-b148-465d-b766-874fa90fc856"). InnerVolumeSpecName "kube-api-access-6gj7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:24:31.126093 kubelet[2563]: I0113 21:24:31.126027 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8256be8e-e45b-4cb4-a574-7d75fb60126d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:24:31.126446 kubelet[2563]: I0113 21:24:31.126394 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8256be8e-e45b-4cb4-a574-7d75fb60126d-kube-api-access-bpjkd" (OuterVolumeSpecName: "kube-api-access-bpjkd") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "kube-api-access-bpjkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:24:31.127396 kubelet[2563]: I0113 21:24:31.127350 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:24:31.127691 kubelet[2563]: I0113 21:24:31.127623 2563 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8256be8e-e45b-4cb4-a574-7d75fb60126d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8256be8e-e45b-4cb4-a574-7d75fb60126d" (UID: "8256be8e-e45b-4cb4-a574-7d75fb60126d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:24:31.151710 kubelet[2563]: I0113 21:24:31.151589 2563 scope.go:117] "RemoveContainer" containerID="76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6" Jan 13 21:24:31.156854 containerd[1466]: time="2025-01-13T21:24:31.156237508Z" level=info msg="RemoveContainer for \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\"" Jan 13 21:24:31.162644 containerd[1466]: time="2025-01-13T21:24:31.162600489Z" level=info msg="RemoveContainer for \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\" returns successfully" Jan 13 21:24:31.163773 kubelet[2563]: I0113 21:24:31.163524 2563 scope.go:117] "RemoveContainer" containerID="76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6" Jan 13 21:24:31.164250 containerd[1466]: time="2025-01-13T21:24:31.163999251Z" level=error msg="ContainerStatus for \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\": not found" Jan 13 21:24:31.166129 kubelet[2563]: E0113 21:24:31.165953 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\": not found" containerID="76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6" Jan 13 21:24:31.166539 kubelet[2563]: I0113 21:24:31.166000 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6"} err="failed to get container status \"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"76abe57d89b0e53945952fadb28bb3dae40376c2f52b4c535ec8d44c5b975dc6\": not found" Jan 13 21:24:31.166539 kubelet[2563]: I0113 21:24:31.166435 2563 scope.go:117] "RemoveContainer" containerID="658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a" Jan 13 21:24:31.168871 containerd[1466]: time="2025-01-13T21:24:31.168729465Z" level=info msg="RemoveContainer for \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\"" Jan 13 21:24:31.169083 systemd[1]: Removed slice kubepods-besteffort-podf0ab9c5b_b148_465d_b766_874fa90fc856.slice - libcontainer container kubepods-besteffort-podf0ab9c5b_b148_465d_b766_874fa90fc856.slice. Jan 13 21:24:31.175118 containerd[1466]: time="2025-01-13T21:24:31.174790292Z" level=info msg="RemoveContainer for \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\" returns successfully" Jan 13 21:24:31.177226 kubelet[2563]: I0113 21:24:31.175755 2563 scope.go:117] "RemoveContainer" containerID="7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4" Jan 13 21:24:31.178094 containerd[1466]: time="2025-01-13T21:24:31.177798081Z" level=info msg="RemoveContainer for \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\"" Jan 13 21:24:31.180421 systemd[1]: Removed slice kubepods-burstable-pod8256be8e_e45b_4cb4_a574_7d75fb60126d.slice - libcontainer container kubepods-burstable-pod8256be8e_e45b_4cb4_a574_7d75fb60126d.slice. Jan 13 21:24:31.180598 systemd[1]: kubepods-burstable-pod8256be8e_e45b_4cb4_a574_7d75fb60126d.slice: Consumed 9.840s CPU time. 
Jan 13 21:24:31.185719 containerd[1466]: time="2025-01-13T21:24:31.185675902Z" level=info msg="RemoveContainer for \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\" returns successfully" Jan 13 21:24:31.186026 kubelet[2563]: I0113 21:24:31.185919 2563 scope.go:117] "RemoveContainer" containerID="87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3" Jan 13 21:24:31.188523 containerd[1466]: time="2025-01-13T21:24:31.187600937Z" level=info msg="RemoveContainer for \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\"" Jan 13 21:24:31.195266 containerd[1466]: time="2025-01-13T21:24:31.193876545Z" level=info msg="RemoveContainer for \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\" returns successfully" Jan 13 21:24:31.195512 kubelet[2563]: I0113 21:24:31.195487 2563 scope.go:117] "RemoveContainer" containerID="23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c" Jan 13 21:24:31.197290 containerd[1466]: time="2025-01-13T21:24:31.197256208Z" level=info msg="RemoveContainer for \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\"" Jan 13 21:24:31.202989 containerd[1466]: time="2025-01-13T21:24:31.202922954Z" level=info msg="RemoveContainer for \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\" returns successfully" Jan 13 21:24:31.203463 kubelet[2563]: I0113 21:24:31.203342 2563 scope.go:117] "RemoveContainer" containerID="2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544" Jan 13 21:24:31.205395 containerd[1466]: time="2025-01-13T21:24:31.204941218Z" level=info msg="RemoveContainer for \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\"" Jan 13 21:24:31.209483 containerd[1466]: time="2025-01-13T21:24:31.209444939Z" level=info msg="RemoveContainer for \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\" returns successfully" Jan 13 21:24:31.209986 kubelet[2563]: I0113 21:24:31.209721 2563 scope.go:117] 
"RemoveContainer" containerID="658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a" Jan 13 21:24:31.211829 containerd[1466]: time="2025-01-13T21:24:31.211692060Z" level=error msg="ContainerStatus for \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\": not found" Jan 13 21:24:31.212100 kubelet[2563]: E0113 21:24:31.212052 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\": not found" containerID="658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a" Jan 13 21:24:31.212228 kubelet[2563]: I0113 21:24:31.212091 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a"} err="failed to get container status \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\": rpc error: code = NotFound desc = an error occurred when try to find container \"658c8c6d2fe0556a5e591238c0819e463bb6cad96e1d7f646b28c593b106687a\": not found" Jan 13 21:24:31.212228 kubelet[2563]: I0113 21:24:31.212124 2563 scope.go:117] "RemoveContainer" containerID="7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4" Jan 13 21:24:31.212550 containerd[1466]: time="2025-01-13T21:24:31.212483370Z" level=error msg="ContainerStatus for \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\": not found" Jan 13 21:24:31.212716 kubelet[2563]: E0113 21:24:31.212682 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\": not found" containerID="7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4" Jan 13 21:24:31.213049 kubelet[2563]: I0113 21:24:31.212718 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4"} err="failed to get container status \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\": rpc error: code = NotFound desc = an error occurred when try to find container \"7fd3f83f19bb9b4e30d554630cc46ff30a1521c9f89f16c29908348d2ce19bc4\": not found" Jan 13 21:24:31.213049 kubelet[2563]: I0113 21:24:31.212744 2563 scope.go:117] "RemoveContainer" containerID="87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3" Jan 13 21:24:31.213049 kubelet[2563]: I0113 21:24:31.212949 2563 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cni-path\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.213049 kubelet[2563]: I0113 21:24:31.212973 2563 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-hostproc\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.213049 kubelet[2563]: I0113 21:24:31.212991 2563 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8256be8e-e45b-4cb4-a574-7d75fb60126d-clustermesh-secrets\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.213049 kubelet[2563]: I0113 21:24:31.213008 2563 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bpjkd\" 
(UniqueName: \"kubernetes.io/projected/8256be8e-e45b-4cb4-a574-7d75fb60126d-kube-api-access-bpjkd\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.213049 kubelet[2563]: I0113 21:24:31.213035 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-config-path\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.214305 containerd[1466]: time="2025-01-13T21:24:31.212959303Z" level=error msg="ContainerStatus for \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\": not found" Jan 13 21:24:31.214305 containerd[1466]: time="2025-01-13T21:24:31.213949664Z" level=error msg="ContainerStatus for \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\": not found" Jan 13 21:24:31.215553 kubelet[2563]: I0113 21:24:31.213053 2563 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-xtables-lock\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.215553 kubelet[2563]: I0113 21:24:31.213070 2563 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8256be8e-e45b-4cb4-a574-7d75fb60126d-hubble-tls\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.215553 kubelet[2563]: I0113 21:24:31.213085 2563 reconciler_common.go:288] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-host-proc-sys-kernel\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.215553 kubelet[2563]: I0113 21:24:31.213101 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f0ab9c5b-b148-465d-b766-874fa90fc856-cilium-config-path\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.215553 kubelet[2563]: I0113 21:24:31.213116 2563 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-etc-cni-netd\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.215553 kubelet[2563]: I0113 21:24:31.213132 2563 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6gj7l\" (UniqueName: \"kubernetes.io/projected/f0ab9c5b-b148-465d-b766-874fa90fc856-kube-api-access-6gj7l\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.215553 kubelet[2563]: I0113 21:24:31.213149 2563 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-lib-modules\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.216213 containerd[1466]: time="2025-01-13T21:24:31.214643223Z" level=error msg="ContainerStatus for \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\": not found" Jan 13 21:24:31.216303 kubelet[2563]: I0113 21:24:31.213167 2563 reconciler_common.go:288] "Volume detached for volume 
\"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-cgroup\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.216303 kubelet[2563]: I0113 21:24:31.213187 2563 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-host-proc-sys-net\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.216303 kubelet[2563]: I0113 21:24:31.213221 2563 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-bpf-maps\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.216303 kubelet[2563]: I0113 21:24:31.213238 2563 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8256be8e-e45b-4cb4-a574-7d75fb60126d-cilium-run\") on node \"ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal\" DevicePath \"\"" Jan 13 21:24:31.216303 kubelet[2563]: E0113 21:24:31.213353 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\": not found" containerID="87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3" Jan 13 21:24:31.216303 kubelet[2563]: I0113 21:24:31.213479 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3"} err="failed to get container status \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"87996f3c9f64cfc5c10f26e6a6ab1e6671500fb8cfa39609887ace6d409895f3\": 
not found" Jan 13 21:24:31.217738 kubelet[2563]: I0113 21:24:31.213512 2563 scope.go:117] "RemoveContainer" containerID="23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c" Jan 13 21:24:31.217738 kubelet[2563]: E0113 21:24:31.214259 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\": not found" containerID="23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c" Jan 13 21:24:31.217738 kubelet[2563]: I0113 21:24:31.214294 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c"} err="failed to get container status \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"23de540d9a283ab4a8e8fbc17ad1fb226ce1d6118cc54160f768c96278a44e5c\": not found" Jan 13 21:24:31.217738 kubelet[2563]: I0113 21:24:31.214348 2563 scope.go:117] "RemoveContainer" containerID="2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544" Jan 13 21:24:31.217738 kubelet[2563]: E0113 21:24:31.215422 2563 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\": not found" containerID="2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544" Jan 13 21:24:31.217738 kubelet[2563]: I0113 21:24:31.215454 2563 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544"} err="failed to get container status \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\": rpc error: code = NotFound desc = an error occurred when try to 
find container \"2aad61beab7bf9dacb953eeb7d9489a13bbce927202f99c0ff714795969cf544\": not found" Jan 13 21:24:31.705169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce-rootfs.mount: Deactivated successfully. Jan 13 21:24:31.705347 systemd[1]: var-lib-kubelet-pods-f0ab9c5b\x2db148\x2d465d\x2db766\x2d874fa90fc856-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6gj7l.mount: Deactivated successfully. Jan 13 21:24:31.705463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524-rootfs.mount: Deactivated successfully. Jan 13 21:24:31.705563 systemd[1]: var-lib-kubelet-pods-8256be8e\x2de45b\x2d4cb4\x2da574\x2d7d75fb60126d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbpjkd.mount: Deactivated successfully. Jan 13 21:24:31.705677 systemd[1]: var-lib-kubelet-pods-8256be8e\x2de45b\x2d4cb4\x2da574\x2d7d75fb60126d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:24:31.705787 systemd[1]: var-lib-kubelet-pods-8256be8e\x2de45b\x2d4cb4\x2da574\x2d7d75fb60126d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:24:31.748740 kubelet[2563]: I0113 21:24:31.748670 2563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8256be8e-e45b-4cb4-a574-7d75fb60126d" path="/var/lib/kubelet/pods/8256be8e-e45b-4cb4-a574-7d75fb60126d/volumes" Jan 13 21:24:31.749594 kubelet[2563]: I0113 21:24:31.749543 2563 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0ab9c5b-b148-465d-b766-874fa90fc856" path="/var/lib/kubelet/pods/f0ab9c5b-b148-465d-b766-874fa90fc856/volumes" Jan 13 21:24:32.669813 sshd[4179]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:32.675746 systemd[1]: sshd@26-10.128.0.40:22-147.75.109.163:42534.service: Deactivated successfully. 
Jan 13 21:24:32.678691 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 21:24:32.679029 systemd[1]: session-26.scope: Consumed 1.080s CPU time. Jan 13 21:24:32.679957 systemd-logind[1447]: Session 26 logged out. Waiting for processes to exit. Jan 13 21:24:32.681795 systemd-logind[1447]: Removed session 26. Jan 13 21:24:32.726664 systemd[1]: Started sshd@27-10.128.0.40:22-147.75.109.163:42550.service - OpenSSH per-connection server daemon (147.75.109.163:42550). Jan 13 21:24:32.866066 ntpd[1433]: Deleting interface #11 lxc_health, fe80::2408:b3ff:fe17:9163%8#123, interface stats: received=0, sent=0, dropped=0, active_time=88 secs Jan 13 21:24:32.866585 ntpd[1433]: 13 Jan 21:24:32 ntpd[1433]: Deleting interface #11 lxc_health, fe80::2408:b3ff:fe17:9163%8#123, interface stats: received=0, sent=0, dropped=0, active_time=88 secs Jan 13 21:24:32.942214 kubelet[2563]: E0113 21:24:32.942010 2563 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:24:33.024301 sshd[4344]: Accepted publickey for core from 147.75.109.163 port 42550 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:33.026341 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:33.034540 systemd-logind[1447]: New session 27 of user core. Jan 13 21:24:33.040448 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 13 21:24:33.973966 kubelet[2563]: E0113 21:24:33.972969 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8256be8e-e45b-4cb4-a574-7d75fb60126d" containerName="mount-cgroup" Jan 13 21:24:33.973966 kubelet[2563]: E0113 21:24:33.973015 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0ab9c5b-b148-465d-b766-874fa90fc856" containerName="cilium-operator" Jan 13 21:24:33.973966 kubelet[2563]: E0113 21:24:33.973028 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8256be8e-e45b-4cb4-a574-7d75fb60126d" containerName="cilium-agent" Jan 13 21:24:33.973966 kubelet[2563]: E0113 21:24:33.973040 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8256be8e-e45b-4cb4-a574-7d75fb60126d" containerName="apply-sysctl-overwrites" Jan 13 21:24:33.973966 kubelet[2563]: E0113 21:24:33.973050 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8256be8e-e45b-4cb4-a574-7d75fb60126d" containerName="mount-bpf-fs" Jan 13 21:24:33.973966 kubelet[2563]: E0113 21:24:33.973063 2563 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8256be8e-e45b-4cb4-a574-7d75fb60126d" containerName="clean-cilium-state" Jan 13 21:24:33.973966 kubelet[2563]: I0113 21:24:33.973111 2563 memory_manager.go:354] "RemoveStaleState removing state" podUID="8256be8e-e45b-4cb4-a574-7d75fb60126d" containerName="cilium-agent" Jan 13 21:24:33.973966 kubelet[2563]: I0113 21:24:33.973124 2563 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0ab9c5b-b148-465d-b766-874fa90fc856" containerName="cilium-operator" Jan 13 21:24:33.989929 systemd[1]: Created slice kubepods-burstable-pod7a11dbc2_aec4_415f_b23c_4077a8456b33.slice - libcontainer container kubepods-burstable-pod7a11dbc2_aec4_415f_b23c_4077a8456b33.slice. 
Jan 13 21:24:33.993545 sshd[4344]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:34.007044 systemd[1]: sshd@27-10.128.0.40:22-147.75.109.163:42550.service: Deactivated successfully. Jan 13 21:24:34.013401 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 21:24:34.017650 systemd-logind[1447]: Session 27 logged out. Waiting for processes to exit. Jan 13 21:24:34.019534 systemd-logind[1447]: Removed session 27. Jan 13 21:24:34.030156 kubelet[2563]: I0113 21:24:34.029593 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9gj6\" (UniqueName: \"kubernetes.io/projected/7a11dbc2-aec4-415f-b23c-4077a8456b33-kube-api-access-h9gj6\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030156 kubelet[2563]: I0113 21:24:34.029648 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-hostproc\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030156 kubelet[2563]: I0113 21:24:34.029676 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a11dbc2-aec4-415f-b23c-4077a8456b33-hubble-tls\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030156 kubelet[2563]: I0113 21:24:34.029703 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-cilium-run\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030156 kubelet[2563]: I0113 21:24:34.029743 2563 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-bpf-maps\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030156 kubelet[2563]: I0113 21:24:34.029769 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7a11dbc2-aec4-415f-b23c-4077a8456b33-cilium-ipsec-secrets\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030621 kubelet[2563]: I0113 21:24:34.029797 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-cilium-cgroup\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030621 kubelet[2563]: I0113 21:24:34.029824 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-lib-modules\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030621 kubelet[2563]: I0113 21:24:34.029862 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a11dbc2-aec4-415f-b23c-4077a8456b33-clustermesh-secrets\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030621 kubelet[2563]: I0113 21:24:34.029887 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-host-proc-sys-kernel\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030621 kubelet[2563]: I0113 21:24:34.029915 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-cni-path\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030621 kubelet[2563]: I0113 21:24:34.029939 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-host-proc-sys-net\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030952 kubelet[2563]: I0113 21:24:34.029964 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-etc-cni-netd\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030952 kubelet[2563]: I0113 21:24:34.029991 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a11dbc2-aec4-415f-b23c-4077a8456b33-cilium-config-path\") pod \"cilium-8d8bs\" (UID: \"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.030952 kubelet[2563]: I0113 21:24:34.030020 2563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a11dbc2-aec4-415f-b23c-4077a8456b33-xtables-lock\") pod \"cilium-8d8bs\" (UID: 
\"7a11dbc2-aec4-415f-b23c-4077a8456b33\") " pod="kube-system/cilium-8d8bs" Jan 13 21:24:34.053413 systemd[1]: Started sshd@28-10.128.0.40:22-147.75.109.163:42566.service - OpenSSH per-connection server daemon (147.75.109.163:42566). Jan 13 21:24:34.305121 containerd[1466]: time="2025-01-13T21:24:34.304921722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8d8bs,Uid:7a11dbc2-aec4-415f-b23c-4077a8456b33,Namespace:kube-system,Attempt:0,}" Jan 13 21:24:34.342538 containerd[1466]: time="2025-01-13T21:24:34.342415797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:24:34.342538 containerd[1466]: time="2025-01-13T21:24:34.342483748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:24:34.343001 containerd[1466]: time="2025-01-13T21:24:34.342520611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:34.343001 containerd[1466]: time="2025-01-13T21:24:34.342931259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:24:34.370443 systemd[1]: Started cri-containerd-1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5.scope - libcontainer container 1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5. Jan 13 21:24:34.396137 sshd[4357]: Accepted publickey for core from 147.75.109.163 port 42566 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:34.398037 sshd[4357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:34.406600 systemd-logind[1447]: New session 28 of user core. Jan 13 21:24:34.412457 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 13 21:24:34.415087 containerd[1466]: time="2025-01-13T21:24:34.414840828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8d8bs,Uid:7a11dbc2-aec4-415f-b23c-4077a8456b33,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\"" Jan 13 21:24:34.423619 containerd[1466]: time="2025-01-13T21:24:34.423560556Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:24:34.442309 containerd[1466]: time="2025-01-13T21:24:34.442231068Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24d0f8af1290c70b25ae7e91fc20d0a66dc7580902cf97015b5dd422d86d7e23\"" Jan 13 21:24:34.444442 containerd[1466]: time="2025-01-13T21:24:34.443116768Z" level=info msg="StartContainer for \"24d0f8af1290c70b25ae7e91fc20d0a66dc7580902cf97015b5dd422d86d7e23\"" Jan 13 21:24:34.485478 systemd[1]: Started cri-containerd-24d0f8af1290c70b25ae7e91fc20d0a66dc7580902cf97015b5dd422d86d7e23.scope - libcontainer container 24d0f8af1290c70b25ae7e91fc20d0a66dc7580902cf97015b5dd422d86d7e23. Jan 13 21:24:34.522675 containerd[1466]: time="2025-01-13T21:24:34.522615351Z" level=info msg="StartContainer for \"24d0f8af1290c70b25ae7e91fc20d0a66dc7580902cf97015b5dd422d86d7e23\" returns successfully" Jan 13 21:24:34.534603 systemd[1]: cri-containerd-24d0f8af1290c70b25ae7e91fc20d0a66dc7580902cf97015b5dd422d86d7e23.scope: Deactivated successfully. 
Jan 13 21:24:34.581098 containerd[1466]: time="2025-01-13T21:24:34.581014936Z" level=info msg="shim disconnected" id=24d0f8af1290c70b25ae7e91fc20d0a66dc7580902cf97015b5dd422d86d7e23 namespace=k8s.io Jan 13 21:24:34.581098 containerd[1466]: time="2025-01-13T21:24:34.581091788Z" level=warning msg="cleaning up after shim disconnected" id=24d0f8af1290c70b25ae7e91fc20d0a66dc7580902cf97015b5dd422d86d7e23 namespace=k8s.io Jan 13 21:24:34.581098 containerd[1466]: time="2025-01-13T21:24:34.581105929Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:34.608488 sshd[4357]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:34.615804 systemd[1]: sshd@28-10.128.0.40:22-147.75.109.163:42566.service: Deactivated successfully. Jan 13 21:24:34.618723 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 21:24:34.620061 systemd-logind[1447]: Session 28 logged out. Waiting for processes to exit. Jan 13 21:24:34.621702 systemd-logind[1447]: Removed session 28. Jan 13 21:24:34.662661 systemd[1]: Started sshd@29-10.128.0.40:22-147.75.109.163:42574.service - OpenSSH per-connection server daemon (147.75.109.163:42574). Jan 13 21:24:34.954003 sshd[4472]: Accepted publickey for core from 147.75.109.163 port 42574 ssh2: RSA SHA256:4a8MBPA1K1nr/1oDxFmgCFKe6F5Z/yg4VlwVNKpP4jI Jan 13 21:24:34.956032 sshd[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:24:34.962571 systemd-logind[1447]: New session 29 of user core. Jan 13 21:24:34.967446 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 13 21:24:35.198938 containerd[1466]: time="2025-01-13T21:24:35.198108717Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:24:35.226416 containerd[1466]: time="2025-01-13T21:24:35.226146857Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99\"" Jan 13 21:24:35.230232 containerd[1466]: time="2025-01-13T21:24:35.229372528Z" level=info msg="StartContainer for \"11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99\"" Jan 13 21:24:35.301472 systemd[1]: Started cri-containerd-11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99.scope - libcontainer container 11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99. Jan 13 21:24:35.338464 containerd[1466]: time="2025-01-13T21:24:35.338410496Z" level=info msg="StartContainer for \"11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99\" returns successfully" Jan 13 21:24:35.347585 systemd[1]: cri-containerd-11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99.scope: Deactivated successfully. 
Jan 13 21:24:35.382181 containerd[1466]: time="2025-01-13T21:24:35.382053258Z" level=info msg="shim disconnected" id=11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99 namespace=k8s.io Jan 13 21:24:35.382181 containerd[1466]: time="2025-01-13T21:24:35.382151922Z" level=warning msg="cleaning up after shim disconnected" id=11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99 namespace=k8s.io Jan 13 21:24:35.382181 containerd[1466]: time="2025-01-13T21:24:35.382168553Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:36.143366 systemd[1]: run-containerd-runc-k8s.io-11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99-runc.2r9OKB.mount: Deactivated successfully. Jan 13 21:24:36.143533 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11a972941f1fd516e3b117e01f43a0f423c4c24b44378c8eaefa71d83146ad99-rootfs.mount: Deactivated successfully. Jan 13 21:24:36.193737 containerd[1466]: time="2025-01-13T21:24:36.193483770Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:24:36.221112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4009334766.mount: Deactivated successfully. 
Jan 13 21:24:36.227876 containerd[1466]: time="2025-01-13T21:24:36.227810748Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f\"" Jan 13 21:24:36.232233 containerd[1466]: time="2025-01-13T21:24:36.231191869Z" level=info msg="StartContainer for \"b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f\"" Jan 13 21:24:36.290515 systemd[1]: Started cri-containerd-b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f.scope - libcontainer container b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f. Jan 13 21:24:36.331801 containerd[1466]: time="2025-01-13T21:24:36.331734112Z" level=info msg="StartContainer for \"b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f\" returns successfully" Jan 13 21:24:36.335780 systemd[1]: cri-containerd-b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f.scope: Deactivated successfully. Jan 13 21:24:36.376824 containerd[1466]: time="2025-01-13T21:24:36.376725565Z" level=info msg="shim disconnected" id=b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f namespace=k8s.io Jan 13 21:24:36.376824 containerd[1466]: time="2025-01-13T21:24:36.376802695Z" level=warning msg="cleaning up after shim disconnected" id=b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f namespace=k8s.io Jan 13 21:24:36.376824 containerd[1466]: time="2025-01-13T21:24:36.376818555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:37.143618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2ca8041be8ad23a3e81c33c1829e158cdd7b3d1639213b21585f6eae1fcaf3f-rootfs.mount: Deactivated successfully. 
Jan 13 21:24:37.199616 containerd[1466]: time="2025-01-13T21:24:37.199337796Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:24:37.227023 containerd[1466]: time="2025-01-13T21:24:37.226958764Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30\"" Jan 13 21:24:37.227999 containerd[1466]: time="2025-01-13T21:24:37.227959123Z" level=info msg="StartContainer for \"ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30\"" Jan 13 21:24:37.290533 systemd[1]: Started cri-containerd-ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30.scope - libcontainer container ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30. Jan 13 21:24:37.335494 systemd[1]: cri-containerd-ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30.scope: Deactivated successfully. 
Jan 13 21:24:37.339104 containerd[1466]: time="2025-01-13T21:24:37.338032930Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a11dbc2_aec4_415f_b23c_4077a8456b33.slice/cri-containerd-ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30.scope/memory.events\": no such file or directory" Jan 13 21:24:37.345821 containerd[1466]: time="2025-01-13T21:24:37.345778007Z" level=info msg="StartContainer for \"ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30\" returns successfully" Jan 13 21:24:37.375675 containerd[1466]: time="2025-01-13T21:24:37.375596981Z" level=info msg="shim disconnected" id=ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30 namespace=k8s.io Jan 13 21:24:37.375675 containerd[1466]: time="2025-01-13T21:24:37.375673051Z" level=warning msg="cleaning up after shim disconnected" id=ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30 namespace=k8s.io Jan 13 21:24:37.377311 containerd[1466]: time="2025-01-13T21:24:37.375686040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:24:37.747149 kubelet[2563]: E0113 21:24:37.745588 2563 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-fgpcm" podUID="69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc" Jan 13 21:24:37.770475 containerd[1466]: time="2025-01-13T21:24:37.770365799Z" level=info msg="StopPodSandbox for \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\"" Jan 13 21:24:37.771020 containerd[1466]: time="2025-01-13T21:24:37.770498813Z" level=info msg="TearDown network for sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" successfully" Jan 13 21:24:37.771020 
containerd[1466]: time="2025-01-13T21:24:37.770518702Z" level=info msg="StopPodSandbox for \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" returns successfully" Jan 13 21:24:37.771241 containerd[1466]: time="2025-01-13T21:24:37.771032469Z" level=info msg="RemovePodSandbox for \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\"" Jan 13 21:24:37.771241 containerd[1466]: time="2025-01-13T21:24:37.771069157Z" level=info msg="Forcibly stopping sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\"" Jan 13 21:24:37.771241 containerd[1466]: time="2025-01-13T21:24:37.771170645Z" level=info msg="TearDown network for sandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" successfully" Jan 13 21:24:37.776313 containerd[1466]: time="2025-01-13T21:24:37.776247316Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:24:37.776493 containerd[1466]: time="2025-01-13T21:24:37.776327418Z" level=info msg="RemovePodSandbox \"1ddae5d5f613295d8eff363d5d6bdd67698a23d2d519c2cadcc7537351231524\" returns successfully" Jan 13 21:24:37.777025 containerd[1466]: time="2025-01-13T21:24:37.776975426Z" level=info msg="StopPodSandbox for \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\"" Jan 13 21:24:37.777165 containerd[1466]: time="2025-01-13T21:24:37.777096728Z" level=info msg="TearDown network for sandbox \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\" successfully" Jan 13 21:24:37.777165 containerd[1466]: time="2025-01-13T21:24:37.777117022Z" level=info msg="StopPodSandbox for \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\" returns successfully" Jan 13 21:24:37.777565 containerd[1466]: time="2025-01-13T21:24:37.777516002Z" level=info msg="RemovePodSandbox for \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\"" Jan 13 21:24:37.777565 containerd[1466]: time="2025-01-13T21:24:37.777549623Z" level=info msg="Forcibly stopping sandbox \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\"" Jan 13 21:24:37.777720 containerd[1466]: time="2025-01-13T21:24:37.777626879Z" level=info msg="TearDown network for sandbox \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\" successfully" Jan 13 21:24:37.782435 containerd[1466]: time="2025-01-13T21:24:37.782380130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 13 21:24:37.782610 containerd[1466]: time="2025-01-13T21:24:37.782462638Z" level=info msg="RemovePodSandbox \"dd61ae5d1c5b66c8a8e82bce04dc94de47aaa5d68606a9857c1ab8dd02c43cce\" returns successfully" Jan 13 21:24:37.943383 kubelet[2563]: E0113 21:24:37.943314 2563 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 21:24:38.143500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebc0db987b217ab03c4b3d276e98c72b54845693222d3ddf01f359b7b8045c30-rootfs.mount: Deactivated successfully. Jan 13 21:24:38.204892 containerd[1466]: time="2025-01-13T21:24:38.204546145Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:24:38.233097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3709027268.mount: Deactivated successfully. Jan 13 21:24:38.234144 containerd[1466]: time="2025-01-13T21:24:38.233588620Z" level=info msg="CreateContainer within sandbox \"1d08464857ff06dfe3505af36ad09d7ac22bed6d8ddedd7f52ac04ec279353e5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c2802e9fd039345a49de2a8da45539ff0a30d418fd2fba13c793b570f7e5abb6\"" Jan 13 21:24:38.235848 containerd[1466]: time="2025-01-13T21:24:38.235178420Z" level=info msg="StartContainer for \"c2802e9fd039345a49de2a8da45539ff0a30d418fd2fba13c793b570f7e5abb6\"" Jan 13 21:24:38.310446 systemd[1]: Started cri-containerd-c2802e9fd039345a49de2a8da45539ff0a30d418fd2fba13c793b570f7e5abb6.scope - libcontainer container c2802e9fd039345a49de2a8da45539ff0a30d418fd2fba13c793b570f7e5abb6. 
Jan 13 21:24:38.347886 containerd[1466]: time="2025-01-13T21:24:38.347824700Z" level=info msg="StartContainer for \"c2802e9fd039345a49de2a8da45539ff0a30d418fd2fba13c793b570f7e5abb6\" returns successfully" Jan 13 21:24:38.853290 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 13 21:24:39.747226 kubelet[2563]: E0113 21:24:39.745443 2563 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-fgpcm" podUID="69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc" Jan 13 21:24:40.593262 kubelet[2563]: I0113 21:24:40.591365 2563 setters.go:600] "Node became not ready" node="ci-4081-3-0-23124d1c691ead31c35f.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T21:24:40Z","lastTransitionTime":"2025-01-13T21:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 21:24:41.592111 systemd[1]: run-containerd-runc-k8s.io-c2802e9fd039345a49de2a8da45539ff0a30d418fd2fba13c793b570f7e5abb6-runc.ZNdNLl.mount: Deactivated successfully. 
Jan 13 21:24:41.745779 kubelet[2563]: E0113 21:24:41.745241 2563 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-fgpcm" podUID="69b4e1e2-68f7-4b21-ae1c-a80eb24c85fc" Jan 13 21:24:42.277447 systemd-networkd[1374]: lxc_health: Link UP Jan 13 21:24:42.294028 systemd-networkd[1374]: lxc_health: Gained carrier Jan 13 21:24:42.370948 kubelet[2563]: I0113 21:24:42.370861 2563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8d8bs" podStartSLOduration=9.370835701 podStartE2EDuration="9.370835701s" podCreationTimestamp="2025-01-13 21:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:24:39.230152071 +0000 UTC m=+121.647303957" watchObservedRunningTime="2025-01-13 21:24:42.370835701 +0000 UTC m=+124.787987575" Jan 13 21:24:43.585525 systemd-networkd[1374]: lxc_health: Gained IPv6LL Jan 13 21:24:45.866359 ntpd[1433]: Listen normally on 14 lxc_health [fe80::f022:91ff:fecc:7ceb%14]:123 Jan 13 21:24:45.867044 ntpd[1433]: 13 Jan 21:24:45 ntpd[1433]: Listen normally on 14 lxc_health [fe80::f022:91ff:fecc:7ceb%14]:123 Jan 13 21:24:48.413458 systemd[1]: run-containerd-runc-k8s.io-c2802e9fd039345a49de2a8da45539ff0a30d418fd2fba13c793b570f7e5abb6-runc.VEWqQC.mount: Deactivated successfully. Jan 13 21:24:50.678588 sshd[4472]: pam_unix(sshd:session): session closed for user core Jan 13 21:24:50.683855 systemd[1]: sshd@29-10.128.0.40:22-147.75.109.163:42574.service: Deactivated successfully. Jan 13 21:24:50.686920 systemd[1]: session-29.scope: Deactivated successfully. Jan 13 21:24:50.689168 systemd-logind[1447]: Session 29 logged out. Waiting for processes to exit. Jan 13 21:24:50.691364 systemd-logind[1447]: Removed session 29.