Dec 13 02:14:19.088309 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:14:19.088350 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:14:19.088368 kernel: BIOS-provided physical RAM map: Dec 13 02:14:19.088380 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 02:14:19.090627 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 02:14:19.090651 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 02:14:19.090675 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 02:14:19.090829 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 02:14:19.090844 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable Dec 13 02:14:19.090858 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data Dec 13 02:14:19.090871 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable Dec 13 02:14:19.090885 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 13 02:14:19.090899 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 02:14:19.091049 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 02:14:19.091072 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 02:14:19.091087 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 02:14:19.091102 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] 
usable Dec 13 02:14:19.091252 kernel: NX (Execute Disable) protection: active Dec 13 02:14:19.091268 kernel: efi: EFI v2.70 by EDK II Dec 13 02:14:19.091284 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018 Dec 13 02:14:19.091298 kernel: random: crng init done Dec 13 02:14:19.091329 kernel: SMBIOS 2.4 present. Dec 13 02:14:19.091467 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 02:14:19.091482 kernel: Hypervisor detected: KVM Dec 13 02:14:19.091495 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:14:19.091509 kernel: kvm-clock: cpu 0, msr 18419b001, primary cpu clock Dec 13 02:14:19.091524 kernel: kvm-clock: using sched offset of 12903714524 cycles Dec 13 02:14:19.091540 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:14:19.091555 kernel: tsc: Detected 2299.998 MHz processor Dec 13 02:14:19.091571 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:14:19.091588 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:14:19.091603 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 02:14:19.091624 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:14:19.091639 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 02:14:19.091655 kernel: Using GB pages for direct mapping Dec 13 02:14:19.091671 kernel: Secure boot disabled Dec 13 02:14:19.091687 kernel: ACPI: Early table checksum verification disabled Dec 13 02:14:19.091702 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 02:14:19.091718 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 02:14:19.091734 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 02:14:19.091759 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google 
GOOGDSDT 00000001 GOOG 00000001) Dec 13 02:14:19.091776 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 02:14:19.091793 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 02:14:19.091809 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 02:14:19.091826 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 02:14:19.091843 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 02:14:19.091863 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 02:14:19.091879 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 02:14:19.091895 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 02:14:19.091910 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 02:14:19.091925 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 02:14:19.091942 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 02:14:19.091957 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 02:14:19.091973 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 02:14:19.091989 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 02:14:19.092010 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 02:14:19.092026 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 02:14:19.092043 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 02:14:19.092059 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 02:14:19.092076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 02:14:19.092092 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 02:14:19.092109 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000-0x21fffffff] Dec 13 02:14:19.092135 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 02:14:19.092152 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 02:14:19.092173 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 02:14:19.092190 kernel: Zone ranges: Dec 13 02:14:19.092207 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:14:19.092223 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 02:14:19.092240 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 02:14:19.092257 kernel: Movable zone start for each node Dec 13 02:14:19.092273 kernel: Early memory node ranges Dec 13 02:14:19.092290 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 02:14:19.092307 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 02:14:19.092327 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff] Dec 13 02:14:19.092344 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff] Dec 13 02:14:19.092360 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 02:14:19.092376 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 02:14:19.092435 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 02:14:19.092453 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:14:19.092469 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 02:14:19.092486 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 02:14:19.092503 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 13 02:14:19.092525 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 02:14:19.092542 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 02:14:19.092558 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 02:14:19.092575 kernel: ACPI: LAPIC_NMI 
(acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:14:19.092591 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 02:14:19.092607 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:14:19.092624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:14:19.092641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:14:19.092658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:14:19.092679 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:14:19.092695 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 02:14:19.092712 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 02:14:19.092728 kernel: Booting paravirtualized kernel on KVM Dec 13 02:14:19.092745 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:14:19.092762 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 02:14:19.092779 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 02:14:19.092796 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 02:14:19.092812 kernel: pcpu-alloc: [0] 0 1 Dec 13 02:14:19.092834 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:14:19.092850 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:14:19.092867 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1932270 Dec 13 02:14:19.092883 kernel: Policy zone: Normal Dec 13 02:14:19.092902 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:14:19.092920 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:14:19.092936 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 02:14:19.092953 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 02:14:19.092970 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:14:19.092992 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 344876K reserved, 0K cma-reserved) Dec 13 02:14:19.093009 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 02:14:19.093026 kernel: Kernel/User page tables isolation: enabled Dec 13 02:14:19.093042 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:14:19.093059 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:14:19.093076 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:14:19.093094 kernel: rcu: RCU event tracing is enabled. Dec 13 02:14:19.093111 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 02:14:19.093139 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:14:19.093169 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:14:19.093187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 02:14:19.093209 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 02:14:19.093227 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 02:14:19.093242 kernel: Console: colour dummy device 80x25 Dec 13 02:14:19.093260 kernel: printk: console [ttyS0] enabled Dec 13 02:14:19.093277 kernel: ACPI: Core revision 20210730 Dec 13 02:14:19.093295 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:14:19.093313 kernel: x2apic enabled Dec 13 02:14:19.093335 kernel: Switched APIC routing to physical x2apic. Dec 13 02:14:19.093353 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 02:14:19.093370 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 02:14:19.093403 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Dec 13 02:14:19.093431 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 02:14:19.093449 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 02:14:19.093467 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:14:19.093489 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 02:14:19.093507 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 02:14:19.093524 kernel: Spectre V2 : Mitigation: IBRS Dec 13 02:14:19.093542 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:14:19.093559 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:14:19.093576 kernel: RETBleed: Mitigation: IBRS Dec 13 02:14:19.093594 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 02:14:19.093611 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Dec 13 02:14:19.093628 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled 
via prctl and seccomp Dec 13 02:14:19.093650 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 02:14:19.093667 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 02:14:19.093685 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:14:19.093704 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:14:19.093722 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:14:19.093739 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:14:19.093756 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 02:14:19.093773 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:14:19.093791 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:14:19.093812 kernel: LSM: Security Framework initializing Dec 13 02:14:19.093829 kernel: SELinux: Initializing. Dec 13 02:14:19.093846 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:14:19.093864 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:14:19.093881 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 02:14:19.093899 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 02:14:19.093917 kernel: signal: max sigframe size: 1776 Dec 13 02:14:19.093934 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:14:19.093951 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 02:14:19.093973 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:14:19.093991 kernel: x86: Booting SMP configuration: Dec 13 02:14:19.094008 kernel: .... node #0, CPUs: #1 Dec 13 02:14:19.094026 kernel: kvm-clock: cpu 1, msr 18419b041, secondary cpu clock Dec 13 02:14:19.094044 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 02:14:19.094063 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 02:14:19.094081 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 02:14:19.094099 kernel: smpboot: Max logical packages: 1 Dec 13 02:14:19.094128 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 02:14:19.094149 kernel: devtmpfs: initialized Dec 13 02:14:19.094166 kernel: x86/mm: Memory block size: 128MB Dec 13 02:14:19.094185 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 02:14:19.094202 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:14:19.094220 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 02:14:19.094237 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:14:19.094253 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:14:19.094270 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:14:19.094293 kernel: audit: type=2000 audit(1734056057.864:1): state=initialized audit_enabled=0 res=1 Dec 13 02:14:19.094310 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:14:19.094327 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:14:19.094345 kernel: cpuidle: using governor menu Dec 13 02:14:19.094363 kernel: ACPI: bus type PCI registered Dec 13 02:14:19.094381 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:14:19.099494 kernel: dca service started, version 1.12.1 Dec 13 02:14:19.099519 kernel: PCI: Using configuration type 1 for base access Dec 13 02:14:19.099537 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 02:14:19.099561 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:14:19.099578 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:14:19.099595 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:14:19.099613 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:14:19.099630 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:14:19.099647 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:14:19.099665 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:14:19.099682 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:14:19.099699 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:14:19.099720 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 02:14:19.099737 kernel: ACPI: Interpreter enabled Dec 13 02:14:19.099754 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 02:14:19.099772 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:14:19.099789 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:14:19.099806 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 02:14:19.099823 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:14:19.100069 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:14:19.100259 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 02:14:19.100282 kernel: PCI host bridge to bus 0000:00 Dec 13 02:14:19.100699 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:14:19.101146 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 02:14:19.109093 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 02:14:19.109290 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 02:14:19.109458 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:14:19.109650 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 02:14:19.109836 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 02:14:19.110011 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 02:14:19.110195 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 02:14:19.110376 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 02:14:19.110563 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 02:14:19.110727 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 02:14:19.112839 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 02:14:19.113040 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 02:14:19.113235 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 02:14:19.113448 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 02:14:19.113634 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 02:14:19.113815 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 02:14:19.113844 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 02:14:19.113861 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:14:19.113877 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:14:19.113894 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:14:19.113911 kernel: ACPI: PCI: 
Interrupt link LNKS configured for IRQ 9 Dec 13 02:14:19.113927 kernel: iommu: Default domain type: Translated Dec 13 02:14:19.113943 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:14:19.113959 kernel: vgaarb: loaded Dec 13 02:14:19.113976 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:14:19.113997 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:14:19.114015 kernel: PTP clock support registered Dec 13 02:14:19.114033 kernel: Registered efivars operations Dec 13 02:14:19.114050 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:14:19.114066 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:14:19.114081 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 02:14:19.114096 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 02:14:19.114119 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff] Dec 13 02:14:19.114137 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 02:14:19.114162 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 02:14:19.114176 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:14:19.116060 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:14:19.116080 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:14:19.116098 kernel: pnp: PnP ACPI init Dec 13 02:14:19.116125 kernel: pnp: PnP ACPI: found 7 devices Dec 13 02:14:19.116144 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:14:19.116162 kernel: NET: Registered PF_INET protocol family Dec 13 02:14:19.116180 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 02:14:19.116205 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 02:14:19.116223 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:14:19.116241 kernel: TCP established hash table 
entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 02:14:19.116259 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 02:14:19.116277 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 02:14:19.116295 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 02:14:19.116313 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 02:14:19.116330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:14:19.116352 kernel: NET: Registered PF_XDP protocol family Dec 13 02:14:19.116545 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:14:19.116701 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:14:19.116850 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:14:19.116995 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 02:14:19.117175 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 02:14:19.117200 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:14:19.117224 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 02:14:19.117242 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 02:14:19.117259 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 02:14:19.117278 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 02:14:19.117296 kernel: clocksource: Switched to clocksource tsc Dec 13 02:14:19.117315 kernel: Initialise system trusted keyrings Dec 13 02:14:19.117332 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 02:14:19.117351 kernel: Key type asymmetric registered Dec 13 02:14:19.117368 kernel: Asymmetric key parser 'x509' registered Dec 13 02:14:19.117629 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:14:19.117654 
kernel: io scheduler mq-deadline registered Dec 13 02:14:19.117672 kernel: io scheduler kyber registered Dec 13 02:14:19.117691 kernel: io scheduler bfq registered Dec 13 02:14:19.117709 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:14:19.117861 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 02:14:19.118313 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 02:14:19.118340 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 02:14:19.118662 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 02:14:19.118697 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 02:14:19.118872 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 02:14:19.118895 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:14:19.118913 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:14:19.118931 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 02:14:19.118948 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 02:14:19.118965 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 02:14:19.119154 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 02:14:19.119184 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:14:19.119202 kernel: i8042: Warning: Keylock active Dec 13 02:14:19.119219 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:14:19.119236 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:14:19.119418 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 02:14:19.119571 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 02:14:19.119731 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:14:18 UTC (1734056058) Dec 13 02:14:19.119894 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 02:14:19.119924 kernel: intel_pstate: CPU model 
not supported Dec 13 02:14:19.119943 kernel: pstore: Registered efi as persistent store backend Dec 13 02:14:19.119960 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:14:19.119978 kernel: Segment Routing with IPv6 Dec 13 02:14:19.119996 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:14:19.120013 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:14:19.120030 kernel: Key type dns_resolver registered Dec 13 02:14:19.120047 kernel: IPI shorthand broadcast: enabled Dec 13 02:14:19.120063 kernel: sched_clock: Marking stable (762019642, 167291620)->(979748734, -50437472) Dec 13 02:14:19.120083 kernel: registered taskstats version 1 Dec 13 02:14:19.120100 kernel: Loading compiled-in X.509 certificates Dec 13 02:14:19.120128 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:14:19.120146 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:14:19.120163 kernel: Key type .fscrypt registered Dec 13 02:14:19.120179 kernel: Key type fscrypt-provisioning registered Dec 13 02:14:19.120196 kernel: pstore: Using crash dump compression: deflate Dec 13 02:14:19.120212 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:14:19.120228 kernel: ima: No architecture policies found Dec 13 02:14:19.120248 kernel: clk: Disabling unused clocks Dec 13 02:14:19.120264 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 02:14:19.120281 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:14:19.120297 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:14:19.120314 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:14:19.120330 kernel: Run /init as init process Dec 13 02:14:19.120346 kernel: with arguments: Dec 13 02:14:19.120363 kernel: /init Dec 13 02:14:19.120378 kernel: with environment: Dec 13 02:14:19.120423 kernel: HOME=/ Dec 13 02:14:19.120440 kernel: 
TERM=linux Dec 13 02:14:19.120456 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:14:19.120477 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:14:19.120498 systemd[1]: Detected virtualization kvm. Dec 13 02:14:19.120515 systemd[1]: Detected architecture x86-64. Dec 13 02:14:19.120532 systemd[1]: Running in initrd. Dec 13 02:14:19.120552 systemd[1]: No hostname configured, using default hostname. Dec 13 02:14:19.120569 systemd[1]: Hostname set to . Dec 13 02:14:19.120587 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:14:19.120604 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:14:19.120621 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:14:19.120638 systemd[1]: Reached target cryptsetup.target. Dec 13 02:14:19.120655 systemd[1]: Reached target paths.target. Dec 13 02:14:19.120671 systemd[1]: Reached target slices.target. Dec 13 02:14:19.120692 systemd[1]: Reached target swap.target. Dec 13 02:14:19.120708 systemd[1]: Reached target timers.target. Dec 13 02:14:19.120726 systemd[1]: Listening on iscsid.socket. Dec 13 02:14:19.120744 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:14:19.120761 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:14:19.120778 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:14:19.120795 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:14:19.120812 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:14:19.120832 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:14:19.120850 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:14:19.120885 systemd[1]: Reached target sockets.target. 
Dec 13 02:14:19.120906 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:14:19.120924 systemd[1]: Finished network-cleanup.service.
Dec 13 02:14:19.120942 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:14:19.120963 systemd[1]: Starting systemd-journald.service...
Dec 13 02:14:19.120981 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:14:19.120999 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:14:19.121016 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:14:19.121034 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:14:19.121052 kernel: audit: type=1130 audit(1734056059.093:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.121069 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:14:19.121088 kernel: audit: type=1130 audit(1734056059.100:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.121105 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:14:19.121132 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:14:19.121156 systemd-journald[189]: Journal started
Dec 13 02:14:19.121244 systemd-journald[189]: Runtime Journal (/run/log/journal/70d5ffd0398a20d917c72b2407f1e970) is 8.0M, max 148.8M, 140.8M free.
Dec 13 02:14:19.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.128353 kernel: audit: type=1130 audit(1734056059.120:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.128432 systemd[1]: Started systemd-journald.service.
Dec 13 02:14:19.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.145327 kernel: audit: type=1130 audit(1734056059.132:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.129318 systemd-modules-load[190]: Inserted module 'overlay'
Dec 13 02:14:19.139448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:14:19.145087 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:14:19.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.153464 kernel: audit: type=1130 audit(1734056059.142:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.183518 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:14:19.197542 kernel: audit: type=1130 audit(1734056059.186:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.193882 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:14:19.201604 systemd-resolved[191]: Positive Trust Anchors:
Dec 13 02:14:19.201917 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:14:19.201981 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:14:19.219681 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:14:19.208670 systemd-resolved[191]: Defaulting to hostname 'linux'.
Dec 13 02:14:19.213030 systemd[1]: Started systemd-resolved.service.
Dec 13 02:14:19.227522 kernel: Bridge firewalling registered
Dec 13 02:14:19.225131 systemd-modules-load[190]: Inserted module 'br_netfilter'
Dec 13 02:14:19.231516 dracut-cmdline[206]: dracut-dracut-053
Dec 13 02:14:19.231516 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:14:19.248661 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:14:19.260197 kernel: audit: type=1130 audit(1734056059.247:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.260237 kernel: SCSI subsystem initialized
Dec 13 02:14:19.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.281225 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:14:19.281308 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:14:19.284427 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:14:19.288427 systemd-modules-load[190]: Inserted module 'dm_multipath'
Dec 13 02:14:19.289844 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:14:19.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.302778 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:14:19.314532 kernel: audit: type=1130 audit(1734056059.300:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.315619 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:14:19.327578 kernel: audit: type=1130 audit(1734056059.318:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.345433 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:14:19.366515 kernel: iscsi: registered transport (tcp)
Dec 13 02:14:19.393709 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:14:19.393796 kernel: QLogic iSCSI HBA Driver
Dec 13 02:14:19.439579 systemd[1]: Finished dracut-cmdline.service.
Dec 13 02:14:19.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.441827 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 02:14:19.499454 kernel: raid6: avx2x4 gen() 18196 MB/s
Dec 13 02:14:19.516432 kernel: raid6: avx2x4 xor() 7863 MB/s
Dec 13 02:14:19.533434 kernel: raid6: avx2x2 gen() 18139 MB/s
Dec 13 02:14:19.550441 kernel: raid6: avx2x2 xor() 18240 MB/s
Dec 13 02:14:19.567465 kernel: raid6: avx2x1 gen() 13694 MB/s
Dec 13 02:14:19.584447 kernel: raid6: avx2x1 xor() 15788 MB/s
Dec 13 02:14:19.601472 kernel: raid6: sse2x4 gen() 10968 MB/s
Dec 13 02:14:19.618434 kernel: raid6: sse2x4 xor() 6610 MB/s
Dec 13 02:14:19.635434 kernel: raid6: sse2x2 gen() 12005 MB/s
Dec 13 02:14:19.652434 kernel: raid6: sse2x2 xor() 7397 MB/s
Dec 13 02:14:19.669447 kernel: raid6: sse2x1 gen() 10139 MB/s
Dec 13 02:14:19.687080 kernel: raid6: sse2x1 xor() 5157 MB/s
Dec 13 02:14:19.687118 kernel: raid6: using algorithm avx2x4 gen() 18196 MB/s
Dec 13 02:14:19.687141 kernel: raid6: .... xor() 7863 MB/s, rmw enabled
Dec 13 02:14:19.687872 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 02:14:19.703434 kernel: xor: automatically using best checksumming function   avx
Dec 13 02:14:19.809440 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:14:19.820985 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:14:19.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.820000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:14:19.820000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:14:19.822644 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:14:19.840114 systemd-udevd[389]: Using default interface naming scheme 'v252'.
Dec 13 02:14:19.847309 systemd[1]: Started systemd-udevd.service.
Dec 13 02:14:19.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.849847 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:14:19.868527 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation
Dec 13 02:14:19.904532 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:14:19.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:19.910280 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:14:19.977097 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:14:19.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:20.051419 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:14:20.085175 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:14:20.085287 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:14:20.097440 kernel: scsi host0: Virtio SCSI HBA
Dec 13 02:14:20.105416 kernel: scsi 0:0:1:0: Direct-Access     Google   PersistentDisk   1    PQ: 0 ANSI: 6
Dec 13 02:14:20.202511 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Dec 13 02:14:20.221174 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Dec 13 02:14:20.221433 kernel: sd 0:0:1:0: [sda] Write Protect is off
Dec 13 02:14:20.221658 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Dec 13 02:14:20.221861 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 13 02:14:20.222094 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:14:20.222119 kernel: GPT:17805311 != 25165823
Dec 13 02:14:20.222142 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:14:20.222163 kernel: GPT:17805311 != 25165823
Dec 13 02:14:20.222184 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:14:20.222203 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:20.222224 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Dec 13 02:14:20.271412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:14:20.295670 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (429)
Dec 13 02:14:20.293847 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:14:20.305539 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:14:20.328556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:14:20.349535 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:14:20.367805 systemd[1]: Starting disk-uuid.service...
Dec 13 02:14:20.383915 disk-uuid[504]: Primary Header is updated.
Dec 13 02:14:20.383915 disk-uuid[504]: Secondary Entries is updated.
Dec 13 02:14:20.383915 disk-uuid[504]: Secondary Header is updated.
Dec 13 02:14:20.414513 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:20.443421 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:20.457443 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:21.454420 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 02:14:21.454829 disk-uuid[505]: The operation has completed successfully.
Dec 13 02:14:21.520323 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:14:21.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:21.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:21.520478 systemd[1]: Finished disk-uuid.service.
Dec 13 02:14:21.535075 systemd[1]: Starting verity-setup.service...
Dec 13 02:14:21.563424 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:14:21.638604 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:14:21.650308 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:14:21.662872 systemd[1]: Finished verity-setup.service.
Dec 13 02:14:21.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:21.753445 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:14:21.753539 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:14:21.753932 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 02:14:21.808695 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:14:21.808737 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:14:21.808767 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:14:21.808789 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:14:21.754859 systemd[1]: Starting ignition-setup.service...
Dec 13 02:14:21.767781 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:14:21.826769 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:14:21.837348 systemd[1]: Finished ignition-setup.service.
Dec 13 02:14:21.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:21.856655 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:14:21.890424 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:14:21.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:21.898000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:14:21.900547 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:14:21.933177 systemd-networkd[679]: lo: Link UP
Dec 13 02:14:21.933192 systemd-networkd[679]: lo: Gained carrier
Dec 13 02:14:21.934681 systemd-networkd[679]: Enumeration completed
Dec 13 02:14:21.934836 systemd[1]: Started systemd-networkd.service.
Dec 13 02:14:21.935229 systemd-networkd[679]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:14:21.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:21.937186 systemd-networkd[679]: eth0: Link UP
Dec 13 02:14:21.937194 systemd-networkd[679]: eth0: Gained carrier
Dec 13 02:14:21.947511 systemd-networkd[679]: eth0: DHCPv4 address 10.128.0.98/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 02:14:21.968844 systemd[1]: Reached target network.target.
Dec 13 02:14:21.980846 systemd[1]: Starting iscsiuio.service...
Dec 13 02:14:22.033716 systemd[1]: Started iscsiuio.service.
Dec 13 02:14:22.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.042007 systemd[1]: Starting iscsid.service...
Dec 13 02:14:22.061689 iscsid[689]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:14:22.061689 iscsid[689]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Dec 13 02:14:22.061689 iscsid[689]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 02:14:22.061689 iscsid[689]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:14:22.061689 iscsid[689]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:14:22.061689 iscsid[689]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:14:22.061689 iscsid[689]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:14:22.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.054682 systemd[1]: Started iscsid.service.
Dec 13 02:14:22.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.143104 ignition[653]: Ignition 2.14.0
Dec 13 02:14:22.069717 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:14:22.143118 ignition[653]: Stage: fetch-offline
Dec 13 02:14:22.141920 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:14:22.143205 ignition[653]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:22.172904 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:14:22.143247 ignition[653]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:22.191884 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:14:22.164282 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:22.206547 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:14:22.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.164660 ignition[653]: parsed url from cmdline: ""
Dec 13 02:14:22.219585 systemd[1]: Reached target remote-fs.target.
Dec 13 02:14:22.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.164669 ignition[653]: no config URL provided
Dec 13 02:14:22.234708 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:14:22.164688 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:14:22.255707 systemd[1]: Starting ignition-fetch.service...
Dec 13 02:14:22.164703 ignition[653]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:14:22.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.272248 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:14:22.164716 ignition[653]: failed to fetch config: resource requires networking
Dec 13 02:14:22.288875 unknown[703]: fetched base config from "system"
Dec 13 02:14:22.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.164896 ignition[653]: Ignition finished successfully
Dec 13 02:14:22.288898 unknown[703]: fetched base config from "system"
Dec 13 02:14:22.269706 ignition[703]: Ignition 2.14.0
Dec 13 02:14:22.288914 unknown[703]: fetched user config from "gcp"
Dec 13 02:14:22.269717 ignition[703]: Stage: fetch
Dec 13 02:14:22.308026 systemd[1]: Finished ignition-fetch.service.
Dec 13 02:14:22.269849 ignition[703]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:22.325900 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:14:22.269875 ignition[703]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:22.365019 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:14:22.276769 ignition[703]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:22.372957 systemd[1]: Starting ignition-disks.service...
Dec 13 02:14:22.276968 ignition[703]: parsed url from cmdline: ""
Dec 13 02:14:22.395653 systemd[1]: Finished ignition-disks.service.
Dec 13 02:14:22.276975 ignition[703]: no config URL provided
Dec 13 02:14:22.414907 systemd[1]: Reached target initrd-root-device.target.
Dec 13 02:14:22.276984 ignition[703]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:14:22.429705 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:14:22.276996 ignition[703]: no config at "/usr/lib/ignition/user.ign"
Dec 13 02:14:22.444687 systemd[1]: Reached target local-fs.target.
Dec 13 02:14:22.277034 ignition[703]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Dec 13 02:14:22.451793 systemd[1]: Reached target sysinit.target.
Dec 13 02:14:22.282866 ignition[703]: GET result: OK
Dec 13 02:14:22.464824 systemd[1]: Reached target basic.target.
Dec 13 02:14:22.282947 ignition[703]: parsing config with SHA512: 0b2acb0a640ae42a0330ef94854990f0adb3d66a6742ee82366bd54baa164392188f8e840dd7a9a6820eaddfa458c287ae4a891d9e8659992489f129255e01b8
Dec 13 02:14:22.479142 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 02:14:22.290365 ignition[703]: fetch: fetch complete
Dec 13 02:14:22.290381 ignition[703]: fetch: fetch passed
Dec 13 02:14:22.290560 ignition[703]: Ignition finished successfully
Dec 13 02:14:22.340055 ignition[709]: Ignition 2.14.0
Dec 13 02:14:22.340064 ignition[709]: Stage: kargs
Dec 13 02:14:22.340298 ignition[709]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:22.340340 ignition[709]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:22.348010 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:22.349459 ignition[709]: kargs: kargs passed
Dec 13 02:14:22.349509 ignition[709]: Ignition finished successfully
Dec 13 02:14:22.384457 ignition[715]: Ignition 2.14.0
Dec 13 02:14:22.384466 ignition[715]: Stage: disks
Dec 13 02:14:22.384611 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:22.384644 ignition[715]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:22.392972 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:22.394507 ignition[715]: disks: disks passed
Dec 13 02:14:22.394563 ignition[715]: Ignition finished successfully
Dec 13 02:14:22.517364 systemd-fsck[723]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks
Dec 13 02:14:22.697354 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 02:14:22.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.706778 systemd[1]: Mounting sysroot.mount...
Dec 13 02:14:22.736218 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 02:14:22.736134 systemd[1]: Mounted sysroot.mount.
Dec 13 02:14:22.743792 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 02:14:22.765728 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 02:14:22.782041 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 02:14:22.782127 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 02:14:22.782171 systemd[1]: Reached target ignition-diskful.target.
Dec 13 02:14:22.802922 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 02:14:22.828623 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:14:22.886576 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (729)
Dec 13 02:14:22.886618 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:14:22.886642 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:14:22.886666 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:14:22.886687 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:14:22.873061 systemd[1]: Starting initrd-setup-root.service...
Dec 13 02:14:22.895758 initrd-setup-root[752]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 02:14:22.905538 initrd-setup-root[760]: cut: /sysroot/etc/group: No such file or directory
Dec 13 02:14:22.916670 initrd-setup-root[768]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 02:14:22.926983 initrd-setup-root[776]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 02:14:22.947413 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:14:22.982441 systemd[1]: Finished initrd-setup-root.service.
Dec 13 02:14:22.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:22.983726 systemd[1]: Starting ignition-mount.service...
Dec 13 02:14:23.005586 systemd[1]: Starting sysroot-boot.service...
Dec 13 02:14:23.020940 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:14:23.021088 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 02:14:23.048683 ignition[795]: INFO : Ignition 2.14.0
Dec 13 02:14:23.048683 ignition[795]: INFO : Stage: mount
Dec 13 02:14:23.048683 ignition[795]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:23.048683 ignition[795]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:23.133716 kernel: kauditd_printk_skb: 24 callbacks suppressed
Dec 13 02:14:23.133746 kernel: audit: type=1130 audit(1734056063.055:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:23.133763 kernel: audit: type=1130 audit(1734056063.100:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:23.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:23.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:23.044650 systemd[1]: Finished sysroot-boot.service.
Dec 13 02:14:23.166598 ignition[795]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:23.166598 ignition[795]: INFO : mount: mount passed
Dec 13 02:14:23.166598 ignition[795]: INFO : Ignition finished successfully
Dec 13 02:14:23.230548 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (804)
Dec 13 02:14:23.230593 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:14:23.230618 kernel: BTRFS info (device sda6): using free space tree
Dec 13 02:14:23.230640 kernel: BTRFS info (device sda6): has skinny extents
Dec 13 02:14:23.230663 kernel: BTRFS info (device sda6): enabling ssd optimizations
Dec 13 02:14:23.056997 systemd[1]: Finished ignition-mount.service.
Dec 13 02:14:23.103116 systemd[1]: Starting ignition-files.service...
Dec 13 02:14:23.163729 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 02:14:23.231200 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 02:14:23.271533 ignition[823]: INFO : Ignition 2.14.0
Dec 13 02:14:23.271533 ignition[823]: INFO : Stage: files
Dec 13 02:14:23.271533 ignition[823]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:23.271533 ignition[823]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:23.271533 ignition[823]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:23.271533 ignition[823]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 02:14:23.347554 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (828)
Dec 13 02:14:23.284121 unknown[823]: wrote ssh authorized keys file for user: core
Dec 13 02:14:23.356581 ignition[823]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 02:14:23.356581 ignition[823]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem28253775"
Dec 13 02:14:23.356581 ignition[823]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem28253775": device or resource busy
Dec 13 02:14:23.356581 ignition[823]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem28253775", trying btrfs: device or resource busy
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem28253775"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem28253775"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem28253775"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem28253775"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:14:23.356581 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 02:14:23.611569 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Dec 13 02:14:23.626092 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 02:14:23.642552 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:14:23.642552 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 02:14:23.655614 systemd-networkd[679]: eth0: Gained IPv6LL
Dec 13 02:14:23.916347 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Dec 13 02:14:24.113998 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh"
Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1996734534"
Dec 13 02:14:24.129542 ignition[823]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1996734534": device or resource busy
Dec 13 02:14:24.129542 ignition[823]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1996734534", trying btrfs: device or resource busy
Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1996734534"
Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1996734534"
Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem1996734534"
Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem1996734534"
Dec 13 02:14:24.129542
ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:14:24.129542 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(12): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3883949024" Dec 13 02:14:24.374591 ignition[823]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3883949024": device or resource busy Dec 13 02:14:24.374591 ignition[823]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3883949024", trying btrfs: device or resource busy Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3883949024" Dec 13 02:14:24.374591 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3883949024" Dec 13 02:14:24.130739 systemd[1]: mnt-oem1996734534.mount: Deactivated successfully. 
Dec 13 02:14:24.631629 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem3883949024"
Dec 13 02:14:24.631629 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem3883949024"
Dec 13 02:14:24.631629 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service"
Dec 13 02:14:24.631629 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:14:24.631629 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 02:14:24.631629 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK
Dec 13 02:14:24.150105 systemd[1]: mnt-oem3883949024.mount: Deactivated successfully.
Dec 13 02:14:24.940788 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 02:14:24.940788 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 02:14:24.975571 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem46196176"
Dec 13 02:14:24.975571 ignition[823]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem46196176": device or resource busy
Dec 13 02:14:24.975571 ignition[823]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem46196176", trying btrfs: device or resource busy
Dec 13 02:14:24.975571 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem46196176"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem46196176"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem46196176"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem46196176"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: op(1d): [started] processing unit "oem-gce.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: op(1d): [finished] processing unit "oem-gce.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: op(1f): [started] processing unit "prepare-helm.service"
Dec 13 02:14:24.975571 ignition[823]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:14:25.443577 kernel: audit: type=1130 audit(1734056065.013:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.443742 kernel: audit: type=1130 audit(1734056065.101:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.443761 kernel: audit: type=1130 audit(1734056065.167:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.443784 kernel: audit: type=1131 audit(1734056065.167:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.443799 kernel: audit: type=1130 audit(1734056065.276:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.443813 kernel: audit: type=1131 audit(1734056065.276:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.443828 kernel: audit: type=1130 audit(1734056065.405:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:24.959171 systemd[1]: mnt-oem46196176.mount: Deactivated successfully.
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(21): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(22): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(22): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce.service"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce.service"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce-enable-oslogin.service"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 02:14:25.459698 ignition[823]: INFO : files: files passed
Dec 13 02:14:25.459698 ignition[823]: INFO : Ignition finished successfully
Dec 13 02:14:25.745581 kernel: audit: type=1131 audit(1734056065.546:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:24.979905 systemd[1]: Finished ignition-files.service.
Dec 13 02:14:25.025061 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 02:14:25.777610 initrd-setup-root-after-ignition[846]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 02:14:25.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.053760 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 02:14:25.054831 systemd[1]: Starting ignition-quench.service...
Dec 13 02:14:25.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.081039 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 02:14:25.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.103133 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 02:14:25.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.103275 systemd[1]: Finished ignition-quench.service.
Dec 13 02:14:25.168923 systemd[1]: Reached target ignition-complete.target.
Dec 13 02:14:25.915588 iscsid[689]: iscsid shutting down.
Dec 13 02:14:25.223768 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 02:14:25.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.939862 ignition[861]: INFO : Ignition 2.14.0
Dec 13 02:14:25.939862 ignition[861]: INFO : Stage: umount
Dec 13 02:14:25.939862 ignition[861]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:14:25.939862 ignition[861]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Dec 13 02:14:25.939862 ignition[861]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Dec 13 02:14:25.939862 ignition[861]: INFO : umount: umount passed
Dec 13 02:14:25.939862 ignition[861]: INFO : Ignition finished successfully
Dec 13 02:14:25.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:26.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:26.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.265614 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 02:14:26.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.265734 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 02:14:26.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.277943 systemd[1]: Reached target initrd-fs.target.
Dec 13 02:14:25.334782 systemd[1]: Reached target initrd.target.
Dec 13 02:14:25.351927 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 02:14:25.353276 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 02:14:25.376981 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 02:14:25.408174 systemd[1]: Starting initrd-cleanup.service...
Dec 13 02:14:26.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.460332 systemd[1]: Stopped target nss-lookup.target.
Dec 13 02:14:26.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.486875 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 02:14:25.506950 systemd[1]: Stopped target timers.target.
Dec 13 02:14:26.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.526834 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 02:14:26.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:26.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.527032 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 02:14:25.548073 systemd[1]: Stopped target initrd.target.
Dec 13 02:14:25.581940 systemd[1]: Stopped target basic.target.
Dec 13 02:14:25.600917 systemd[1]: Stopped target ignition-complete.target.
Dec 13 02:14:25.619017 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 02:14:26.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.636960 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 02:14:26.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:26.307000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 02:14:25.656946 systemd[1]: Stopped target remote-fs.target.
Dec 13 02:14:25.676939 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 02:14:25.697959 systemd[1]: Stopped target sysinit.target.
Dec 13 02:14:26.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.719055 systemd[1]: Stopped target local-fs.target.
Dec 13 02:14:26.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.731936 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 02:14:26.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.753869 systemd[1]: Stopped target swap.target.
Dec 13 02:14:25.767793 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 02:14:26.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.768005 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 02:14:25.786006 systemd[1]: Stopped target cryptsetup.target.
Dec 13 02:14:25.807842 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 02:14:26.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.808040 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 02:14:26.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.833067 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 02:14:26.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.833292 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 02:14:25.849952 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 02:14:26.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.850122 systemd[1]: Stopped ignition-files.service.
Dec 13 02:14:26.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.867286 systemd[1]: Stopping ignition-mount.service...
Dec 13 02:14:26.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.886652 systemd[1]: Stopping iscsid.service...
Dec 13 02:14:26.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.894795 systemd[1]: Stopping sysroot-boot.service...
Dec 13 02:14:26.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:26.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:25.907660 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 02:14:25.907913 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 02:14:25.931879 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 02:14:25.932114 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 02:14:25.951368 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 02:14:25.952548 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 02:14:26.665563 systemd-journald[189]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:14:25.952661 systemd[1]: Stopped iscsid.service.
Dec 13 02:14:25.955284 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 02:14:25.955404 systemd[1]: Stopped ignition-mount.service.
Dec 13 02:14:25.968211 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 02:14:25.968313 systemd[1]: Stopped sysroot-boot.service.
Dec 13 02:14:25.985345 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 02:14:25.985516 systemd[1]: Stopped ignition-disks.service.
Dec 13 02:14:26.026676 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 02:14:26.026749 systemd[1]: Stopped ignition-kargs.service.
Dec 13 02:14:26.033832 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 02:14:26.033893 systemd[1]: Stopped ignition-fetch.service.
Dec 13 02:14:26.055737 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 02:14:26.055807 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 02:14:26.071735 systemd[1]: Stopped target paths.target.
Dec 13 02:14:26.085659 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 02:14:26.090505 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 02:14:26.100545 systemd[1]: Stopped target slices.target.
Dec 13 02:14:26.113546 systemd[1]: Stopped target sockets.target.
Dec 13 02:14:26.130614 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 02:14:26.130701 systemd[1]: Closed iscsid.socket.
Dec 13 02:14:26.145580 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 02:14:26.145677 systemd[1]: Stopped ignition-setup.service.
Dec 13 02:14:26.160655 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 02:14:26.160734 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 02:14:26.175761 systemd[1]: Stopping iscsiuio.service...
Dec 13 02:14:26.189975 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 02:14:26.190086 systemd[1]: Stopped iscsiuio.service.
Dec 13 02:14:26.204982 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 02:14:26.205092 systemd[1]: Finished initrd-cleanup.service.
Dec 13 02:14:26.220643 systemd[1]: Stopped target network.target.
Dec 13 02:14:26.235624 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 02:14:26.235708 systemd[1]: Closed iscsiuio.socket.
Dec 13 02:14:26.249825 systemd[1]: Stopping systemd-networkd.service...
Dec 13 02:14:26.253468 systemd-networkd[679]: eth0: DHCPv6 lease lost
Dec 13 02:14:26.672000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 02:14:26.256881 systemd[1]: Stopping systemd-resolved.service...
Dec 13 02:14:26.276908 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 02:14:26.277036 systemd[1]: Stopped systemd-resolved.service.
Dec 13 02:14:26.293269 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 02:14:26.293422 systemd[1]: Stopped systemd-networkd.service.
Dec 13 02:14:26.309303 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 02:14:26.309346 systemd[1]: Closed systemd-networkd.socket.
Dec 13 02:14:26.326541 systemd[1]: Stopping network-cleanup.service...
Dec 13 02:14:26.332716 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 02:14:26.332789 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 02:14:26.345843 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:14:26.345905 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:14:26.367767 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 02:14:26.367827 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 02:14:26.382819 systemd[1]: Stopping systemd-udevd.service...
Dec 13 02:14:26.399233 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 02:14:26.399891 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 02:14:26.400035 systemd[1]: Stopped systemd-udevd.service.
Dec 13 02:14:26.406067 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 02:14:26.406163 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 02:14:26.427619 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 02:14:26.427687 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 02:14:26.442674 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 02:14:26.442744 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 02:14:26.449830 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 02:14:26.449892 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 02:14:26.472755 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 02:14:26.472826 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 02:14:26.488832 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 02:14:26.505520 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 02:14:26.505732 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 02:14:26.521831 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 02:14:26.521894 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 02:14:26.537630 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 02:14:26.537709 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 02:14:26.553984 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 02:14:26.554701 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 02:14:26.554813 systemd[1]: Stopped network-cleanup.service.
Dec 13 02:14:26.568009 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 02:14:26.568124 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 02:14:26.582922 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 02:14:26.600665 systemd[1]: Starting initrd-switch-root.service...
Dec 13 02:14:26.625641 systemd[1]: Switching root.
Dec 13 02:14:26.675768 systemd-journald[189]: Journal stopped
Dec 13 02:14:31.369764 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 02:14:31.369913 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 02:14:31.369939 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:14:31.369970 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:14:31.369992 kernel: SELinux: policy capability open_perms=1
Dec 13 02:14:31.370013 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:14:31.370036 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:14:31.370060 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:14:31.370086 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:14:31.370108 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:14:31.370137 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:14:31.370162 systemd[1]: Successfully loaded SELinux policy in 106.875ms.
Dec 13 02:14:31.370209 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.785ms.
Dec 13 02:14:31.370235 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:14:31.370260 systemd[1]: Detected virtualization kvm.
Dec 13 02:14:31.370284 systemd[1]: Detected architecture x86-64.
Dec 13 02:14:31.370308 systemd[1]: Detected first boot.
Dec 13 02:14:31.370332 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:14:31.370357 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:14:31.370384 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:14:31.370432 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:14:31.370457 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:14:31.370482 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:14:31.370518 kernel: kauditd_printk_skb: 51 callbacks suppressed
Dec 13 02:14:31.370541 kernel: audit: type=1334 audit(1734056070.440:89): prog-id=12 op=LOAD
Dec 13 02:14:31.370563 kernel: audit: type=1334 audit(1734056070.440:90): prog-id=3 op=UNLOAD
Dec 13 02:14:31.370587 kernel: audit: type=1334 audit(1734056070.446:91): prog-id=13 op=LOAD
Dec 13 02:14:31.370619 kernel: audit: type=1334 audit(1734056070.452:92): prog-id=14 op=LOAD
Dec 13 02:14:31.370644 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:14:31.370671 kernel: audit: type=1334 audit(1734056070.453:93): prog-id=4 op=UNLOAD
Dec 13 02:14:31.370693 kernel: audit: type=1334 audit(1734056070.453:94): prog-id=5 op=UNLOAD
Dec 13 02:14:31.370716 kernel: audit: type=1131 audit(1734056070.455:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.370739 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:14:31.370762 kernel: audit: type=1334 audit(1734056070.523:96): prog-id=12 op=UNLOAD
Dec 13 02:14:31.370785 kernel: audit: type=1130 audit(1734056070.539:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.370812 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:14:31.370836 kernel: audit: type=1131 audit(1734056070.539:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.370860 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:14:31.370885 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:14:31.370917 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 02:14:31.370942 systemd[1]: Created slice system-getty.slice.
Dec 13 02:14:31.370965 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:14:31.370993 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:14:31.371014 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:14:31.371037 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:14:31.371059 systemd[1]: Created slice user.slice.
Dec 13 02:14:31.371082 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:14:31.371105 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:14:31.371128 systemd[1]: Set up automount boot.automount.
Dec 13 02:14:31.371160 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:14:31.371187 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 02:14:31.371214 systemd[1]: Stopped target initrd-fs.target.
Dec 13 02:14:31.371237 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 02:14:31.371259 systemd[1]: Reached target integritysetup.target.
Dec 13 02:14:31.371283 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:14:31.371306 systemd[1]: Reached target remote-fs.target.
Dec 13 02:14:31.371329 systemd[1]: Reached target slices.target.
Dec 13 02:14:31.371353 systemd[1]: Reached target swap.target.
Dec 13 02:14:31.371374 systemd[1]: Reached target torcx.target.
Dec 13 02:14:31.371472 systemd[1]: Reached target veritysetup.target.
Dec 13 02:14:31.371502 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:14:31.371534 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:14:31.371558 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:14:31.371581 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:14:31.371604 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:14:31.371628 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:14:31.371651 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:14:31.371674 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:14:31.371698 systemd[1]: Mounting media.mount...
Dec 13 02:14:31.371722 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:14:31.371749 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:14:31.371772 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:14:31.371796 systemd[1]: Mounting tmp.mount...
Dec 13 02:14:31.371819 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:14:31.371841 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:14:31.371865 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:14:31.371889 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:14:31.371924 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:14:31.371948 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:14:31.371977 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:14:31.372035 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:14:31.372062 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:14:31.372087 kernel: fuse: init (API version 7.34)
Dec 13 02:14:31.372110 kernel: loop: module loaded
Dec 13 02:14:31.372134 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:14:31.372158 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:14:31.372181 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:14:31.372204 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:14:31.372234 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:14:31.372259 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:14:31.372284 systemd[1]: Starting systemd-journald.service...
Dec 13 02:14:31.372309 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:14:31.372332 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:14:31.372363 systemd-journald[985]: Journal started
Dec 13 02:14:31.372477 systemd-journald[985]: Runtime Journal (/run/log/journal/70d5ffd0398a20d917c72b2407f1e970) is 8.0M, max 148.8M, 140.8M free.
Dec 13 02:14:26.956000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:14:27.110000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:14:27.110000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:14:27.110000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:14:27.110000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:14:27.110000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:14:27.110000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:14:27.275000 audit[894]: AVC avc: denied { associate } for pid=894 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:14:27.275000 audit[894]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=877 pid=894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:14:27.275000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:14:27.285000 audit[894]: AVC avc: denied { associate } for pid=894 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:14:27.285000 audit[894]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=877 pid=894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:14:27.285000 audit: CWD cwd="/"
Dec 13 02:14:27.285000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:27.285000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:27.285000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:14:30.440000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:14:30.440000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:14:30.446000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:14:30.452000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:14:30.453000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:14:30.453000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:14:30.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:30.523000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:14:30.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:30.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.324000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:14:31.324000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:14:31.324000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:14:31.324000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:14:31.324000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:14:31.366000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:14:31.366000 audit[985]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe475210c0 a2=4000 a3=7ffe4752115c items=0 ppid=1 pid=985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:14:31.366000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:14:27.270534 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:14:30.439401 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:14:27.271537 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:14:30.456167 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:14:27.271573 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:14:27.271628 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:14:27.271648 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:14:27.271705 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 02:14:27.271730 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 02:14:27.272033 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 02:14:27.272104 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:14:27.272130 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:14:27.274805 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 02:14:27.274875 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 02:14:27.274912 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 02:14:27.274940 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 02:14:27.274974 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 02:14:27.275001 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 02:14:29.815620 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:29Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:14:29.815928 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:29Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:14:29.816077 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:29Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:14:29.816308 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:29Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:14:29.816365 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:29Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 02:14:29.816459 /usr/lib/systemd/system-generators/torcx-generator[894]: time="2024-12-13T02:14:29Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 02:14:31.384447 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:14:31.399449 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:14:31.415590 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:14:31.415721 systemd[1]: Stopped verity-setup.service.
Dec 13 02:14:31.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.439586 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:14:31.449433 systemd[1]: Started systemd-journald.service.
Dec 13 02:14:31.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.458996 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:14:31.466797 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:14:31.473755 systemd[1]: Mounted media.mount.
Dec 13 02:14:31.480722 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:14:31.490739 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:14:31.499762 systemd[1]: Mounted tmp.mount.
Dec 13 02:14:31.506855 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:14:31.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.515942 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:14:31.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.524951 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:14:31.525177 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:14:31.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.534007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:14:31.534237 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:14:31.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.542947 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:14:31.543153 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:14:31.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.551944 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:14:31.552150 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:14:31.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.560917 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:14:31.561125 systemd[1]: Finished modprobe@fuse.service.
Dec 13 02:14:31.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.569876 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:14:31.570081 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:14:31.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.578872 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:14:31.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.587869 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:14:31.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.596932 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:14:31.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.605958 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:14:31.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.615308 systemd[1]: Reached target network-pre.target.
Dec 13 02:14:31.625227 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 02:14:31.635037 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:14:31.642575 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:14:31.645462 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 02:14:31.654453 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 02:14:31.662602 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:14:31.664517 systemd[1]: Starting systemd-random-seed.service...
Dec 13 02:14:31.669177 systemd-journald[985]: Time spent on flushing to /var/log/journal/70d5ffd0398a20d917c72b2407f1e970 is 79.959ms for 1159 entries.
Dec 13 02:14:31.669177 systemd-journald[985]: System Journal (/var/log/journal/70d5ffd0398a20d917c72b2407f1e970) is 8.0M, max 584.8M, 576.8M free.
Dec 13 02:14:31.774169 systemd-journald[985]: Received client request to flush runtime journal.
Dec 13 02:14:31.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.678296 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:14:31.682007 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:14:31.691659 systemd[1]: Starting systemd-sysusers.service...
Dec 13 02:14:31.700239 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 02:14:31.775202 udevadm[999]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 02:14:31.710863 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 02:14:31.719737 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 02:14:31.728939 systemd[1]: Finished systemd-random-seed.service.
Dec 13 02:14:31.741278 systemd[1]: Reached target first-boot-complete.target.
Dec 13 02:14:31.755194 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:14:31.772920 systemd[1]: Finished systemd-sysusers.service.
Dec 13 02:14:31.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.782163 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 02:14:31.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:31.793505 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:14:31.853661 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:14:31.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.375111 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 02:14:32.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.382000 audit: BPF prog-id=18 op=LOAD
Dec 13 02:14:32.383000 audit: BPF prog-id=19 op=LOAD
Dec 13 02:14:32.383000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 02:14:32.383000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 02:14:32.385384 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:14:32.408076 systemd-udevd[1004]: Using default interface naming scheme 'v252'.
Dec 13 02:14:32.452185 systemd[1]: Started systemd-udevd.service.
Dec 13 02:14:32.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.460000 audit: BPF prog-id=20 op=LOAD
Dec 13 02:14:32.463406 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:14:32.477000 audit: BPF prog-id=21 op=LOAD
Dec 13 02:14:32.478000 audit: BPF prog-id=22 op=LOAD
Dec 13 02:14:32.478000 audit: BPF prog-id=23 op=LOAD
Dec 13 02:14:32.480615 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:14:32.530211 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 02:14:32.543141 systemd[1]: Started systemd-userdbd.service.
Dec 13 02:14:32.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.666974 systemd-networkd[1017]: lo: Link UP
Dec 13 02:14:32.667590 systemd-networkd[1017]: lo: Gained carrier
Dec 13 02:14:32.668516 systemd-networkd[1017]: Enumeration completed
Dec 13 02:14:32.668670 systemd[1]: Started systemd-networkd.service.
Dec 13 02:14:32.670329 systemd-networkd[1017]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:14:32.672698 systemd-networkd[1017]: eth0: Link UP
Dec 13 02:14:32.672881 systemd-networkd[1017]: eth0: Gained carrier
Dec 13 02:14:32.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.686438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 02:14:32.690590 systemd-networkd[1017]: eth0: DHCPv4 address 10.128.0.98/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 02:14:32.699138 kernel: ACPI: button: Power Button [PWRF]
Dec 13 02:14:32.699252 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Dec 13 02:14:32.718419 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 02:14:32.745000 audit[1018]: AVC avc: denied { confidentiality } for pid=1018 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 02:14:32.756428 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1036)
Dec 13 02:14:32.745000 audit[1018]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=564c6856e240 a1=337fc a2=7fae7df11bc5 a3=5 items=110 ppid=1004 pid=1018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:14:32.745000 audit: CWD cwd="/"
Dec 13 02:14:32.745000 audit: PATH item=0 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=1 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=2 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=3 name=(null) inode=14344 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=4 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=5 name=(null) inode=14345 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=6 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=7 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=8 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=9 name=(null) inode=14347 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=10 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=11 name=(null) inode=14348 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=12 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=13 name=(null) inode=14349 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=14 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=15 name=(null) inode=14350 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=16 name=(null) inode=14346 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=17 name=(null) inode=14351 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=18 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=19 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=20 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=21 name=(null) inode=14353 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=22 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=23 name=(null) inode=14354 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=24 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=25 name=(null) inode=14355 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=26 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=27 name=(null) inode=14356 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=28 name=(null) inode=14352 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=29 name=(null) inode=14357 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=30 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=31 name=(null) inode=14358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=32 name=(null) inode=14358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=33 name=(null) inode=14359 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=34 name=(null) inode=14358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=35 name=(null) inode=14360 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=36 name=(null) inode=14358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=37 name=(null) inode=14361 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=38 name=(null) inode=14358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=39 name=(null) inode=14362 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=40 name=(null) inode=14358 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=41 name=(null) inode=14363 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=42 name=(null) inode=14343 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=43 name=(null) inode=14364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=44 name=(null) inode=14364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=45 name=(null) inode=14365 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=46 name=(null) inode=14364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=47 name=(null) inode=14366 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=48 name=(null) inode=14364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=49 name=(null) inode=14367 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=50 name=(null) inode=14364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=51 name=(null) inode=14368 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=52 name=(null) inode=14364 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=53 name=(null) inode=14369 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=54 name=(null) inode=1044 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=55 name=(null) inode=14370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=56 name=(null) inode=14370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=57 name=(null) inode=14371 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=58 name=(null) inode=14370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=59 name=(null) inode=14372 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=60 name=(null) inode=14370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=61 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=62 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=63 name=(null) inode=14374 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=64 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=65 name=(null) inode=14375 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=66 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=67 name=(null) inode=14376 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=68 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=69 name=(null) inode=14377 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=70 name=(null) inode=14373 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=71 name=(null) inode=14378 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=72 name=(null) inode=14370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=73 name=(null) inode=14379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=74 name=(null) inode=14379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=75 name=(null) inode=14380 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=76 name=(null) inode=14379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=77 name=(null) inode=14381 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=78 name=(null) inode=14379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=79 name=(null) inode=14382 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=80 name=(null) inode=14379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=81 name=(null) inode=14383 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=82 name=(null) inode=14379 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=83 name=(null) inode=14384 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=84 name=(null) inode=14370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=85 name=(null) inode=14385 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=86 name=(null) inode=14385 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=87 name=(null) inode=14386 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=88 name=(null) inode=14385 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=89 name=(null) inode=14387 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=90 name=(null) inode=14385 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=91 name=(null) inode=14388 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=92 name=(null) inode=14385 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=93 name=(null) inode=14389 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=94 name=(null) inode=14385 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=95 name=(null) inode=14390 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=96 name=(null) inode=14370 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=97 name=(null) inode=14391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=98 name=(null) inode=14391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=99 name=(null) inode=14392 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=100 name=(null) inode=14391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=101 name=(null) inode=14393 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=102 name=(null) inode=14391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=103 name=(null) inode=14394 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=104 name=(null) inode=14391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=105 name=(null) inode=14395 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=106 name=(null) inode=14391 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=107 name=(null) inode=14396 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PATH item=109 name=(null) inode=14397 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:14:32.745000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 02:14:32.819428 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Dec 13 02:14:32.845082 kernel: EDAC MC: Ver: 3.0.0
Dec 13 02:14:32.845130 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 02:14:32.861212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:14:32.869465 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 02:14:32.886949 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 02:14:32.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.897132 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 02:14:32.925802 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:14:32.954797 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 02:14:32.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:32.963792 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:14:32.974557 systemd[1]: Starting lvm2-activation.service...
Dec 13 02:14:32.980992 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 02:14:33.006822 systemd[1]: Finished lvm2-activation.service.
Dec 13 02:14:33.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.016824 systemd[1]: Reached target local-fs-pre.target.
Dec 13 02:14:33.025609 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 02:14:33.025674 systemd[1]: Reached target local-fs.target.
Dec 13 02:14:33.034605 systemd[1]: Reached target machines.target.
Dec 13 02:14:33.045248 systemd[1]: Starting ldconfig.service...
Dec 13 02:14:33.053522 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:14:33.053634 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:14:33.056057 systemd[1]: Starting systemd-boot-update.service...
Dec 13 02:14:33.065469 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 02:14:33.077867 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 02:14:33.081194 systemd[1]: Starting systemd-sysext.service...
Dec 13 02:14:33.081984 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1044 (bootctl)
Dec 13 02:14:33.085600 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 02:14:33.107123 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 02:14:33.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.120857 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 02:14:33.129180 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 02:14:33.129472 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 02:14:33.154428 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 02:14:33.274578 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31)
Dec 13 02:14:33.274578 systemd-fsck[1056]: /dev/sda1: 789 files, 119291/258078 clusters
Dec 13 02:14:33.278848 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 02:14:33.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.290816 systemd[1]: Mounting boot.mount...
Dec 13 02:14:33.330098 systemd[1]: Mounted boot.mount.
Dec 13 02:14:33.354684 systemd[1]: Finished systemd-boot-update.service.
Dec 13 02:14:33.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.500606 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 02:14:33.567432 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 02:14:33.598093 (sd-sysext)[1060]: Using extensions 'kubernetes'.
Dec 13 02:14:33.598774 (sd-sysext)[1060]: Merged extensions into '/usr'.
Dec 13 02:14:33.626462 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:14:33.632573 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 02:14:33.641114 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:14:33.643387 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:14:33.652277 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:14:33.662246 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:14:33.669734 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:14:33.670015 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:14:33.670299 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:14:33.673541 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:14:33.675294 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 02:14:33.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.684127 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 02:14:33.692174 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:14:33.692556 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:14:33.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.702327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:14:33.702559 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:14:33.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.702967 ldconfig[1043]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:14:33.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.711314 systemd[1]: Finished ldconfig.service.
Dec 13 02:14:33.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.719156 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:14:33.719355 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:14:33.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.728509 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:14:33.728681 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:14:33.730222 systemd[1]: Finished systemd-sysext.service.
Dec 13 02:14:33.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:14:33.740271 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:14:33.749006 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 02:14:33.764107 systemd[1]: Reloading.
Dec 13 02:14:33.767910 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:14:33.769141 systemd-networkd[1017]: eth0: Gained IPv6LL Dec 13 02:14:33.773802 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:14:33.778177 systemd-tmpfiles[1067]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:14:33.895796 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2024-12-13T02:14:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:14:33.895843 /usr/lib/systemd/system-generators/torcx-generator[1089]: time="2024-12-13T02:14:33Z" level=info msg="torcx already run" Dec 13 02:14:34.045609 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:14:34.045637 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:14:34.069110 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 02:14:34.143000 audit: BPF prog-id=24 op=LOAD Dec 13 02:14:34.143000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:14:34.143000 audit: BPF prog-id=25 op=LOAD Dec 13 02:14:34.143000 audit: BPF prog-id=26 op=LOAD Dec 13 02:14:34.143000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:14:34.143000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:14:34.144000 audit: BPF prog-id=27 op=LOAD Dec 13 02:14:34.144000 audit: BPF prog-id=28 op=LOAD Dec 13 02:14:34.144000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:14:34.144000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:14:34.148000 audit: BPF prog-id=29 op=LOAD Dec 13 02:14:34.148000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:14:34.150000 audit: BPF prog-id=30 op=LOAD Dec 13 02:14:34.150000 audit: BPF prog-id=15 op=UNLOAD Dec 13 02:14:34.150000 audit: BPF prog-id=31 op=LOAD Dec 13 02:14:34.150000 audit: BPF prog-id=32 op=LOAD Dec 13 02:14:34.150000 audit: BPF prog-id=16 op=UNLOAD Dec 13 02:14:34.150000 audit: BPF prog-id=17 op=UNLOAD Dec 13 02:14:34.155311 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:14:34.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:34.170157 systemd[1]: Starting audit-rules.service... Dec 13 02:14:34.179157 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:14:34.189449 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:14:34.200528 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:14:34.208000 audit: BPF prog-id=33 op=LOAD Dec 13 02:14:34.211225 systemd[1]: Starting systemd-resolved.service... Dec 13 02:14:34.218000 audit: BPF prog-id=34 op=LOAD Dec 13 02:14:34.221509 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:14:34.230581 systemd[1]: Starting systemd-update-utmp.service... 
Dec 13 02:14:34.238372 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:14:34.240000 audit[1158]: SYSTEM_BOOT pid=1158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:14:34.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:34.247057 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:14:34.247302 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:14:34.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:34.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:14:34.259878 systemd[1]: Finished systemd-journal-catalog-update.service. 
Dec 13 02:14:34.258000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:14:34.258000 audit[1161]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe58d4dda0 a2=420 a3=0 items=0 ppid=1131 pid=1161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:14:34.260660 augenrules[1161]: No rules Dec 13 02:14:34.258000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:14:34.269891 systemd[1]: Finished audit-rules.service. Dec 13 02:14:34.285788 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:34.286369 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:14:34.289125 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:14:34.299423 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:14:34.308542 systemd[1]: Starting modprobe@loop.service... Dec 13 02:14:34.317738 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:14:34.324112 enable-oslogin[1169]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:14:34.326633 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:14:34.326896 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:14:34.329357 systemd[1]: Starting systemd-update-done.service... Dec 13 02:14:34.336521 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 02:14:34.336735 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:34.339639 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:14:34.350404 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:14:34.350624 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:14:34.360344 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:14:34.360584 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:14:34.370325 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:14:34.370555 systemd[1]: Finished modprobe@loop.service. Dec 13 02:14:34.379563 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:14:34.379822 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:14:34.389366 systemd[1]: Finished systemd-update-done.service. Dec 13 02:14:34.395330 systemd-resolved[1148]: Positive Trust Anchors: Dec 13 02:14:34.395804 systemd-resolved[1148]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:14:34.395984 systemd-resolved[1148]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:14:34.400803 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:14:34.401011 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 02:14:34.404020 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:34.404473 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:14:34.407388 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:14:34.415833 systemd-resolved[1148]: Defaulting to hostname 'linux'. Dec 13 02:14:34.416686 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:14:34.425808 systemd[1]: Starting modprobe@loop.service... Dec 13 02:14:34.435006 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:14:34.442211 enable-oslogin[1174]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:14:34.443639 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:14:34.443908 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:14:34.444114 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:14:34.444420 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:14:34.446303 systemd[1]: Started systemd-resolved.service. Dec 13 02:14:34.456072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:14:34.456562 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:14:34.465377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:14:34.465660 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:14:34.475482 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:14:34.475765 systemd[1]: Finished modprobe@loop.service. 
Dec 13 02:14:34.482150 systemd-timesyncd[1155]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 02:14:34.482238 systemd-timesyncd[1155]: Initial clock synchronization to Fri 2024-12-13 02:14:34.517997 UTC. Dec 13 02:14:34.484956 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:14:34.494364 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:14:34.494671 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:14:34.504461 systemd[1]: Reached target network.target. Dec 13 02:14:34.512706 systemd[1]: Reached target nss-lookup.target. Dec 13 02:14:34.521750 systemd[1]: Reached target time-set.target. Dec 13 02:14:34.530699 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:14:34.530969 systemd[1]: Reached target sysinit.target. Dec 13 02:14:34.539870 systemd[1]: Started motdgen.path. Dec 13 02:14:34.546811 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:14:34.557022 systemd[1]: Started logrotate.timer. Dec 13 02:14:34.564919 systemd[1]: Started mdadm.timer. Dec 13 02:14:34.571756 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:14:34.580659 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:14:34.580884 systemd[1]: Reached target paths.target. Dec 13 02:14:34.587720 systemd[1]: Reached target timers.target. Dec 13 02:14:34.595195 systemd[1]: Listening on dbus.socket. Dec 13 02:14:34.604430 systemd[1]: Starting docker.socket... Dec 13 02:14:34.615918 systemd[1]: Listening on sshd.socket. Dec 13 02:14:34.622843 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 02:14:34.623082 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:14:34.626035 systemd[1]: Listening on docker.socket. Dec 13 02:14:34.636269 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:14:34.636486 systemd[1]: Reached target sockets.target. Dec 13 02:14:34.644740 systemd[1]: Reached target basic.target. Dec 13 02:14:34.651710 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:14:34.651892 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:14:34.653791 systemd[1]: Starting containerd.service... Dec 13 02:14:34.662774 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:14:34.674322 systemd[1]: Starting dbus.service... Dec 13 02:14:34.682645 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:14:34.691532 systemd[1]: Starting extend-filesystems.service... Dec 13 02:14:34.698551 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:14:34.710268 jq[1181]: false Dec 13 02:14:34.701025 systemd[1]: Starting modprobe@drm.service... Dec 13 02:14:34.710598 systemd[1]: Starting motdgen.service... Dec 13 02:14:34.719756 systemd[1]: Starting prepare-helm.service... Dec 13 02:14:34.728712 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:14:34.737745 systemd[1]: Starting sshd-keygen.service... Dec 13 02:14:34.746818 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:14:34.755574 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 02:14:34.755895 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 02:14:34.756768 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:14:34.758975 systemd[1]: Starting update-engine.service... Dec 13 02:14:34.769900 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:14:34.777110 jq[1203]: true Dec 13 02:14:34.788254 extend-filesystems[1182]: Found loop1 Dec 13 02:14:34.788254 extend-filesystems[1182]: Found sda Dec 13 02:14:34.788254 extend-filesystems[1182]: Found sda1 Dec 13 02:14:34.788254 extend-filesystems[1182]: Found sda2 Dec 13 02:14:34.788254 extend-filesystems[1182]: Found sda3 Dec 13 02:14:34.788254 extend-filesystems[1182]: Found usr Dec 13 02:14:34.788254 extend-filesystems[1182]: Found sda4 Dec 13 02:14:34.788254 extend-filesystems[1182]: Found sda6 Dec 13 02:14:34.788254 extend-filesystems[1182]: Found sda7 Dec 13 02:14:34.788254 extend-filesystems[1182]: Found sda9 Dec 13 02:14:34.788254 extend-filesystems[1182]: Checking size of /dev/sda9 Dec 13 02:14:34.784462 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:14:34.962285 update_engine[1201]: I1213 02:14:34.932806 1201 main.cc:92] Flatcar Update Engine starting Dec 13 02:14:34.962285 update_engine[1201]: I1213 02:14:34.945202 1201 update_check_scheduler.cc:74] Next update check in 2m36s Dec 13 02:14:34.863133 dbus-daemon[1180]: [system] SELinux support is enabled Dec 13 02:14:34.963410 extend-filesystems[1182]: Resized partition /dev/sda9 Dec 13 02:14:35.005004 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 02:14:34.785250 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
Dec 13 02:14:34.887174 dbus-daemon[1180]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1017 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:14:35.007600 extend-filesystems[1225]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:14:34.786143 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:14:35.016759 tar[1207]: linux-amd64/helm Dec 13 02:14:34.915356 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:14:34.786467 systemd[1]: Finished modprobe@drm.service. Dec 13 02:14:35.017747 jq[1209]: true Dec 13 02:14:34.795086 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:14:34.795367 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:14:34.808530 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:14:34.808791 systemd[1]: Finished motdgen.service. Dec 13 02:14:34.817143 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:14:34.831850 systemd[1]: Reached target network-online.target. Dec 13 02:14:34.841514 systemd[1]: Starting kubelet.service... Dec 13 02:14:34.853687 systemd[1]: Starting oem-gce.service... 
Dec 13 02:14:35.020793 mkfs.ext4[1229]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 02:14:35.020793 mkfs.ext4[1229]: Discarding device blocks: done Dec 13 02:14:35.020793 mkfs.ext4[1229]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 02:14:35.020793 mkfs.ext4[1229]: Filesystem UUID: e46e900e-296d-4fcf-af3c-33e51405a2c3 Dec 13 02:14:35.020793 mkfs.ext4[1229]: Superblock backups stored on blocks: Dec 13 02:14:35.020793 mkfs.ext4[1229]: 32768, 98304, 163840, 229376 Dec 13 02:14:35.020793 mkfs.ext4[1229]: Allocating group tables: done Dec 13 02:14:35.020793 mkfs.ext4[1229]: Writing inode tables: done Dec 13 02:14:35.020793 mkfs.ext4[1229]: Creating journal (8192 blocks): done Dec 13 02:14:35.020793 mkfs.ext4[1229]: Writing superblocks and filesystem accounting information: done Dec 13 02:14:34.877973 systemd[1]: Starting systemd-logind.service... Dec 13 02:14:34.881209 systemd[1]: Started dbus.service. Dec 13 02:14:34.896581 systemd[1]: Finished ensure-sysext.service. Dec 13 02:14:34.910754 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:14:34.910797 systemd[1]: Reached target system-config.target. Dec 13 02:14:34.915860 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:14:34.915900 systemd[1]: Reached target user-config.target. Dec 13 02:14:34.948798 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:14:34.969327 systemd[1]: Started update-engine.service. Dec 13 02:14:34.988622 systemd[1]: Started locksmithd.service. 
Dec 13 02:14:35.054680 umount[1246]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 02:14:35.071461 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 02:14:35.101381 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:14:35.107032 bash[1243]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:14:35.109880 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:14:35.118243 extend-filesystems[1225]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:14:35.118243 extend-filesystems[1225]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 02:14:35.118243 extend-filesystems[1225]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 02:14:35.162572 extend-filesystems[1182]: Resized filesystem in /dev/sda9 Dec 13 02:14:35.181601 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:14:35.120098 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Dec 13 02:14:35.181770 env[1210]: time="2024-12-13T02:14:35.160108393Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.132 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.147 INFO Fetch failed with 404: resource not found Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.147 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.149 INFO Fetch successful Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.149 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.150 INFO Fetch failed with 404: resource not found Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.150 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.150 INFO Fetch failed with 404: resource not found Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.150 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 02:14:35.182094 coreos-metadata[1179]: Dec 13 02:14:35.152 INFO Fetch successful Dec 13 02:14:35.120382 systemd[1]: Finished extend-filesystems.service. Dec 13 02:14:35.155091 unknown[1179]: wrote ssh authorized keys file for user: core Dec 13 02:14:35.230693 update-ssh-keys[1253]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:14:35.232083 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:14:35.240329 env[1210]: time="2024-12-13T02:14:35.240277496Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 02:14:35.240515 env[1210]: time="2024-12-13T02:14:35.240485247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244075145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244110691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244353418Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244373605Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244388540Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244423894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244545053Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244782608Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244940943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:14:35.245163 env[1210]: time="2024-12-13T02:14:35.244958545Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:14:35.245533 env[1210]: time="2024-12-13T02:14:35.245014179Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:14:35.245533 env[1210]: time="2024-12-13T02:14:35.245026696Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:14:35.259192 env[1210]: time="2024-12-13T02:14:35.259131400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:14:35.259416 env[1210]: time="2024-12-13T02:14:35.259369885Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:14:35.259597 env[1210]: time="2024-12-13T02:14:35.259551482Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:14:35.259886 env[1210]: time="2024-12-13T02:14:35.259857060Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:14:35.260068 env[1210]: time="2024-12-13T02:14:35.260042636Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:14:35.260214 env[1210]: time="2024-12-13T02:14:35.260190188Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1
Dec 13 02:14:35.260356 env[1210]: time="2024-12-13T02:14:35.260334584Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 02:14:35.260507 env[1210]: time="2024-12-13T02:14:35.260485247Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 02:14:35.260638 env[1210]: time="2024-12-13T02:14:35.260616329Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 02:14:35.260778 env[1210]: time="2024-12-13T02:14:35.260754124Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 02:14:35.260917 env[1210]: time="2024-12-13T02:14:35.260895509Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 02:14:35.261058 env[1210]: time="2024-12-13T02:14:35.261036250Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 02:14:35.261443 env[1210]: time="2024-12-13T02:14:35.261387769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 02:14:35.261751 env[1210]: time="2024-12-13T02:14:35.261727774Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 02:14:35.262499 env[1210]: time="2024-12-13T02:14:35.262458746Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 02:14:35.269261 env[1210]: time="2024-12-13T02:14:35.269202194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.269639 env[1210]: time="2024-12-13T02:14:35.269607397Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 02:14:35.281537 env[1210]: time="2024-12-13T02:14:35.281388187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.282074 env[1210]: time="2024-12-13T02:14:35.282047717Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.282216 env[1210]: time="2024-12-13T02:14:35.282195279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.282348 env[1210]: time="2024-12-13T02:14:35.282328272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.282492 env[1210]: time="2024-12-13T02:14:35.282471227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.282632 env[1210]: time="2024-12-13T02:14:35.282611519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.283066 env[1210]: time="2024-12-13T02:14:35.283040050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.283566 env[1210]: time="2024-12-13T02:14:35.283541080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.283734 env[1210]: time="2024-12-13T02:14:35.283713168Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 02:14:35.286529 env[1210]: time="2024-12-13T02:14:35.286476522Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.287369 env[1210]: time="2024-12-13T02:14:35.287335649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.287610 env[1210]: time="2024-12-13T02:14:35.287583557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.287774 env[1210]: time="2024-12-13T02:14:35.287751269Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 02:14:35.288030 env[1210]: time="2024-12-13T02:14:35.288002050Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 02:14:35.289528 systemd-logind[1214]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 02:14:35.289950 env[1210]: time="2024-12-13T02:14:35.289920926Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 02:14:35.290175 env[1210]: time="2024-12-13T02:14:35.290146298Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 02:14:35.290520 env[1210]: time="2024-12-13T02:14:35.290333857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 02:14:35.291186 env[1210]: time="2024-12-13T02:14:35.291058250Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.291475290Z" level=info msg="Connect containerd service"
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.291574466Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.293092414Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.293237169Z" level=info msg="Start subscribing containerd event"
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.293329980Z" level=info msg="Start recovering state"
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.293477863Z" level=info msg="Start event monitor"
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.293506101Z" level=info msg="Start snapshots syncer"
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.293522275Z" level=info msg="Start cni network conf syncer for default"
Dec 13 02:14:35.298986 env[1210]: time="2024-12-13T02:14:35.293554349Z" level=info msg="Start streaming server"
Dec 13 02:14:35.297774 systemd-logind[1214]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 02:14:35.297812 systemd-logind[1214]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 02:14:35.299886 systemd-logind[1214]: New seat seat0.
Dec 13 02:14:35.305191 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 02:14:35.305416 systemd[1]: Started systemd-hostnamed.service.
Dec 13 02:14:35.306140 dbus-daemon[1180]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1230 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 02:14:35.314371 env[1210]: time="2024-12-13T02:14:35.314316817Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 02:14:35.316970 systemd[1]: Started systemd-logind.service.
Dec 13 02:14:35.326495 systemd[1]: Starting polkit.service...
Dec 13 02:14:35.331440 env[1210]: time="2024-12-13T02:14:35.331367299Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 02:14:35.351056 systemd[1]: Started containerd.service.
Dec 13 02:14:35.351628 env[1210]: time="2024-12-13T02:14:35.351589299Z" level=info msg="containerd successfully booted in 0.196578s"
Dec 13 02:14:35.437903 polkitd[1260]: Started polkitd version 121
Dec 13 02:14:35.467170 polkitd[1260]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 02:14:35.467463 polkitd[1260]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 02:14:35.474590 polkitd[1260]: Finished loading, compiling and executing 2 rules
Dec 13 02:14:35.476319 dbus-daemon[1180]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 02:14:35.476563 systemd[1]: Started polkit.service.
Dec 13 02:14:35.477744 polkitd[1260]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 02:14:35.519560 systemd-hostnamed[1230]: Hostname set to (transient)
Dec 13 02:14:35.523845 systemd-resolved[1148]: System hostname changed to 'ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal'.
Dec 13 02:14:36.602090 tar[1207]: linux-amd64/LICENSE
Dec 13 02:14:36.606709 tar[1207]: linux-amd64/README.md
Dec 13 02:14:36.623265 systemd[1]: Finished prepare-helm.service.
Dec 13 02:14:37.116857 systemd[1]: Started kubelet.service.
Dec 13 02:14:38.121775 locksmithd[1242]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 02:14:38.439841 kubelet[1275]: E1213 02:14:38.439707 1275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:14:38.442590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:14:38.442820 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:14:38.443179 systemd[1]: kubelet.service: Consumed 1.459s CPU time.
Dec 13 02:14:41.370775 sshd_keygen[1205]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 02:14:41.413868 systemd[1]: Finished sshd-keygen.service.
Dec 13 02:14:41.425235 systemd[1]: Starting issuegen.service...
Dec 13 02:14:41.436925 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 02:14:41.437169 systemd[1]: Finished issuegen.service.
Dec 13 02:14:41.447186 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 02:14:41.460265 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 02:14:41.471098 systemd[1]: Started getty@tty1.service.
Dec 13 02:14:41.481134 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 02:14:41.489995 systemd[1]: Reached target getty.target.
Dec 13 02:14:41.691788 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully.
Dec 13 02:14:43.690518 kernel: loop2: detected capacity change from 0 to 2097152
Dec 13 02:14:43.713543 systemd-nspawn[1295]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img.
Dec 13 02:14:43.713543 systemd-nspawn[1295]: Press ^] three times within 1s to kill container.
Dec 13 02:14:43.730492 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:14:43.784598 systemd[1]: Created slice system-sshd.slice.
Dec 13 02:14:43.797905 systemd[1]: Started sshd@0-10.128.0.98:22-139.178.68.195:47090.service.
Dec 13 02:14:43.838683 systemd[1]: Started oem-gce.service.
Dec 13 02:14:43.846230 systemd[1]: Reached target multi-user.target.
Dec 13 02:14:43.856831 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 02:14:43.869863 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 02:14:43.870134 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 02:14:43.879745 systemd[1]: Startup finished in 1.038s (kernel) + 8.039s (initrd) + 17.046s (userspace) = 26.124s.
Dec 13 02:14:43.903438 systemd-nspawn[1295]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Dec 13 02:14:43.903438 systemd-nspawn[1295]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Dec 13 02:14:43.903706 systemd-nspawn[1295]: + /usr/bin/google_instance_setup
Dec 13 02:14:44.121195 sshd[1301]: Accepted publickey for core from 139.178.68.195 port 47090 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:14:44.124556 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:14:44.143107 systemd[1]: Created slice user-500.slice.
Dec 13 02:14:44.145320 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 02:14:44.159036 systemd-logind[1214]: New session 1 of user core.
Dec 13 02:14:44.167828 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 02:14:44.172352 systemd[1]: Starting user@500.service...
Dec 13 02:14:44.188679 (systemd)[1306]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:14:44.339990 systemd[1306]: Queued start job for default target default.target.
Dec 13 02:14:44.342257 systemd[1306]: Reached target paths.target.
Dec 13 02:14:44.342535 systemd[1306]: Reached target sockets.target.
Dec 13 02:14:44.342777 systemd[1306]: Reached target timers.target.
Dec 13 02:14:44.342924 systemd[1306]: Reached target basic.target.
Dec 13 02:14:44.343133 systemd[1306]: Reached target default.target.
Dec 13 02:14:44.343207 systemd[1306]: Startup finished in 140ms.
Dec 13 02:14:44.343222 systemd[1]: Started user@500.service.
Dec 13 02:14:44.344663 systemd[1]: Started session-1.scope.
Dec 13 02:14:44.571505 systemd[1]: Started sshd@1-10.128.0.98:22-139.178.68.195:47092.service.
Dec 13 02:14:44.638825 instance-setup[1304]: INFO Running google_set_multiqueue.
Dec 13 02:14:44.653589 instance-setup[1304]: INFO Set channels for eth0 to 2.
Dec 13 02:14:44.658000 instance-setup[1304]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Dec 13 02:14:44.659522 instance-setup[1304]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Dec 13 02:14:44.659963 instance-setup[1304]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Dec 13 02:14:44.661813 instance-setup[1304]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Dec 13 02:14:44.662317 instance-setup[1304]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Dec 13 02:14:44.663929 instance-setup[1304]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Dec 13 02:14:44.664593 instance-setup[1304]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Dec 13 02:14:44.666212 instance-setup[1304]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Dec 13 02:14:44.682754 instance-setup[1304]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Dec 13 02:14:44.683412 instance-setup[1304]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Dec 13 02:14:44.729707 systemd-nspawn[1295]: + /usr/bin/google_metadata_script_runner --script-type startup
Dec 13 02:14:44.872607 sshd[1317]: Accepted publickey for core from 139.178.68.195 port 47092 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:14:44.874322 sshd[1317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:14:44.882724 systemd[1]: Started session-2.scope.
Dec 13 02:14:44.885077 systemd-logind[1214]: New session 2 of user core.
Dec 13 02:14:45.088588 sshd[1317]: pam_unix(sshd:session): session closed for user core
Dec 13 02:14:45.093800 systemd-logind[1214]: Session 2 logged out. Waiting for processes to exit.
Dec 13 02:14:45.096492 systemd[1]: sshd@1-10.128.0.98:22-139.178.68.195:47092.service: Deactivated successfully.
Dec 13 02:14:45.097605 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 02:14:45.099939 systemd-logind[1214]: Removed session 2.
Dec 13 02:14:45.103235 startup-script[1347]: INFO Starting startup scripts.
Dec 13 02:14:45.116295 startup-script[1347]: INFO No startup scripts found in metadata.
Dec 13 02:14:45.116478 startup-script[1347]: INFO Finished running startup scripts.
Dec 13 02:14:45.136102 systemd[1]: Started sshd@2-10.128.0.98:22-139.178.68.195:47102.service.
Dec 13 02:14:45.159444 systemd-nspawn[1295]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Dec 13 02:14:45.159444 systemd-nspawn[1295]: + daemon_pids=()
Dec 13 02:14:45.159853 systemd-nspawn[1295]: + for d in accounts clock_skew network
Dec 13 02:14:45.159853 systemd-nspawn[1295]: + daemon_pids+=($!)
Dec 13 02:14:45.160004 systemd-nspawn[1295]: + for d in accounts clock_skew network
Dec 13 02:14:45.160216 systemd-nspawn[1295]: + daemon_pids+=($!)
Dec 13 02:14:45.160352 systemd-nspawn[1295]: + for d in accounts clock_skew network
Dec 13 02:14:45.160648 systemd-nspawn[1295]: + /usr/bin/google_accounts_daemon
Dec 13 02:14:45.160766 systemd-nspawn[1295]: + daemon_pids+=($!)
Dec 13 02:14:45.160909 systemd-nspawn[1295]: + NOTIFY_SOCKET=/run/systemd/notify
Dec 13 02:14:45.160984 systemd-nspawn[1295]: + /usr/bin/systemd-notify --ready
Dec 13 02:14:45.161753 systemd-nspawn[1295]: + /usr/bin/google_clock_skew_daemon
Dec 13 02:14:45.161853 systemd-nspawn[1295]: + /usr/bin/google_network_daemon
Dec 13 02:14:45.231635 systemd-nspawn[1295]: + wait -n 36 37 38
Dec 13 02:14:45.459657 sshd[1354]: Accepted publickey for core from 139.178.68.195 port 47102 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:14:45.461240 sshd[1354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:14:45.470228 systemd[1]: Started session-3.scope.
Dec 13 02:14:45.472874 systemd-logind[1214]: New session 3 of user core.
Dec 13 02:14:45.670700 sshd[1354]: pam_unix(sshd:session): session closed for user core
Dec 13 02:14:45.675058 systemd[1]: sshd@2-10.128.0.98:22-139.178.68.195:47102.service: Deactivated successfully.
Dec 13 02:14:45.676188 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 02:14:45.680092 systemd-logind[1214]: Session 3 logged out. Waiting for processes to exit.
Dec 13 02:14:45.681740 systemd-logind[1214]: Removed session 3.
Dec 13 02:14:45.716249 systemd[1]: Started sshd@3-10.128.0.98:22-139.178.68.195:47116.service.
Dec 13 02:14:45.881100 google-clock-skew[1357]: INFO Starting Google Clock Skew daemon.
Dec 13 02:14:45.910958 google-clock-skew[1357]: INFO Clock drift token has changed: 0.
Dec 13 02:14:45.930891 systemd-nspawn[1295]: hwclock: Cannot access the Hardware Clock via any known method.
Dec 13 02:14:45.931733 systemd-nspawn[1295]: hwclock: Use the --verbose option to see the details of our search for an access method.
Dec 13 02:14:45.932833 google-clock-skew[1357]: WARNING Failed to sync system time with hardware clock.
Dec 13 02:14:45.959577 groupadd[1373]: group added to /etc/group: name=google-sudoers, GID=1000
Dec 13 02:14:45.964104 groupadd[1373]: group added to /etc/gshadow: name=google-sudoers
Dec 13 02:14:45.975487 groupadd[1373]: new group: name=google-sudoers, GID=1000
Dec 13 02:14:45.993201 google-accounts[1356]: INFO Starting Google Accounts daemon.
Dec 13 02:14:45.999133 google-networking[1358]: INFO Starting Google Networking daemon.
Dec 13 02:14:46.026300 google-accounts[1356]: WARNING OS Login not installed.
Dec 13 02:14:46.027282 google-accounts[1356]: INFO Creating a new user account for 0.
Dec 13 02:14:46.030892 sshd[1366]: Accepted publickey for core from 139.178.68.195 port 47116 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:14:46.032631 sshd[1366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:14:46.040319 systemd[1]: Started session-4.scope.
Dec 13 02:14:46.042214 google-accounts[1356]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Dec 13 02:14:46.042477 systemd-logind[1214]: New session 4 of user core.
Dec 13 02:14:46.046529 systemd-nspawn[1295]: useradd: invalid user name '0': use --badname to ignore
Dec 13 02:14:46.244927 sshd[1366]: pam_unix(sshd:session): session closed for user core
Dec 13 02:14:46.249476 systemd[1]: sshd@3-10.128.0.98:22-139.178.68.195:47116.service: Deactivated successfully.
Dec 13 02:14:46.250523 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:14:46.251339 systemd-logind[1214]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:14:46.252532 systemd-logind[1214]: Removed session 4.
Dec 13 02:14:46.291150 systemd[1]: Started sshd@4-10.128.0.98:22-139.178.68.195:37628.service.
Dec 13 02:14:46.578388 sshd[1388]: Accepted publickey for core from 139.178.68.195 port 37628 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:14:46.579957 sshd[1388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:14:46.586919 systemd[1]: Started session-5.scope.
Dec 13 02:14:46.587542 systemd-logind[1214]: New session 5 of user core.
Dec 13 02:14:46.775198 sudo[1391]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:14:46.775658 sudo[1391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 02:14:46.809534 systemd[1]: Starting docker.service...
Dec 13 02:14:46.858803 env[1401]: time="2024-12-13T02:14:46.858676022Z" level=info msg="Starting up"
Dec 13 02:14:46.860884 env[1401]: time="2024-12-13T02:14:46.860839787Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 02:14:46.860884 env[1401]: time="2024-12-13T02:14:46.860871157Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 02:14:46.861090 env[1401]: time="2024-12-13T02:14:46.860917102Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 02:14:46.861090 env[1401]: time="2024-12-13T02:14:46.860934458Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 02:14:46.867641 env[1401]: time="2024-12-13T02:14:46.867608449Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 02:14:46.867808 env[1401]: time="2024-12-13T02:14:46.867784383Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 02:14:46.867922 env[1401]: time="2024-12-13T02:14:46.867899390Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 02:14:46.868019 env[1401]: time="2024-12-13T02:14:46.867990511Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 02:14:46.879666 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3286038493-merged.mount: Deactivated successfully.
Dec 13 02:14:46.920274 env[1401]: time="2024-12-13T02:14:46.920224417Z" level=info msg="Loading containers: start."
Dec 13 02:14:47.097441 kernel: Initializing XFRM netlink socket
Dec 13 02:14:47.146288 env[1401]: time="2024-12-13T02:14:47.143379541Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 02:14:47.227634 systemd-networkd[1017]: docker0: Link UP
Dec 13 02:14:47.248447 env[1401]: time="2024-12-13T02:14:47.248371709Z" level=info msg="Loading containers: done."
Dec 13 02:14:47.264708 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3698106850-merged.mount: Deactivated successfully.
Dec 13 02:14:47.268912 env[1401]: time="2024-12-13T02:14:47.268854818Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 02:14:47.269168 env[1401]: time="2024-12-13T02:14:47.269126504Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 02:14:47.269320 env[1401]: time="2024-12-13T02:14:47.269281626Z" level=info msg="Daemon has completed initialization"
Dec 13 02:14:47.292916 systemd[1]: Started docker.service.
Dec 13 02:14:47.305461 env[1401]: time="2024-12-13T02:14:47.305199107Z" level=info msg="API listen on /run/docker.sock"
Dec 13 02:14:48.341088 env[1210]: time="2024-12-13T02:14:48.341018085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 02:14:48.449214 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:14:48.449568 systemd[1]: Stopped kubelet.service.
Dec 13 02:14:48.449643 systemd[1]: kubelet.service: Consumed 1.459s CPU time.
Dec 13 02:14:48.452100 systemd[1]: Starting kubelet.service...
Dec 13 02:14:48.720538 systemd[1]: Started kubelet.service.
Dec 13 02:14:48.788007 kubelet[1527]: E1213 02:14:48.787961 1527 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:14:48.791726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:14:48.791957 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:14:49.011204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273068072.mount: Deactivated successfully.
Dec 13 02:14:50.747620 env[1210]: time="2024-12-13T02:14:50.747555128Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:50.750564 env[1210]: time="2024-12-13T02:14:50.750516937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:50.753242 env[1210]: time="2024-12-13T02:14:50.753194325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:50.756333 env[1210]: time="2024-12-13T02:14:50.756272530Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:50.757879 env[1210]: time="2024-12-13T02:14:50.757825501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 02:14:50.760622 env[1210]: time="2024-12-13T02:14:50.760587817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 02:14:52.339267 env[1210]: time="2024-12-13T02:14:52.339182122Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:52.342360 env[1210]: time="2024-12-13T02:14:52.342293867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:52.345304 env[1210]: time="2024-12-13T02:14:52.345266332Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:52.348057 env[1210]: time="2024-12-13T02:14:52.348018035Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:52.349551 env[1210]: time="2024-12-13T02:14:52.349487682Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 02:14:52.350335 env[1210]: time="2024-12-13T02:14:52.350302869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 02:14:53.678444 env[1210]: time="2024-12-13T02:14:53.677271526Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:53.738498 env[1210]: time="2024-12-13T02:14:53.738389046Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:53.850805 env[1210]: time="2024-12-13T02:14:53.850742631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:53.885226 env[1210]: time="2024-12-13T02:14:53.885163263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:53.886343 env[1210]: time="2024-12-13T02:14:53.886287528Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 02:14:53.887224 env[1210]: time="2024-12-13T02:14:53.887190765Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 02:14:55.022748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588749861.mount: Deactivated successfully.
Dec 13 02:14:55.782170 env[1210]: time="2024-12-13T02:14:55.782091394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:55.786970 env[1210]: time="2024-12-13T02:14:55.786902421Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:55.790842 env[1210]: time="2024-12-13T02:14:55.790780080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:55.791842 env[1210]: time="2024-12-13T02:14:55.791801566Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:55.792571 env[1210]: time="2024-12-13T02:14:55.792527622Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 02:14:55.793426 env[1210]: time="2024-12-13T02:14:55.793371682Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 02:14:56.223464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount290616179.mount: Deactivated successfully.
Dec 13 02:14:57.430972 env[1210]: time="2024-12-13T02:14:57.430892063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:57.433967 env[1210]: time="2024-12-13T02:14:57.433920721Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:57.436489 env[1210]: time="2024-12-13T02:14:57.436447147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:57.438895 env[1210]: time="2024-12-13T02:14:57.438852288Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:57.439948 env[1210]: time="2024-12-13T02:14:57.439894798Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 02:14:57.440652 env[1210]: time="2024-12-13T02:14:57.440618598Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 02:14:57.883984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount807702854.mount: Deactivated successfully.
Dec 13 02:14:57.892801 env[1210]: time="2024-12-13T02:14:57.892730984Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:57.895289 env[1210]: time="2024-12-13T02:14:57.895242533Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:57.897506 env[1210]: time="2024-12-13T02:14:57.897466810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:57.899898 env[1210]: time="2024-12-13T02:14:57.899853686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:14:57.900650 env[1210]: time="2024-12-13T02:14:57.900606699Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 02:14:57.901302 env[1210]: time="2024-12-13T02:14:57.901258859Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 02:14:58.298676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477237293.mount: Deactivated successfully.
Dec 13 02:14:58.949203 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 02:14:58.949573 systemd[1]: Stopped kubelet.service.
Dec 13 02:14:58.951725 systemd[1]: Starting kubelet.service...
Dec 13 02:14:59.294979 systemd[1]: Started kubelet.service.
Dec 13 02:14:59.392090 kubelet[1538]: E1213 02:14:59.392025 1538 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:14:59.394414 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:14:59.394645 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:15:01.146292 env[1210]: time="2024-12-13T02:15:01.146214289Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:01.149442 env[1210]: time="2024-12-13T02:15:01.149367399Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:01.153066 env[1210]: time="2024-12-13T02:15:01.153017447Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:01.156212 env[1210]: time="2024-12-13T02:15:01.156155777Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:01.157783 env[1210]: time="2024-12-13T02:15:01.157729419Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Dec 13 02:15:04.281737 systemd[1]: Stopped kubelet.service. Dec 13 02:15:04.285487 systemd[1]: Starting kubelet.service... 
Dec 13 02:15:04.332453 systemd[1]: Reloading. Dec 13 02:15:04.488119 /usr/lib/systemd/system-generators/torcx-generator[1587]: time="2024-12-13T02:15:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:15:04.488764 /usr/lib/systemd/system-generators/torcx-generator[1587]: time="2024-12-13T02:15:04Z" level=info msg="torcx already run" Dec 13 02:15:04.609196 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:15:04.609224 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:15:04.634011 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:15:04.785594 systemd[1]: Started kubelet.service. Dec 13 02:15:04.800027 systemd[1]: Stopping kubelet.service... Dec 13 02:15:04.801262 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:15:04.801579 systemd[1]: Stopped kubelet.service. Dec 13 02:15:04.804038 systemd[1]: Starting kubelet.service... Dec 13 02:15:05.084507 systemd[1]: Started kubelet.service. Dec 13 02:15:05.170682 kubelet[1642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:15:05.171270 kubelet[1642]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 02:15:05.171375 kubelet[1642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:15:05.171755 kubelet[1642]: I1213 02:15:05.171711 1642 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:15:05.553112 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 02:15:05.732778 kubelet[1642]: I1213 02:15:05.732720 1642 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:15:05.732778 kubelet[1642]: I1213 02:15:05.732757 1642 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:15:05.733153 kubelet[1642]: I1213 02:15:05.733116 1642 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:15:05.791567 kubelet[1642]: I1213 02:15:05.791525 1642 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:15:05.792112 kubelet[1642]: E1213 02:15:05.792067 1642 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.128.0.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.98:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:15:05.806603 kubelet[1642]: E1213 02:15:05.806416 1642 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:15:05.806603 kubelet[1642]: I1213 02:15:05.806464 1642 server.go:1403] "CRI implementation should be 
updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:15:05.812351 kubelet[1642]: I1213 02:15:05.812291 1642 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:15:05.814762 kubelet[1642]: I1213 02:15:05.814717 1642 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:15:05.815050 kubelet[1642]: I1213 02:15:05.814987 1642 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:15:05.815311 kubelet[1642]: I1213 02:15:05.815040 1642 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{}
,"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:15:05.815542 kubelet[1642]: I1213 02:15:05.815312 1642 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:15:05.815542 kubelet[1642]: I1213 02:15:05.815331 1642 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:15:05.815542 kubelet[1642]: I1213 02:15:05.815498 1642 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:15:05.825698 kubelet[1642]: I1213 02:15:05.825650 1642 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:15:05.825891 kubelet[1642]: I1213 02:15:05.825718 1642 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:15:05.825891 kubelet[1642]: I1213 02:15:05.825794 1642 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:15:05.825891 kubelet[1642]: I1213 02:15:05.825820 1642 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:15:05.860078 kubelet[1642]: I1213 02:15:05.860042 1642 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:15:05.868828 kubelet[1642]: I1213 02:15:05.868764 1642 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:15:05.872283 kubelet[1642]: W1213 02:15:05.872212 1642 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
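During the `Reloading` earlier in this restart, systemd flagged two legacy directives in `locksmithd.service` (`CPUShares=` → `CPUWeight=`, `MemoryLimit=` → `MemoryMax=`). Migrating `CPUShares=` means rescaling to the `CPUWeight=` range [1, 10000]; the sketch below uses a linear, default-preserving conversion (1024 shares ↔ weight 100) clamped to that range — an assumption about systemd's mapping, which may clamp edge values differently:

```python
def cpu_shares_to_weight(shares: int) -> int:
    """Convert legacy CPUShares= (default 1024) to CPUWeight= (default 100).

    Linear, default-preserving rescale, clamped to the documented
    CPUWeight= range [1, 10000]. Assumption: matches systemd's own
    conversion for typical values; edge clamping may differ.
    """
    return max(1, min(10000, shares * 100 // 1024))
```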
Dec 13 02:15:05.873017 kubelet[1642]: I1213 02:15:05.872947 1642 server.go:1269] "Started kubelet" Dec 13 02:15:05.873602 kubelet[1642]: W1213 02:15:05.873161 1642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.98:6443: connect: connection refused Dec 13 02:15:05.873602 kubelet[1642]: E1213 02:15:05.873272 1642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.98:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:15:05.873602 kubelet[1642]: W1213 02:15:05.873425 1642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.98:6443: connect: connection refused Dec 13 02:15:05.873602 kubelet[1642]: E1213 02:15:05.873483 1642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.98:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:15:05.886467 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
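The `container_manager_linux.go` nodeConfig dump a few lines above embeds the kubelet's effective hard-eviction thresholds as JSON. A small sketch of reading them back out — the JSON here is an excerpt trimmed from that dump (same field names and values), and the summary format is illustrative:

```python
import json

# Trimmed from the container_manager_linux.go nodeConfig dump in the log.
NODE_CONFIG = json.loads('''{
  "CgroupDriver": "systemd",
  "HardEvictionThresholds": [
    {"Signal": "memory.available", "Operator": "LessThan",
     "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available", "Operator": "LessThan",
     "Value": {"Quantity": null, "Percentage": 0.1}}
  ]
}''')

def eviction_summary(cfg: dict) -> list[str]:
    """Render each hard-eviction threshold as 'signal < quantity-or-%'."""
    out = []
    for t in cfg["HardEvictionThresholds"]:
        v = t["Value"]
        bound = v["Quantity"] if v["Quantity"] else f'{v["Percentage"]:.0%}'
        out.append(f'{t["Signal"]} < {bound}')
    return out
```

Each threshold uses either an absolute `Quantity` (e.g. `100Mi` for `memory.available`) or a `Percentage` of capacity, matching the five signals listed in the dump.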
Dec 13 02:15:05.886625 kubelet[1642]: I1213 02:15:05.877325 1642 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:15:05.887268 kubelet[1642]: I1213 02:15:05.887239 1642 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:15:05.888721 kubelet[1642]: I1213 02:15:05.888643 1642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:15:05.889041 kubelet[1642]: I1213 02:15:05.888982 1642 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:15:05.889447 kubelet[1642]: I1213 02:15:05.889426 1642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:15:05.891534 kubelet[1642]: I1213 02:15:05.891503 1642 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:15:05.895455 kubelet[1642]: E1213 02:15:05.893272 1642 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.98:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.98:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal.18109ad743270b54 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,UID:ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 02:15:05.87291938 +0000 UTC m=+0.779029364,LastTimestamp:2024-12-13 02:15:05.87291938 +0000 UTC m=+0.779029364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,}" Dec 13 02:15:05.897046 kubelet[1642]: I1213 02:15:05.897017 1642 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:15:05.897242 kubelet[1642]: E1213 02:15:05.897205 1642 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" not found" Dec 13 02:15:05.897546 kubelet[1642]: E1213 02:15:05.897509 1642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.98:6443: connect: connection refused" interval="200ms" Dec 13 02:15:05.898675 kubelet[1642]: I1213 02:15:05.898648 1642 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:15:05.899347 kubelet[1642]: I1213 02:15:05.898901 1642 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:15:05.899843 kubelet[1642]: I1213 02:15:05.898982 1642 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:15:05.900181 kubelet[1642]: W1213 02:15:05.900061 1642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.98:6443: connect: connection refused Dec 13 02:15:05.900385 kubelet[1642]: E1213 02:15:05.900361 1642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 
10.128.0.98:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:15:05.900681 kubelet[1642]: E1213 02:15:05.900612 1642 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:15:05.901182 kubelet[1642]: I1213 02:15:05.901161 1642 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:15:05.901324 kubelet[1642]: I1213 02:15:05.901307 1642 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:15:05.930601 kubelet[1642]: I1213 02:15:05.930525 1642 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:15:05.930601 kubelet[1642]: I1213 02:15:05.930550 1642 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:15:05.930601 kubelet[1642]: I1213 02:15:05.930575 1642 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:15:05.933809 kubelet[1642]: I1213 02:15:05.933768 1642 policy_none.go:49] "None policy: Start" Dec 13 02:15:05.935140 kubelet[1642]: I1213 02:15:05.935040 1642 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:15:05.935140 kubelet[1642]: I1213 02:15:05.935074 1642 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:15:05.943900 kubelet[1642]: I1213 02:15:05.943845 1642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:15:05.948070 kubelet[1642]: I1213 02:15:05.947542 1642 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:15:05.948070 kubelet[1642]: I1213 02:15:05.947573 1642 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:15:05.948070 kubelet[1642]: I1213 02:15:05.947599 1642 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:15:05.948070 kubelet[1642]: E1213 02:15:05.947660 1642 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:15:05.949738 systemd[1]: Created slice kubepods.slice. Dec 13 02:15:05.952558 kubelet[1642]: W1213 02:15:05.952508 1642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.98:6443: connect: connection refused Dec 13 02:15:05.952737 kubelet[1642]: E1213 02:15:05.952710 1642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.98:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:15:05.959251 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:15:05.964081 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 02:15:05.971544 kubelet[1642]: I1213 02:15:05.971513 1642 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:15:05.972248 kubelet[1642]: I1213 02:15:05.972209 1642 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:15:05.972361 kubelet[1642]: I1213 02:15:05.972233 1642 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:15:05.973661 kubelet[1642]: I1213 02:15:05.972821 1642 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:15:05.978527 kubelet[1642]: E1213 02:15:05.978497 1642 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" not found" Dec 13 02:15:06.071108 systemd[1]: Created slice kubepods-burstable-pod704237c51bef17452f7d4a4f38e15835.slice. Dec 13 02:15:06.080776 kubelet[1642]: I1213 02:15:06.080705 1642 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.081245 kubelet[1642]: E1213 02:15:06.081212 1642 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.98:6443/api/v1/nodes\": dial tcp 10.128.0.98:6443: connect: connection refused" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.088652 systemd[1]: Created slice kubepods-burstable-podcb03bee71e47c76bdbca2e89af1b704e.slice. 
Dec 13 02:15:06.098961 kubelet[1642]: E1213 02:15:06.098839 1642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.98:6443: connect: connection refused" interval="400ms" Dec 13 02:15:06.101179 kubelet[1642]: I1213 02:15:06.101140 1642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56fd8e16e4b9ab54b80a914ea277ec1d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"56fd8e16e4b9ab54b80a914ea277ec1d\") " pod="kube-system/kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.101461 systemd[1]: Created slice kubepods-burstable-pod56fd8e16e4b9ab54b80a914ea277ec1d.slice. Dec 13 02:15:06.102101 kubelet[1642]: I1213 02:15:06.101385 1642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.102268 kubelet[1642]: I1213 02:15:06.102244 1642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 
02:15:06.102467 kubelet[1642]: I1213 02:15:06.102443 1642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.102624 kubelet[1642]: I1213 02:15:06.102602 1642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb03bee71e47c76bdbca2e89af1b704e-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"cb03bee71e47c76bdbca2e89af1b704e\") " pod="kube-system/kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.102763 kubelet[1642]: I1213 02:15:06.102742 1642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56fd8e16e4b9ab54b80a914ea277ec1d-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"56fd8e16e4b9ab54b80a914ea277ec1d\") " pod="kube-system/kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.102891 kubelet[1642]: I1213 02:15:06.102870 1642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56fd8e16e4b9ab54b80a914ea277ec1d-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"56fd8e16e4b9ab54b80a914ea277ec1d\") " pod="kube-system/kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.103037 kubelet[1642]: I1213 02:15:06.103014 1642 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.103176 kubelet[1642]: I1213 02:15:06.103152 1642 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.287610 kubelet[1642]: I1213 02:15:06.287562 1642 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.288193 kubelet[1642]: E1213 02:15:06.287985 1642 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.98:6443/api/v1/nodes\": dial tcp 10.128.0.98:6443: connect: connection refused" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.384963 env[1210]: time="2024-12-13T02:15:06.384803132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,Uid:704237c51bef17452f7d4a4f38e15835,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:06.393763 env[1210]: time="2024-12-13T02:15:06.393702632Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,Uid:cb03bee71e47c76bdbca2e89af1b704e,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:06.406755 env[1210]: time="2024-12-13T02:15:06.406691492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,Uid:56fd8e16e4b9ab54b80a914ea277ec1d,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:06.500669 kubelet[1642]: E1213 02:15:06.500588 1642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.98:6443: connect: connection refused" interval="800ms" Dec 13 02:15:06.694581 kubelet[1642]: I1213 02:15:06.694172 1642 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.694771 kubelet[1642]: E1213 02:15:06.694717 1642 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.128.0.98:6443/api/v1/nodes\": dial tcp 10.128.0.98:6443: connect: connection refused" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:06.772292 kubelet[1642]: W1213 02:15:06.772178 1642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.98:6443: connect: connection refused Dec 13 02:15:06.772292 kubelet[1642]: E1213 02:15:06.772244 1642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.128.0.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.98:6443: connect: connection refused" 
logger="UnhandledError" Dec 13 02:15:06.776600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591736190.mount: Deactivated successfully. Dec 13 02:15:06.788491 env[1210]: time="2024-12-13T02:15:06.788385166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.790290 env[1210]: time="2024-12-13T02:15:06.790221644Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.794505 env[1210]: time="2024-12-13T02:15:06.794459202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.796098 env[1210]: time="2024-12-13T02:15:06.796042277Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.797150 env[1210]: time="2024-12-13T02:15:06.797111065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.801285 env[1210]: time="2024-12-13T02:15:06.801221941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.804149 env[1210]: time="2024-12-13T02:15:06.804091424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.806877 env[1210]: time="2024-12-13T02:15:06.806814937Z" 
level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.810938 env[1210]: time="2024-12-13T02:15:06.810890765Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.813764 env[1210]: time="2024-12-13T02:15:06.813704230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.815143 env[1210]: time="2024-12-13T02:15:06.815102987Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.817129 env[1210]: time="2024-12-13T02:15:06.816957539Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:06.849336 env[1210]: time="2024-12-13T02:15:06.849192732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:06.849336 env[1210]: time="2024-12-13T02:15:06.849252238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:06.849336 env[1210]: time="2024-12-13T02:15:06.849273121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:06.849998 env[1210]: time="2024-12-13T02:15:06.849908696Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3ecd6b2624b05b5132e14cc3ac77bdb155b5e0c52e12370bf28d5f5b76a343e pid=1682 runtime=io.containerd.runc.v2 Dec 13 02:15:06.886108 env[1210]: time="2024-12-13T02:15:06.886010388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:06.886446 env[1210]: time="2024-12-13T02:15:06.886367579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:06.886681 env[1210]: time="2024-12-13T02:15:06.886630028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:06.887115 env[1210]: time="2024-12-13T02:15:06.887052771Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8600555fca7fa8301228923ccd3e5ab8f534deac91c4a936a0f69c1d411f731 pid=1701 runtime=io.containerd.runc.v2 Dec 13 02:15:06.893180 env[1210]: time="2024-12-13T02:15:06.893095482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:06.893470 env[1210]: time="2024-12-13T02:15:06.893379531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:06.893679 env[1210]: time="2024-12-13T02:15:06.893628034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:06.894079 env[1210]: time="2024-12-13T02:15:06.894029276Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8cba8f2b33d0fa1741e23279426f1bd795ab375dc3512298cd90797180effcfd pid=1715 runtime=io.containerd.runc.v2 Dec 13 02:15:06.896754 systemd[1]: Started cri-containerd-f3ecd6b2624b05b5132e14cc3ac77bdb155b5e0c52e12370bf28d5f5b76a343e.scope. Dec 13 02:15:06.902907 kubelet[1642]: W1213 02:15:06.902760 1642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.98:6443: connect: connection refused Dec 13 02:15:06.902907 kubelet[1642]: E1213 02:15:06.902860 1642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.128.0.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.98:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:15:06.926353 systemd[1]: Started cri-containerd-b8600555fca7fa8301228923ccd3e5ab8f534deac91c4a936a0f69c1d411f731.scope. Dec 13 02:15:06.946497 systemd[1]: Started cri-containerd-8cba8f2b33d0fa1741e23279426f1bd795ab375dc3512298cd90797180effcfd.scope. 
Dec 13 02:15:06.983499 kubelet[1642]: W1213 02:15:06.983293 1642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.98:6443: connect: connection refused Dec 13 02:15:06.983499 kubelet[1642]: E1213 02:15:06.983440 1642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.128.0.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.98:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:15:07.030836 env[1210]: time="2024-12-13T02:15:07.030771611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,Uid:56fd8e16e4b9ab54b80a914ea277ec1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8600555fca7fa8301228923ccd3e5ab8f534deac91c4a936a0f69c1d411f731\"" Dec 13 02:15:07.033387 kubelet[1642]: E1213 02:15:07.033343 1642 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-21291" Dec 13 02:15:07.035412 env[1210]: time="2024-12-13T02:15:07.035354014Z" level=info msg="CreateContainer within sandbox \"b8600555fca7fa8301228923ccd3e5ab8f534deac91c4a936a0f69c1d411f731\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:15:07.061723 env[1210]: time="2024-12-13T02:15:07.061656842Z" level=info msg="CreateContainer within sandbox \"b8600555fca7fa8301228923ccd3e5ab8f534deac91c4a936a0f69c1d411f731\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"5c455d2c689878237bb70a6c558308c3a2ea363d099e3c63b21ff181b6cc72a0\"" Dec 13 02:15:07.062663 env[1210]: time="2024-12-13T02:15:07.062627139Z" level=info msg="StartContainer for \"5c455d2c689878237bb70a6c558308c3a2ea363d099e3c63b21ff181b6cc72a0\"" Dec 13 02:15:07.068219 env[1210]: time="2024-12-13T02:15:07.068173260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,Uid:cb03bee71e47c76bdbca2e89af1b704e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3ecd6b2624b05b5132e14cc3ac77bdb155b5e0c52e12370bf28d5f5b76a343e\"" Dec 13 02:15:07.070699 kubelet[1642]: E1213 02:15:07.070180 1642 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-21291" Dec 13 02:15:07.072046 env[1210]: time="2024-12-13T02:15:07.071997865Z" level=info msg="CreateContainer within sandbox \"f3ecd6b2624b05b5132e14cc3ac77bdb155b5e0c52e12370bf28d5f5b76a343e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:15:07.082576 env[1210]: time="2024-12-13T02:15:07.082481254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal,Uid:704237c51bef17452f7d4a4f38e15835,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cba8f2b33d0fa1741e23279426f1bd795ab375dc3512298cd90797180effcfd\"" Dec 13 02:15:07.085800 kubelet[1642]: E1213 02:15:07.085764 1642 kubelet_pods.go:538] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flat" Dec 13 02:15:07.087369 env[1210]: time="2024-12-13T02:15:07.087306429Z" level=info 
msg="CreateContainer within sandbox \"8cba8f2b33d0fa1741e23279426f1bd795ab375dc3512298cd90797180effcfd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:15:07.098551 env[1210]: time="2024-12-13T02:15:07.098485258Z" level=info msg="CreateContainer within sandbox \"f3ecd6b2624b05b5132e14cc3ac77bdb155b5e0c52e12370bf28d5f5b76a343e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb5b20a2533840fc8170271f91e69f49ce86519e68110d7217f4606dbdd07082\"" Dec 13 02:15:07.099350 env[1210]: time="2024-12-13T02:15:07.099285107Z" level=info msg="StartContainer for \"cb5b20a2533840fc8170271f91e69f49ce86519e68110d7217f4606dbdd07082\"" Dec 13 02:15:07.110129 systemd[1]: Started cri-containerd-5c455d2c689878237bb70a6c558308c3a2ea363d099e3c63b21ff181b6cc72a0.scope. Dec 13 02:15:07.141428 env[1210]: time="2024-12-13T02:15:07.141336601Z" level=info msg="CreateContainer within sandbox \"8cba8f2b33d0fa1741e23279426f1bd795ab375dc3512298cd90797180effcfd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7fcf0f17654124752f4479ffdf7e0c51cce381580d74712e800e5d284e59b229\"" Dec 13 02:15:07.142431 env[1210]: time="2024-12-13T02:15:07.142362129Z" level=info msg="StartContainer for \"7fcf0f17654124752f4479ffdf7e0c51cce381580d74712e800e5d284e59b229\"" Dec 13 02:15:07.165629 systemd[1]: Started cri-containerd-cb5b20a2533840fc8170271f91e69f49ce86519e68110d7217f4606dbdd07082.scope. 
Dec 13 02:15:07.196431 kubelet[1642]: W1213 02:15:07.192674 1642 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.128.0.98:6443: connect: connection refused Dec 13 02:15:07.196431 kubelet[1642]: E1213 02:15:07.192781 1642 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.128.0.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.98:6443: connect: connection refused" logger="UnhandledError" Dec 13 02:15:07.198533 systemd[1]: Started cri-containerd-7fcf0f17654124752f4479ffdf7e0c51cce381580d74712e800e5d284e59b229.scope. Dec 13 02:15:07.243726 env[1210]: time="2024-12-13T02:15:07.243637128Z" level=info msg="StartContainer for \"5c455d2c689878237bb70a6c558308c3a2ea363d099e3c63b21ff181b6cc72a0\" returns successfully" Dec 13 02:15:07.301342 kubelet[1642]: E1213 02:15:07.301253 1642 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.98:6443: connect: connection refused" interval="1.6s" Dec 13 02:15:07.322471 env[1210]: time="2024-12-13T02:15:07.322385593Z" level=info msg="StartContainer for \"cb5b20a2533840fc8170271f91e69f49ce86519e68110d7217f4606dbdd07082\" returns successfully" Dec 13 02:15:07.331319 env[1210]: time="2024-12-13T02:15:07.331258102Z" level=info msg="StartContainer for \"7fcf0f17654124752f4479ffdf7e0c51cce381580d74712e800e5d284e59b229\" returns successfully" Dec 13 02:15:07.501133 kubelet[1642]: I1213 02:15:07.500996 1642 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:11.180235 kubelet[1642]: E1213 02:15:11.180165 1642 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" not found" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:11.252875 kubelet[1642]: I1213 02:15:11.252820 1642 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:11.252875 kubelet[1642]: E1213 02:15:11.252872 1642 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\": node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" not found" Dec 13 02:15:11.850843 kubelet[1642]: I1213 02:15:11.850779 1642 apiserver.go:52] "Watching apiserver" Dec 13 02:15:11.900934 kubelet[1642]: I1213 02:15:11.900864 1642 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:15:13.579403 systemd[1]: Reloading. Dec 13 02:15:13.704204 /usr/lib/systemd/system-generators/torcx-generator[1939]: time="2024-12-13T02:15:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:15:13.704795 /usr/lib/systemd/system-generators/torcx-generator[1939]: time="2024-12-13T02:15:13Z" level=info msg="torcx already run" Dec 13 02:15:13.813068 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:15:13.813097 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. 
Support for MemoryLimit= will be removed soon. Dec 13 02:15:13.844884 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:15:14.069043 systemd[1]: Stopping kubelet.service... Dec 13 02:15:14.087611 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:15:14.087850 systemd[1]: Stopped kubelet.service. Dec 13 02:15:14.087915 systemd[1]: kubelet.service: Consumed 1.224s CPU time. Dec 13 02:15:14.090848 systemd[1]: Starting kubelet.service... Dec 13 02:15:14.337759 systemd[1]: Started kubelet.service. Dec 13 02:15:14.417056 kubelet[1987]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:15:14.417056 kubelet[1987]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:15:14.417056 kubelet[1987]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 02:15:14.417056 kubelet[1987]: I1213 02:15:14.416885 1987 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:15:14.432703 kubelet[1987]: I1213 02:15:14.432664 1987 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 02:15:14.432965 kubelet[1987]: I1213 02:15:14.432947 1987 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:15:14.433714 kubelet[1987]: I1213 02:15:14.433689 1987 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 02:15:14.437942 kubelet[1987]: I1213 02:15:14.437217 1987 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:15:14.441768 kubelet[1987]: I1213 02:15:14.441037 1987 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:15:14.447301 kubelet[1987]: E1213 02:15:14.447259 1987 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 02:15:14.447301 kubelet[1987]: I1213 02:15:14.447299 1987 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 02:15:14.451107 kubelet[1987]: I1213 02:15:14.451074 1987 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:15:14.451285 kubelet[1987]: I1213 02:15:14.451252 1987 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 02:15:14.451542 kubelet[1987]: I1213 02:15:14.451481 1987 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:15:14.451854 kubelet[1987]: I1213 02:15:14.451525 1987 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"Topo
logyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 02:15:14.451854 kubelet[1987]: I1213 02:15:14.451844 1987 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:15:14.452159 kubelet[1987]: I1213 02:15:14.451863 1987 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 02:15:14.452159 kubelet[1987]: I1213 02:15:14.451918 1987 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:15:14.452272 kubelet[1987]: I1213 02:15:14.452178 1987 kubelet.go:408] "Attempting to sync node with API server" Dec 13 02:15:14.453036 kubelet[1987]: I1213 02:15:14.452733 1987 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:15:14.453036 kubelet[1987]: I1213 02:15:14.452807 1987 kubelet.go:314] "Adding apiserver pod source" Dec 13 02:15:14.454450 kubelet[1987]: I1213 02:15:14.454425 1987 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:15:14.460473 kubelet[1987]: I1213 02:15:14.456141 1987 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:15:14.461315 kubelet[1987]: I1213 02:15:14.461292 1987 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:15:14.462073 kubelet[1987]: I1213 02:15:14.462055 1987 server.go:1269] "Started kubelet" Dec 13 02:15:14.467660 kubelet[1987]: I1213 02:15:14.467638 1987 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:15:14.502472 kubelet[1987]: I1213 02:15:14.471541 1987 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:15:14.508571 kubelet[1987]: I1213 02:15:14.471598 1987 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:15:14.508807 kubelet[1987]: I1213 02:15:14.508780 1987 server.go:236] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:15:14.508907 kubelet[1987]: I1213 02:15:14.502883 1987 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 02:15:14.509305 kubelet[1987]: I1213 02:15:14.509276 1987 server.go:460] "Adding debug handlers to kubelet server" Dec 13 02:15:14.510501 kubelet[1987]: I1213 02:15:14.472049 1987 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 02:15:14.510614 kubelet[1987]: I1213 02:15:14.506995 1987 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 02:15:14.510816 kubelet[1987]: I1213 02:15:14.510775 1987 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:15:14.512188 kubelet[1987]: I1213 02:15:14.512161 1987 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:15:14.512856 kubelet[1987]: I1213 02:15:14.512832 1987 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:15:14.513004 kubelet[1987]: I1213 02:15:14.512989 1987 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:15:14.513123 kubelet[1987]: I1213 02:15:14.513108 1987 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 02:15:14.513283 kubelet[1987]: E1213 02:15:14.513256 1987 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:15:14.516055 kubelet[1987]: I1213 02:15:14.516024 1987 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:15:14.516172 kubelet[1987]: I1213 02:15:14.516145 1987 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:15:14.518524 kubelet[1987]: I1213 02:15:14.518499 1987 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:15:14.527112 kubelet[1987]: E1213 02:15:14.527084 1987 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:15:14.592908 sudo[2017]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:15:14.597235 sudo[2017]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:15:14.601811 kubelet[1987]: I1213 02:15:14.601465 1987 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:15:14.601811 kubelet[1987]: I1213 02:15:14.601492 1987 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:15:14.601811 kubelet[1987]: I1213 02:15:14.601519 1987 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:15:14.602065 kubelet[1987]: I1213 02:15:14.601828 1987 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:15:14.602065 kubelet[1987]: I1213 02:15:14.601848 1987 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:15:14.602065 kubelet[1987]: I1213 02:15:14.601886 1987 policy_none.go:49] "None policy: Start" Dec 13 02:15:14.612062 kubelet[1987]: I1213 02:15:14.612032 1987 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:15:14.612195 kubelet[1987]: I1213 02:15:14.612072 1987 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:15:14.612408 kubelet[1987]: I1213 02:15:14.612369 1987 state_mem.go:75] "Updated machine memory state" Dec 13 02:15:14.613550 kubelet[1987]: E1213 02:15:14.613428 1987 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:15:14.621230 kubelet[1987]: I1213 02:15:14.621183 1987 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:15:14.621432 kubelet[1987]: I1213 02:15:14.621377 1987 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 02:15:14.621522 kubelet[1987]: I1213 02:15:14.621415 1987 
container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:15:14.623379 kubelet[1987]: I1213 02:15:14.622254 1987 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:15:14.740446 kubelet[1987]: I1213 02:15:14.740386 1987 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.754867 kubelet[1987]: I1213 02:15:14.754815 1987 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.755222 kubelet[1987]: I1213 02:15:14.755205 1987 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.833223 kubelet[1987]: W1213 02:15:14.833182 1987 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:15:14.834698 kubelet[1987]: W1213 02:15:14.834669 1987 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:15:14.835283 kubelet[1987]: W1213 02:15:14.835257 1987 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:15:14.916438 kubelet[1987]: I1213 02:15:14.916287 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cb03bee71e47c76bdbca2e89af1b704e-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"cb03bee71e47c76bdbca2e89af1b704e\") " 
pod="kube-system/kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.916807 kubelet[1987]: I1213 02:15:14.916771 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56fd8e16e4b9ab54b80a914ea277ec1d-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"56fd8e16e4b9ab54b80a914ea277ec1d\") " pod="kube-system/kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.917001 kubelet[1987]: I1213 02:15:14.916981 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.917190 kubelet[1987]: I1213 02:15:14.917168 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.917417 kubelet[1987]: I1213 02:15:14.917345 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.917528 kubelet[1987]: I1213 02:15:14.917458 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/56fd8e16e4b9ab54b80a914ea277ec1d-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"56fd8e16e4b9ab54b80a914ea277ec1d\") " pod="kube-system/kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.917528 kubelet[1987]: I1213 02:15:14.917495 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/56fd8e16e4b9ab54b80a914ea277ec1d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"56fd8e16e4b9ab54b80a914ea277ec1d\") " pod="kube-system/kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.917645 kubelet[1987]: I1213 02:15:14.917526 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:14.917645 kubelet[1987]: I1213 02:15:14.917559 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/704237c51bef17452f7d4a4f38e15835-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" (UID: \"704237c51bef17452f7d4a4f38e15835\") " 
pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:15.394105 sudo[2017]: pam_unix(sudo:session): session closed for user root Dec 13 02:15:15.456089 kubelet[1987]: I1213 02:15:15.456016 1987 apiserver.go:52] "Watching apiserver" Dec 13 02:15:15.510163 kubelet[1987]: I1213 02:15:15.510098 1987 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 02:15:15.580071 kubelet[1987]: W1213 02:15:15.580028 1987 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:15:15.580289 kubelet[1987]: E1213 02:15:15.580135 1987 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" Dec 13 02:15:15.617081 kubelet[1987]: I1213 02:15:15.616998 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" podStartSLOduration=1.6169539739999998 podStartE2EDuration="1.616953974s" podCreationTimestamp="2024-12-13 02:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:15:15.597688258 +0000 UTC m=+1.252764480" watchObservedRunningTime="2024-12-13 02:15:15.616953974 +0000 UTC m=+1.272030197" Dec 13 02:15:15.617365 kubelet[1987]: I1213 02:15:15.617203 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" podStartSLOduration=1.617192904 podStartE2EDuration="1.617192904s" podCreationTimestamp="2024-12-13 02:15:14 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:15:15.608251585 +0000 UTC m=+1.263327804" watchObservedRunningTime="2024-12-13 02:15:15.617192904 +0000 UTC m=+1.272269126" Dec 13 02:15:15.621337 kubelet[1987]: I1213 02:15:15.621273 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" podStartSLOduration=1.621247253 podStartE2EDuration="1.621247253s" podCreationTimestamp="2024-12-13 02:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:15:15.61944943 +0000 UTC m=+1.274525651" watchObservedRunningTime="2024-12-13 02:15:15.621247253 +0000 UTC m=+1.276323460" Dec 13 02:15:17.656731 sudo[1391]: pam_unix(sudo:session): session closed for user root Dec 13 02:15:17.700233 sshd[1388]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:17.704676 systemd[1]: sshd@4-10.128.0.98:22-139.178.68.195:37628.service: Deactivated successfully. Dec 13 02:15:17.705732 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:15:17.705932 systemd[1]: session-5.scope: Consumed 6.100s CPU time. Dec 13 02:15:17.706788 systemd-logind[1214]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:15:17.708099 systemd-logind[1214]: Removed session 5. Dec 13 02:15:18.503299 kubelet[1987]: I1213 02:15:18.503240 1987 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:15:18.504316 env[1210]: time="2024-12-13T02:15:18.504253219Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 02:15:18.504790 kubelet[1987]: I1213 02:15:18.504532 1987 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:15:19.428764 systemd[1]: Created slice kubepods-besteffort-poddaae424c_bd3e_4449_8b28_cd188eff3022.slice. Dec 13 02:15:19.446926 kubelet[1987]: I1213 02:15:19.446861 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/daae424c-bd3e-4449-8b28-cd188eff3022-lib-modules\") pod \"kube-proxy-ndzwt\" (UID: \"daae424c-bd3e-4449-8b28-cd188eff3022\") " pod="kube-system/kube-proxy-ndzwt" Dec 13 02:15:19.447132 kubelet[1987]: I1213 02:15:19.446936 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/daae424c-bd3e-4449-8b28-cd188eff3022-xtables-lock\") pod \"kube-proxy-ndzwt\" (UID: \"daae424c-bd3e-4449-8b28-cd188eff3022\") " pod="kube-system/kube-proxy-ndzwt" Dec 13 02:15:19.447132 kubelet[1987]: I1213 02:15:19.446966 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/daae424c-bd3e-4449-8b28-cd188eff3022-kube-proxy\") pod \"kube-proxy-ndzwt\" (UID: \"daae424c-bd3e-4449-8b28-cd188eff3022\") " pod="kube-system/kube-proxy-ndzwt" Dec 13 02:15:19.447132 kubelet[1987]: I1213 02:15:19.447010 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4nj2\" (UniqueName: \"kubernetes.io/projected/daae424c-bd3e-4449-8b28-cd188eff3022-kube-api-access-x4nj2\") pod \"kube-proxy-ndzwt\" (UID: \"daae424c-bd3e-4449-8b28-cd188eff3022\") " pod="kube-system/kube-proxy-ndzwt" Dec 13 02:15:19.456448 systemd[1]: Created slice kubepods-burstable-poddb03aa3d_9bc2_4735_8067_99540f039d93.slice. 
Dec 13 02:15:19.547846 kubelet[1987]: I1213 02:15:19.547781 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cni-path\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548510 kubelet[1987]: I1213 02:15:19.547859 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-bpf-maps\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548510 kubelet[1987]: I1213 02:15:19.547890 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-etc-cni-netd\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548510 kubelet[1987]: I1213 02:15:19.547915 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-host-proc-sys-net\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548510 kubelet[1987]: I1213 02:15:19.547957 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-hostproc\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548510 kubelet[1987]: I1213 02:15:19.547985 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-cgroup\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548510 kubelet[1987]: I1213 02:15:19.548013 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-config-path\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548831 kubelet[1987]: I1213 02:15:19.548039 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db03aa3d-9bc2-4735-8067-99540f039d93-hubble-tls\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548831 kubelet[1987]: I1213 02:15:19.548066 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-run\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548831 kubelet[1987]: I1213 02:15:19.548139 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-lib-modules\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548831 kubelet[1987]: I1213 02:15:19.548213 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-xtables-lock\") pod \"cilium-z28rd\" 
(UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548831 kubelet[1987]: I1213 02:15:19.548247 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mhp\" (UniqueName: \"kubernetes.io/projected/db03aa3d-9bc2-4735-8067-99540f039d93-kube-api-access-t5mhp\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.548831 kubelet[1987]: I1213 02:15:19.548279 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db03aa3d-9bc2-4735-8067-99540f039d93-clustermesh-secrets\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.549144 kubelet[1987]: I1213 02:15:19.548325 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-host-proc-sys-kernel\") pod \"cilium-z28rd\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") " pod="kube-system/cilium-z28rd" Dec 13 02:15:19.576139 kubelet[1987]: I1213 02:15:19.576093 1987 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 02:15:19.613079 systemd[1]: Created slice kubepods-besteffort-pod942189c6_fc48_4d5d_981a_5b8d2fcca44a.slice. 
Dec 13 02:15:19.649021 kubelet[1987]: I1213 02:15:19.648962 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w98l\" (UniqueName: \"kubernetes.io/projected/942189c6-fc48-4d5d-981a-5b8d2fcca44a-kube-api-access-2w98l\") pod \"cilium-operator-5d85765b45-b4hlm\" (UID: \"942189c6-fc48-4d5d-981a-5b8d2fcca44a\") " pod="kube-system/cilium-operator-5d85765b45-b4hlm" Dec 13 02:15:19.649385 kubelet[1987]: I1213 02:15:19.649353 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/942189c6-fc48-4d5d-981a-5b8d2fcca44a-cilium-config-path\") pod \"cilium-operator-5d85765b45-b4hlm\" (UID: \"942189c6-fc48-4d5d-981a-5b8d2fcca44a\") " pod="kube-system/cilium-operator-5d85765b45-b4hlm" Dec 13 02:15:19.737563 env[1210]: time="2024-12-13T02:15:19.737507116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndzwt,Uid:daae424c-bd3e-4449-8b28-cd188eff3022,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:19.758248 update_engine[1201]: I1213 02:15:19.757462 1201 update_attempter.cc:509] Updating boot flags... Dec 13 02:15:19.794343 env[1210]: time="2024-12-13T02:15:19.794286104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z28rd,Uid:db03aa3d-9bc2-4735-8067-99540f039d93,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:19.811106 env[1210]: time="2024-12-13T02:15:19.809503311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:19.811106 env[1210]: time="2024-12-13T02:15:19.809562699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:19.811106 env[1210]: time="2024-12-13T02:15:19.809592151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:19.811106 env[1210]: time="2024-12-13T02:15:19.809825792Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47aa3af1a558ae2c2a89daf533769ebbd274fc757920074c86bbe2d8e622b181 pid=2072 runtime=io.containerd.runc.v2 Dec 13 02:15:19.841773 systemd[1]: Started cri-containerd-47aa3af1a558ae2c2a89daf533769ebbd274fc757920074c86bbe2d8e622b181.scope. Dec 13 02:15:19.856429 env[1210]: time="2024-12-13T02:15:19.855101436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:19.856429 env[1210]: time="2024-12-13T02:15:19.855264445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:19.856429 env[1210]: time="2024-12-13T02:15:19.855334417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:19.856429 env[1210]: time="2024-12-13T02:15:19.855797074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71 pid=2096 runtime=io.containerd.runc.v2 Dec 13 02:15:19.919755 env[1210]: time="2024-12-13T02:15:19.919659965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ndzwt,Uid:daae424c-bd3e-4449-8b28-cd188eff3022,Namespace:kube-system,Attempt:0,} returns sandbox id \"47aa3af1a558ae2c2a89daf533769ebbd274fc757920074c86bbe2d8e622b181\"" Dec 13 02:15:19.926439 systemd[1]: Started cri-containerd-2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71.scope. 
Dec 13 02:15:19.941542 env[1210]: time="2024-12-13T02:15:19.941480147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-b4hlm,Uid:942189c6-fc48-4d5d-981a-5b8d2fcca44a,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:19.945463 env[1210]: time="2024-12-13T02:15:19.944363974Z" level=info msg="CreateContainer within sandbox \"47aa3af1a558ae2c2a89daf533769ebbd274fc757920074c86bbe2d8e622b181\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:15:20.070491 env[1210]: time="2024-12-13T02:15:20.068721561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z28rd,Uid:db03aa3d-9bc2-4735-8067-99540f039d93,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\"" Dec 13 02:15:20.075690 env[1210]: time="2024-12-13T02:15:20.075640754Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:15:20.096710 env[1210]: time="2024-12-13T02:15:20.096643533Z" level=info msg="CreateContainer within sandbox \"47aa3af1a558ae2c2a89daf533769ebbd274fc757920074c86bbe2d8e622b181\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"36ca261676f3de0bb05883d79f913ed081df718668f934fb08d13898e20995c9\"" Dec 13 02:15:20.100436 env[1210]: time="2024-12-13T02:15:20.100244620Z" level=info msg="StartContainer for \"36ca261676f3de0bb05883d79f913ed081df718668f934fb08d13898e20995c9\"" Dec 13 02:15:20.141078 env[1210]: time="2024-12-13T02:15:20.140812802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:20.141078 env[1210]: time="2024-12-13T02:15:20.140932827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:20.141078 env[1210]: time="2024-12-13T02:15:20.140973111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:20.141408 env[1210]: time="2024-12-13T02:15:20.141299964Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903 pid=2165 runtime=io.containerd.runc.v2 Dec 13 02:15:20.166543 systemd[1]: Started cri-containerd-36ca261676f3de0bb05883d79f913ed081df718668f934fb08d13898e20995c9.scope. Dec 13 02:15:20.224876 systemd[1]: Started cri-containerd-c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903.scope. Dec 13 02:15:20.273420 env[1210]: time="2024-12-13T02:15:20.271562446Z" level=info msg="StartContainer for \"36ca261676f3de0bb05883d79f913ed081df718668f934fb08d13898e20995c9\" returns successfully" Dec 13 02:15:20.440517 env[1210]: time="2024-12-13T02:15:20.440336079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-b4hlm,Uid:942189c6-fc48-4d5d-981a-5b8d2fcca44a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\"" Dec 13 02:15:24.267065 kubelet[1987]: I1213 02:15:24.266988 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ndzwt" podStartSLOduration=5.266960161 podStartE2EDuration="5.266960161s" podCreationTimestamp="2024-12-13 02:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:15:20.613579209 +0000 UTC m=+6.268655431" watchObservedRunningTime="2024-12-13 02:15:24.266960161 +0000 UTC m=+9.922036383" Dec 13 02:15:31.326735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4156504674.mount: Deactivated 
successfully. Dec 13 02:15:34.820769 env[1210]: time="2024-12-13T02:15:34.820694604Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:34.823623 env[1210]: time="2024-12-13T02:15:34.823571937Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:34.826130 env[1210]: time="2024-12-13T02:15:34.826085496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:34.826989 env[1210]: time="2024-12-13T02:15:34.826933535Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:15:34.829735 env[1210]: time="2024-12-13T02:15:34.829690644Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:15:34.832195 env[1210]: time="2024-12-13T02:15:34.832142970Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:15:34.859318 env[1210]: time="2024-12-13T02:15:34.859256807Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\"" 
Dec 13 02:15:34.861682 env[1210]: time="2024-12-13T02:15:34.860589754Z" level=info msg="StartContainer for \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\"" Dec 13 02:15:34.892706 systemd[1]: Started cri-containerd-90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00.scope. Dec 13 02:15:34.910245 systemd[1]: run-containerd-runc-k8s.io-90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00-runc.n85mtM.mount: Deactivated successfully. Dec 13 02:15:34.959295 env[1210]: time="2024-12-13T02:15:34.959226438Z" level=info msg="StartContainer for \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\" returns successfully" Dec 13 02:15:34.968690 systemd[1]: cri-containerd-90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00.scope: Deactivated successfully. Dec 13 02:15:35.706588 systemd[1]: Started sshd@5-10.128.0.98:22-159.89.99.112:51546.service. Dec 13 02:15:35.851376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00-rootfs.mount: Deactivated successfully. 
Dec 13 02:15:36.775372 sshd[2419]: Failed password for root from 159.89.99.112 port 51546 ssh2 Dec 13 02:15:36.794828 env[1210]: time="2024-12-13T02:15:36.794752497Z" level=info msg="shim disconnected" id=90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00 Dec 13 02:15:36.794828 env[1210]: time="2024-12-13T02:15:36.794830373Z" level=warning msg="cleaning up after shim disconnected" id=90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00 namespace=k8s.io Dec 13 02:15:36.795613 env[1210]: time="2024-12-13T02:15:36.794844590Z" level=info msg="cleaning up dead shim" Dec 13 02:15:36.807508 env[1210]: time="2024-12-13T02:15:36.807448833Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2421 runtime=io.containerd.runc.v2\n" Dec 13 02:15:37.028960 sshd[2419]: PAM: Permission denied for root from 159.89.99.112 Dec 13 02:15:37.303077 sshd[2419]: Connection closed by authenticating user root 159.89.99.112 port 51546 [preauth] Dec 13 02:15:37.305374 systemd[1]: sshd@5-10.128.0.98:22-159.89.99.112:51546.service: Deactivated successfully. Dec 13 02:15:37.628543 env[1210]: time="2024-12-13T02:15:37.625170746Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:15:37.656016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498399120.mount: Deactivated successfully. 
Dec 13 02:15:37.662248 env[1210]: time="2024-12-13T02:15:37.662169804Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\"" Dec 13 02:15:37.663701 env[1210]: time="2024-12-13T02:15:37.663627490Z" level=info msg="StartContainer for \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\"" Dec 13 02:15:37.706616 systemd[1]: run-containerd-runc-k8s.io-6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c-runc.biWXOz.mount: Deactivated successfully. Dec 13 02:15:37.718091 systemd[1]: Started cri-containerd-6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c.scope. Dec 13 02:15:37.759048 env[1210]: time="2024-12-13T02:15:37.758983900Z" level=info msg="StartContainer for \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\" returns successfully" Dec 13 02:15:37.778274 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:15:37.779731 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:15:37.779962 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:15:37.784904 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:15:37.786168 systemd[1]: cri-containerd-6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c.scope: Deactivated successfully. Dec 13 02:15:37.801592 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 02:15:37.823116 env[1210]: time="2024-12-13T02:15:37.823047351Z" level=info msg="shim disconnected" id=6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c Dec 13 02:15:37.823116 env[1210]: time="2024-12-13T02:15:37.823100832Z" level=warning msg="cleaning up after shim disconnected" id=6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c namespace=k8s.io Dec 13 02:15:37.823116 env[1210]: time="2024-12-13T02:15:37.823115351Z" level=info msg="cleaning up dead shim" Dec 13 02:15:37.835517 env[1210]: time="2024-12-13T02:15:37.835446568Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2488 runtime=io.containerd.runc.v2\n" Dec 13 02:15:38.626022 env[1210]: time="2024-12-13T02:15:38.625964856Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:15:38.647959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c-rootfs.mount: Deactivated successfully. Dec 13 02:15:38.655927 env[1210]: time="2024-12-13T02:15:38.655862299Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\"" Dec 13 02:15:38.657974 env[1210]: time="2024-12-13T02:15:38.656703679Z" level=info msg="StartContainer for \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\"" Dec 13 02:15:38.695619 systemd[1]: Started cri-containerd-19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1.scope. Dec 13 02:15:38.700714 systemd[1]: run-containerd-runc-k8s.io-19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1-runc.IbO686.mount: Deactivated successfully. 
Dec 13 02:15:38.753175 systemd[1]: cri-containerd-19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1.scope: Deactivated successfully. Dec 13 02:15:38.756739 env[1210]: time="2024-12-13T02:15:38.756691225Z" level=info msg="StartContainer for \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\" returns successfully" Dec 13 02:15:38.794518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1-rootfs.mount: Deactivated successfully. Dec 13 02:15:38.806563 env[1210]: time="2024-12-13T02:15:38.806503017Z" level=info msg="shim disconnected" id=19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1 Dec 13 02:15:38.806930 env[1210]: time="2024-12-13T02:15:38.806903876Z" level=warning msg="cleaning up after shim disconnected" id=19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1 namespace=k8s.io Dec 13 02:15:38.807071 env[1210]: time="2024-12-13T02:15:38.807048523Z" level=info msg="cleaning up dead shim" Dec 13 02:15:38.831878 env[1210]: time="2024-12-13T02:15:38.831822478Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2546 runtime=io.containerd.runc.v2\n" Dec 13 02:15:39.618293 env[1210]: time="2024-12-13T02:15:39.618224925Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:39.621041 env[1210]: time="2024-12-13T02:15:39.620993697Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:39.623859 env[1210]: time="2024-12-13T02:15:39.623822253Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:15:39.624423 env[1210]: time="2024-12-13T02:15:39.624363616Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:15:39.636428 env[1210]: time="2024-12-13T02:15:39.631510659Z" level=info msg="CreateContainer within sandbox \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:15:39.636428 env[1210]: time="2024-12-13T02:15:39.635140149Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:15:39.656834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1705965567.mount: Deactivated successfully. Dec 13 02:15:39.667287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2222709598.mount: Deactivated successfully. 
Dec 13 02:15:39.684332 env[1210]: time="2024-12-13T02:15:39.684230572Z" level=info msg="CreateContainer within sandbox \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\"" Dec 13 02:15:39.686764 env[1210]: time="2024-12-13T02:15:39.685440914Z" level=info msg="StartContainer for \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\"" Dec 13 02:15:39.707656 env[1210]: time="2024-12-13T02:15:39.707589638Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\"" Dec 13 02:15:39.710476 env[1210]: time="2024-12-13T02:15:39.710139893Z" level=info msg="StartContainer for \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\"" Dec 13 02:15:39.730695 systemd[1]: Started cri-containerd-4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe.scope. Dec 13 02:15:39.755730 systemd[1]: Started cri-containerd-49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb.scope. Dec 13 02:15:39.815375 systemd[1]: cri-containerd-49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb.scope: Deactivated successfully. 
Dec 13 02:15:39.823724 env[1210]: time="2024-12-13T02:15:39.823619505Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb03aa3d_9bc2_4735_8067_99540f039d93.slice/cri-containerd-49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb.scope/memory.events\": no such file or directory" Dec 13 02:15:39.824608 env[1210]: time="2024-12-13T02:15:39.824563614Z" level=info msg="StartContainer for \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\" returns successfully" Dec 13 02:15:39.832309 env[1210]: time="2024-12-13T02:15:39.832251917Z" level=info msg="StartContainer for \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\" returns successfully" Dec 13 02:15:40.029035 env[1210]: time="2024-12-13T02:15:40.028964152Z" level=info msg="shim disconnected" id=49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb Dec 13 02:15:40.029035 env[1210]: time="2024-12-13T02:15:40.029033741Z" level=warning msg="cleaning up after shim disconnected" id=49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb namespace=k8s.io Dec 13 02:15:40.029436 env[1210]: time="2024-12-13T02:15:40.029048236Z" level=info msg="cleaning up dead shim" Dec 13 02:15:40.044146 env[1210]: time="2024-12-13T02:15:40.044079269Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2637 runtime=io.containerd.runc.v2\n" Dec 13 02:15:40.659435 env[1210]: time="2024-12-13T02:15:40.646785108Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:15:40.655882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2993569510.mount: Deactivated successfully. 
Dec 13 02:15:40.678630 env[1210]: time="2024-12-13T02:15:40.678448282Z" level=info msg="CreateContainer within sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\"" Dec 13 02:15:40.680434 env[1210]: time="2024-12-13T02:15:40.679025085Z" level=info msg="StartContainer for \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\"" Dec 13 02:15:40.730276 systemd[1]: Started cri-containerd-797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3.scope. Dec 13 02:15:40.849044 env[1210]: time="2024-12-13T02:15:40.848980997Z" level=info msg="StartContainer for \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\" returns successfully" Dec 13 02:15:40.997085 kubelet[1987]: I1213 02:15:40.997007 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-b4hlm" podStartSLOduration=2.813376685 podStartE2EDuration="21.996976936s" podCreationTimestamp="2024-12-13 02:15:19 +0000 UTC" firstStartedPulling="2024-12-13 02:15:20.442353172 +0000 UTC m=+6.097429385" lastFinishedPulling="2024-12-13 02:15:39.625953441 +0000 UTC m=+25.281029636" observedRunningTime="2024-12-13 02:15:40.766292842 +0000 UTC m=+26.421369065" watchObservedRunningTime="2024-12-13 02:15:40.996976936 +0000 UTC m=+26.652053159" Dec 13 02:15:41.073606 kubelet[1987]: I1213 02:15:41.073566 1987 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 02:15:41.196752 systemd[1]: Created slice kubepods-burstable-podd24b8d4b_7838_452a_932a_45aecf377ed6.slice. Dec 13 02:15:41.212155 systemd[1]: Created slice kubepods-burstable-pod510ef387_62e8_4e73_9543_abf7e58c8ae6.slice. 
Dec 13 02:15:41.214952 kubelet[1987]: I1213 02:15:41.214909 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flgrd\" (UniqueName: \"kubernetes.io/projected/d24b8d4b-7838-452a-932a-45aecf377ed6-kube-api-access-flgrd\") pod \"coredns-6f6b679f8f-fgxbk\" (UID: \"d24b8d4b-7838-452a-932a-45aecf377ed6\") " pod="kube-system/coredns-6f6b679f8f-fgxbk" Dec 13 02:15:41.215126 kubelet[1987]: I1213 02:15:41.215021 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d24b8d4b-7838-452a-932a-45aecf377ed6-config-volume\") pod \"coredns-6f6b679f8f-fgxbk\" (UID: \"d24b8d4b-7838-452a-932a-45aecf377ed6\") " pod="kube-system/coredns-6f6b679f8f-fgxbk" Dec 13 02:15:41.217657 kubelet[1987]: W1213 02:15:41.217623 1987 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal' and this object Dec 13 02:15:41.217823 kubelet[1987]: E1213 02:15:41.217679 1987 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal' and this object" logger="UnhandledError" Dec 13 02:15:41.316215 kubelet[1987]: I1213 02:15:41.316080 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/510ef387-62e8-4e73-9543-abf7e58c8ae6-config-volume\") pod \"coredns-6f6b679f8f-fwsd6\" (UID: \"510ef387-62e8-4e73-9543-abf7e58c8ae6\") " pod="kube-system/coredns-6f6b679f8f-fwsd6" Dec 13 02:15:41.316671 kubelet[1987]: I1213 02:15:41.316631 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj9fc\" (UniqueName: \"kubernetes.io/projected/510ef387-62e8-4e73-9543-abf7e58c8ae6-kube-api-access-qj9fc\") pod \"coredns-6f6b679f8f-fwsd6\" (UID: \"510ef387-62e8-4e73-9543-abf7e58c8ae6\") " pod="kube-system/coredns-6f6b679f8f-fwsd6" Dec 13 02:15:42.406606 env[1210]: time="2024-12-13T02:15:42.406519068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgxbk,Uid:d24b8d4b-7838-452a-932a-45aecf377ed6,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:42.424683 env[1210]: time="2024-12-13T02:15:42.424623798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fwsd6,Uid:510ef387-62e8-4e73-9543-abf7e58c8ae6,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:43.392199 systemd-networkd[1017]: cilium_host: Link UP Dec 13 02:15:43.392420 systemd-networkd[1017]: cilium_net: Link UP Dec 13 02:15:43.406816 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:15:43.406960 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:15:43.406559 systemd-networkd[1017]: cilium_net: Gained carrier Dec 13 02:15:43.407689 systemd-networkd[1017]: cilium_host: Gained carrier Dec 13 02:15:43.410315 systemd-networkd[1017]: cilium_net: Gained IPv6LL Dec 13 02:15:43.555757 systemd-networkd[1017]: cilium_vxlan: Link UP Dec 13 02:15:43.555774 systemd-networkd[1017]: cilium_vxlan: Gained carrier Dec 13 02:15:43.655613 systemd-networkd[1017]: cilium_host: Gained IPv6LL Dec 13 02:15:43.857459 kernel: NET: Registered PF_ALG protocol family Dec 13 02:15:44.766594 systemd-networkd[1017]: lxc_health: Link UP Dec 13 
02:15:44.788555 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:15:44.789721 systemd-networkd[1017]: lxc_health: Gained carrier Dec 13 02:15:44.807606 systemd-networkd[1017]: cilium_vxlan: Gained IPv6LL Dec 13 02:15:45.454307 systemd-networkd[1017]: lxc95096d1e5468: Link UP Dec 13 02:15:45.463417 kernel: eth0: renamed from tmp1725c Dec 13 02:15:45.479584 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc95096d1e5468: link becomes ready Dec 13 02:15:45.481653 systemd-networkd[1017]: lxc95096d1e5468: Gained carrier Dec 13 02:15:45.492692 systemd-networkd[1017]: lxc45c7a65ae6c5: Link UP Dec 13 02:15:45.504471 kernel: eth0: renamed from tmp71570 Dec 13 02:15:45.524436 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc45c7a65ae6c5: link becomes ready Dec 13 02:15:45.533630 systemd-networkd[1017]: lxc45c7a65ae6c5: Gained carrier Dec 13 02:15:45.797334 kubelet[1987]: I1213 02:15:45.797151 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-z28rd" podStartSLOduration=12.041006505 podStartE2EDuration="26.797126655s" podCreationTimestamp="2024-12-13 02:15:19 +0000 UTC" firstStartedPulling="2024-12-13 02:15:20.072310889 +0000 UTC m=+5.727387099" lastFinishedPulling="2024-12-13 02:15:34.828431034 +0000 UTC m=+20.483507249" observedRunningTime="2024-12-13 02:15:41.681245921 +0000 UTC m=+27.336322144" watchObservedRunningTime="2024-12-13 02:15:45.797126655 +0000 UTC m=+31.452202878" Dec 13 02:15:46.535714 systemd-networkd[1017]: lxc_health: Gained IPv6LL Dec 13 02:15:47.112176 systemd-networkd[1017]: lxc95096d1e5468: Gained IPv6LL Dec 13 02:15:47.239575 systemd-networkd[1017]: lxc45c7a65ae6c5: Gained IPv6LL Dec 13 02:15:49.760733 kubelet[1987]: I1213 02:15:49.760689 1987 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:15:50.374443 env[1210]: time="2024-12-13T02:15:50.372179439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:50.374443 env[1210]: time="2024-12-13T02:15:50.372304691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:50.374443 env[1210]: time="2024-12-13T02:15:50.372344727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:50.374443 env[1210]: time="2024-12-13T02:15:50.372586213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1725cc885f756049618cba74aaab1efd26a0e2dc81a533d84a72fcef724269df pid=3182 runtime=io.containerd.runc.v2 Dec 13 02:15:50.389766 env[1210]: time="2024-12-13T02:15:50.389594286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:50.389766 env[1210]: time="2024-12-13T02:15:50.389661717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:50.389766 env[1210]: time="2024-12-13T02:15:50.389681579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:50.390281 env[1210]: time="2024-12-13T02:15:50.390204675Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7157041cfed6da1a17c0b6c5e5ec2c99c1addb60e19aec1edc9961ee033b24c4 pid=3192 runtime=io.containerd.runc.v2 Dec 13 02:15:50.429531 systemd[1]: Started cri-containerd-1725cc885f756049618cba74aaab1efd26a0e2dc81a533d84a72fcef724269df.scope. Dec 13 02:15:50.457573 systemd[1]: run-containerd-runc-k8s.io-7157041cfed6da1a17c0b6c5e5ec2c99c1addb60e19aec1edc9961ee033b24c4-runc.2FJnSl.mount: Deactivated successfully. 
Dec 13 02:15:50.457731 systemd[1]: run-containerd-runc-k8s.io-1725cc885f756049618cba74aaab1efd26a0e2dc81a533d84a72fcef724269df-runc.1YXbsz.mount: Deactivated successfully. Dec 13 02:15:50.471636 systemd[1]: Started cri-containerd-7157041cfed6da1a17c0b6c5e5ec2c99c1addb60e19aec1edc9961ee033b24c4.scope. Dec 13 02:15:50.605434 env[1210]: time="2024-12-13T02:15:50.605356628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fwsd6,Uid:510ef387-62e8-4e73-9543-abf7e58c8ae6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7157041cfed6da1a17c0b6c5e5ec2c99c1addb60e19aec1edc9961ee033b24c4\"" Dec 13 02:15:50.614191 env[1210]: time="2024-12-13T02:15:50.614114288Z" level=info msg="CreateContainer within sandbox \"7157041cfed6da1a17c0b6c5e5ec2c99c1addb60e19aec1edc9961ee033b24c4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:15:50.615035 env[1210]: time="2024-12-13T02:15:50.614986131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fgxbk,Uid:d24b8d4b-7838-452a-932a-45aecf377ed6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1725cc885f756049618cba74aaab1efd26a0e2dc81a533d84a72fcef724269df\"" Dec 13 02:15:50.620210 env[1210]: time="2024-12-13T02:15:50.620162887Z" level=info msg="CreateContainer within sandbox \"1725cc885f756049618cba74aaab1efd26a0e2dc81a533d84a72fcef724269df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:15:50.642524 env[1210]: time="2024-12-13T02:15:50.641131246Z" level=info msg="CreateContainer within sandbox \"1725cc885f756049618cba74aaab1efd26a0e2dc81a533d84a72fcef724269df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33183c86c2720c47fab314b39958d9ce0a108d0d36c710d54ab23c8ff0b254b9\"" Dec 13 02:15:50.645922 env[1210]: time="2024-12-13T02:15:50.645862886Z" level=info msg="StartContainer for \"33183c86c2720c47fab314b39958d9ce0a108d0d36c710d54ab23c8ff0b254b9\"" Dec 13 02:15:50.652739 env[1210]: 
time="2024-12-13T02:15:50.652693547Z" level=info msg="CreateContainer within sandbox \"7157041cfed6da1a17c0b6c5e5ec2c99c1addb60e19aec1edc9961ee033b24c4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54bb3efc68b747b484ae18ebc20835653e1de3be6da143c9cd28f5e15faa9546\"" Dec 13 02:15:50.655720 env[1210]: time="2024-12-13T02:15:50.655682716Z" level=info msg="StartContainer for \"54bb3efc68b747b484ae18ebc20835653e1de3be6da143c9cd28f5e15faa9546\"" Dec 13 02:15:50.684309 systemd[1]: Started cri-containerd-33183c86c2720c47fab314b39958d9ce0a108d0d36c710d54ab23c8ff0b254b9.scope. Dec 13 02:15:50.721622 systemd[1]: Started cri-containerd-54bb3efc68b747b484ae18ebc20835653e1de3be6da143c9cd28f5e15faa9546.scope. Dec 13 02:15:50.766425 env[1210]: time="2024-12-13T02:15:50.766348467Z" level=info msg="StartContainer for \"33183c86c2720c47fab314b39958d9ce0a108d0d36c710d54ab23c8ff0b254b9\" returns successfully" Dec 13 02:15:50.792985 env[1210]: time="2024-12-13T02:15:50.792906682Z" level=info msg="StartContainer for \"54bb3efc68b747b484ae18ebc20835653e1de3be6da143c9cd28f5e15faa9546\" returns successfully" Dec 13 02:15:51.712549 kubelet[1987]: I1213 02:15:51.712461 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fwsd6" podStartSLOduration=32.712435783 podStartE2EDuration="32.712435783s" podCreationTimestamp="2024-12-13 02:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:15:51.705632065 +0000 UTC m=+37.360708289" watchObservedRunningTime="2024-12-13 02:15:51.712435783 +0000 UTC m=+37.367512007" Dec 13 02:15:51.756656 kubelet[1987]: I1213 02:15:51.756439 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fgxbk" podStartSLOduration=32.756412281 podStartE2EDuration="32.756412281s" podCreationTimestamp="2024-12-13 02:15:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:15:51.737931412 +0000 UTC m=+37.393007634" watchObservedRunningTime="2024-12-13 02:15:51.756412281 +0000 UTC m=+37.411488501" Dec 13 02:16:07.396181 systemd[1]: Started sshd@6-10.128.0.98:22-139.178.68.195:53544.service. Dec 13 02:16:07.682603 sshd[3342]: Accepted publickey for core from 139.178.68.195 port 53544 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:07.684732 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:07.691759 systemd[1]: Started session-6.scope. Dec 13 02:16:07.692448 systemd-logind[1214]: New session 6 of user core. Dec 13 02:16:08.000002 sshd[3342]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:08.005924 systemd-logind[1214]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:16:08.008892 systemd[1]: sshd@6-10.128.0.98:22-139.178.68.195:53544.service: Deactivated successfully. Dec 13 02:16:08.010206 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:16:08.013073 systemd-logind[1214]: Removed session 6. Dec 13 02:16:13.045962 systemd[1]: Started sshd@7-10.128.0.98:22-139.178.68.195:53554.service. Dec 13 02:16:13.335063 sshd[3355]: Accepted publickey for core from 139.178.68.195 port 53554 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:13.336949 sshd[3355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:13.343895 systemd[1]: Started session-7.scope. Dec 13 02:16:13.344808 systemd-logind[1214]: New session 7 of user core. Dec 13 02:16:13.619625 sshd[3355]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:13.624263 systemd[1]: sshd@7-10.128.0.98:22-139.178.68.195:53554.service: Deactivated successfully. Dec 13 02:16:13.625520 systemd[1]: session-7.scope: Deactivated successfully. 
Dec 13 02:16:13.626523 systemd-logind[1214]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:16:13.627802 systemd-logind[1214]: Removed session 7. Dec 13 02:16:18.667292 systemd[1]: Started sshd@8-10.128.0.98:22-139.178.68.195:39234.service. Dec 13 02:16:18.955828 sshd[3370]: Accepted publickey for core from 139.178.68.195 port 39234 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:18.957950 sshd[3370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:18.965033 systemd[1]: Started session-8.scope. Dec 13 02:16:18.965937 systemd-logind[1214]: New session 8 of user core. Dec 13 02:16:19.244587 sshd[3370]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:19.248921 systemd[1]: sshd@8-10.128.0.98:22-139.178.68.195:39234.service: Deactivated successfully. Dec 13 02:16:19.250059 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:16:19.250914 systemd-logind[1214]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:16:19.252119 systemd-logind[1214]: Removed session 8. Dec 13 02:16:24.292706 systemd[1]: Started sshd@9-10.128.0.98:22-139.178.68.195:39244.service. Dec 13 02:16:24.585026 sshd[3384]: Accepted publickey for core from 139.178.68.195 port 39244 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:24.587126 sshd[3384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:24.593977 systemd[1]: Started session-9.scope. Dec 13 02:16:24.594611 systemd-logind[1214]: New session 9 of user core. Dec 13 02:16:24.882425 sshd[3384]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:24.886853 systemd[1]: sshd@9-10.128.0.98:22-139.178.68.195:39244.service: Deactivated successfully. Dec 13 02:16:24.888064 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:16:24.889334 systemd-logind[1214]: Session 9 logged out. Waiting for processes to exit. 
Dec 13 02:16:24.890727 systemd-logind[1214]: Removed session 9. Dec 13 02:16:29.929731 systemd[1]: Started sshd@10-10.128.0.98:22-139.178.68.195:51286.service. Dec 13 02:16:30.218928 sshd[3396]: Accepted publickey for core from 139.178.68.195 port 51286 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:30.221056 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:30.228705 systemd[1]: Started session-10.scope. Dec 13 02:16:30.229636 systemd-logind[1214]: New session 10 of user core. Dec 13 02:16:30.509701 sshd[3396]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:30.515108 systemd[1]: sshd@10-10.128.0.98:22-139.178.68.195:51286.service: Deactivated successfully. Dec 13 02:16:30.516332 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:16:30.517957 systemd-logind[1214]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:16:30.519744 systemd-logind[1214]: Removed session 10. Dec 13 02:16:30.556950 systemd[1]: Started sshd@11-10.128.0.98:22-139.178.68.195:51288.service. Dec 13 02:16:30.849309 sshd[3409]: Accepted publickey for core from 139.178.68.195 port 51288 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:30.851841 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:30.859050 systemd[1]: Started session-11.scope. Dec 13 02:16:30.860079 systemd-logind[1214]: New session 11 of user core. Dec 13 02:16:31.188438 sshd[3409]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:31.193161 systemd[1]: sshd@11-10.128.0.98:22-139.178.68.195:51288.service: Deactivated successfully. Dec 13 02:16:31.194378 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:16:31.195266 systemd-logind[1214]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:16:31.196554 systemd-logind[1214]: Removed session 11. 
Dec 13 02:16:31.235509 systemd[1]: Started sshd@12-10.128.0.98:22-139.178.68.195:51298.service. Dec 13 02:16:31.528051 sshd[3418]: Accepted publickey for core from 139.178.68.195 port 51298 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:31.530337 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:31.537412 systemd[1]: Started session-12.scope. Dec 13 02:16:31.538507 systemd-logind[1214]: New session 12 of user core. Dec 13 02:16:31.822791 sshd[3418]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:31.827930 systemd[1]: sshd@12-10.128.0.98:22-139.178.68.195:51298.service: Deactivated successfully. Dec 13 02:16:31.829211 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:16:31.830316 systemd-logind[1214]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:16:31.831714 systemd-logind[1214]: Removed session 12. Dec 13 02:16:36.870141 systemd[1]: Started sshd@13-10.128.0.98:22-139.178.68.195:41488.service. Dec 13 02:16:37.161611 sshd[3431]: Accepted publickey for core from 139.178.68.195 port 41488 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:37.163463 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:37.170652 systemd[1]: Started session-13.scope. Dec 13 02:16:37.171754 systemd-logind[1214]: New session 13 of user core. Dec 13 02:16:37.454721 sshd[3431]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:37.460614 systemd[1]: sshd@13-10.128.0.98:22-139.178.68.195:41488.service: Deactivated successfully. Dec 13 02:16:37.461819 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:16:37.463755 systemd-logind[1214]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:16:37.465281 systemd-logind[1214]: Removed session 13. Dec 13 02:16:42.502255 systemd[1]: Started sshd@14-10.128.0.98:22-139.178.68.195:41502.service. 
Dec 13 02:16:42.795479 sshd[3443]: Accepted publickey for core from 139.178.68.195 port 41502 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:42.798059 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:42.805499 systemd[1]: Started session-14.scope. Dec 13 02:16:42.806115 systemd-logind[1214]: New session 14 of user core. Dec 13 02:16:43.087769 sshd[3443]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:43.092545 systemd[1]: sshd@14-10.128.0.98:22-139.178.68.195:41502.service: Deactivated successfully. Dec 13 02:16:43.093799 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:16:43.094798 systemd-logind[1214]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:16:43.096020 systemd-logind[1214]: Removed session 14. Dec 13 02:16:43.133234 systemd[1]: Started sshd@15-10.128.0.98:22-139.178.68.195:41506.service. Dec 13 02:16:43.422131 sshd[3455]: Accepted publickey for core from 139.178.68.195 port 41506 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:43.424459 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:43.431381 systemd[1]: Started session-15.scope. Dec 13 02:16:43.432021 systemd-logind[1214]: New session 15 of user core. Dec 13 02:16:43.785620 sshd[3455]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:43.789928 systemd[1]: sshd@15-10.128.0.98:22-139.178.68.195:41506.service: Deactivated successfully. Dec 13 02:16:43.791184 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:16:43.792279 systemd-logind[1214]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:16:43.793766 systemd-logind[1214]: Removed session 15. Dec 13 02:16:43.832227 systemd[1]: Started sshd@16-10.128.0.98:22-139.178.68.195:41520.service. 
Dec 13 02:16:44.123674 sshd[3464]: Accepted publickey for core from 139.178.68.195 port 41520 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:44.126085 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:44.133223 systemd[1]: Started session-16.scope. Dec 13 02:16:44.134191 systemd-logind[1214]: New session 16 of user core. Dec 13 02:16:46.043710 sshd[3464]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:46.049671 systemd[1]: sshd@16-10.128.0.98:22-139.178.68.195:41520.service: Deactivated successfully. Dec 13 02:16:46.050885 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:16:46.051441 systemd-logind[1214]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:16:46.052816 systemd-logind[1214]: Removed session 16. Dec 13 02:16:46.090235 systemd[1]: Started sshd@17-10.128.0.98:22-139.178.68.195:52814.service. Dec 13 02:16:46.386317 sshd[3481]: Accepted publickey for core from 139.178.68.195 port 52814 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:46.387262 sshd[3481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:46.396572 systemd[1]: Started session-17.scope. Dec 13 02:16:46.397174 systemd-logind[1214]: New session 17 of user core. Dec 13 02:16:46.818131 sshd[3481]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:46.822923 systemd[1]: sshd@17-10.128.0.98:22-139.178.68.195:52814.service: Deactivated successfully. Dec 13 02:16:46.824250 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:16:46.825344 systemd-logind[1214]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:16:46.826916 systemd-logind[1214]: Removed session 17. Dec 13 02:16:46.864166 systemd[1]: Started sshd@18-10.128.0.98:22-139.178.68.195:52818.service. 
Dec 13 02:16:47.156406 sshd[3491]: Accepted publickey for core from 139.178.68.195 port 52818 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:47.157893 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:47.167006 systemd[1]: Started session-18.scope. Dec 13 02:16:47.168143 systemd-logind[1214]: New session 18 of user core. Dec 13 02:16:47.446592 sshd[3491]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:47.452508 systemd-logind[1214]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:16:47.452723 systemd[1]: sshd@18-10.128.0.98:22-139.178.68.195:52818.service: Deactivated successfully. Dec 13 02:16:47.453935 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:16:47.455484 systemd-logind[1214]: Removed session 18. Dec 13 02:16:52.494959 systemd[1]: Started sshd@19-10.128.0.98:22-139.178.68.195:52822.service. Dec 13 02:16:52.789191 sshd[3505]: Accepted publickey for core from 139.178.68.195 port 52822 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:52.791499 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:52.799475 systemd-logind[1214]: New session 19 of user core. Dec 13 02:16:52.800535 systemd[1]: Started session-19.scope. Dec 13 02:16:53.078828 sshd[3505]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:53.083458 systemd[1]: sshd@19-10.128.0.98:22-139.178.68.195:52822.service: Deactivated successfully. Dec 13 02:16:53.084666 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:16:53.085617 systemd-logind[1214]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:16:53.086959 systemd-logind[1214]: Removed session 19. Dec 13 02:16:58.124980 systemd[1]: Started sshd@20-10.128.0.98:22-139.178.68.195:44964.service. 
Dec 13 02:16:58.416094 sshd[3521]: Accepted publickey for core from 139.178.68.195 port 44964 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:58.418311 sshd[3521]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:58.425330 systemd[1]: Started session-20.scope. Dec 13 02:16:58.426455 systemd-logind[1214]: New session 20 of user core. Dec 13 02:16:58.697710 sshd[3521]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:58.702976 systemd-logind[1214]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:16:58.703473 systemd[1]: sshd@20-10.128.0.98:22-139.178.68.195:44964.service: Deactivated successfully. Dec 13 02:16:58.704673 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:16:58.706126 systemd-logind[1214]: Removed session 20. Dec 13 02:17:03.746026 systemd[1]: Started sshd@21-10.128.0.98:22-139.178.68.195:44972.service. Dec 13 02:17:04.040740 sshd[3535]: Accepted publickey for core from 139.178.68.195 port 44972 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:17:04.042578 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:17:04.050227 systemd[1]: Started session-21.scope. Dec 13 02:17:04.051296 systemd-logind[1214]: New session 21 of user core. Dec 13 02:17:04.337348 sshd[3535]: pam_unix(sshd:session): session closed for user core Dec 13 02:17:04.342230 systemd[1]: sshd@21-10.128.0.98:22-139.178.68.195:44972.service: Deactivated successfully. Dec 13 02:17:04.343471 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:17:04.344507 systemd-logind[1214]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:17:04.345740 systemd-logind[1214]: Removed session 21. Dec 13 02:17:09.384992 systemd[1]: Started sshd@22-10.128.0.98:22-139.178.68.195:43000.service. 
Dec 13 02:17:09.676794 sshd[3547]: Accepted publickey for core from 139.178.68.195 port 43000 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:17:09.678897 sshd[3547]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:17:09.685778 systemd[1]: Started session-22.scope. Dec 13 02:17:09.686643 systemd-logind[1214]: New session 22 of user core. Dec 13 02:17:09.963563 sshd[3547]: pam_unix(sshd:session): session closed for user core Dec 13 02:17:09.967900 systemd[1]: sshd@22-10.128.0.98:22-139.178.68.195:43000.service: Deactivated successfully. Dec 13 02:17:09.969072 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 02:17:09.970007 systemd-logind[1214]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:17:09.971462 systemd-logind[1214]: Removed session 22. Dec 13 02:17:10.012526 systemd[1]: Started sshd@23-10.128.0.98:22-139.178.68.195:43008.service. Dec 13 02:17:10.310249 sshd[3559]: Accepted publickey for core from 139.178.68.195 port 43008 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:17:10.312056 sshd[3559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:17:10.319702 systemd[1]: Started session-23.scope. Dec 13 02:17:10.320558 systemd-logind[1214]: New session 23 of user core. 
Dec 13 02:17:10.756604 update_engine[1201]: I1213 02:17:10.756536 1201 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 02:17:10.756604 update_engine[1201]: I1213 02:17:10.756613 1201 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 02:17:10.759530 update_engine[1201]: I1213 02:17:10.758095 1201 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 02:17:10.759530 update_engine[1201]: I1213 02:17:10.759208 1201 omaha_request_params.cc:62] Current group set to lts Dec 13 02:17:10.760149 update_engine[1201]: I1213 02:17:10.759825 1201 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 02:17:10.760149 update_engine[1201]: I1213 02:17:10.759847 1201 update_attempter.cc:643] Scheduling an action processor start. Dec 13 02:17:10.760149 update_engine[1201]: I1213 02:17:10.759877 1201 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 02:17:10.760149 update_engine[1201]: I1213 02:17:10.759925 1201 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 02:17:10.760149 update_engine[1201]: I1213 02:17:10.760038 1201 omaha_request_action.cc:270] Posting an Omaha request to disabled Dec 13 02:17:10.760149 update_engine[1201]: I1213 02:17:10.760047 1201 omaha_request_action.cc:271] Request: Dec 13 02:17:10.760149 update_engine[1201]: Dec 13 02:17:10.760149 update_engine[1201]: Dec 13 02:17:10.760149 update_engine[1201]: Dec 13 02:17:10.760149 update_engine[1201]: Dec 13 02:17:10.760149 update_engine[1201]: Dec 13 02:17:10.760149 update_engine[1201]: Dec 13 02:17:10.760149 update_engine[1201]: Dec 13 02:17:10.760149 update_engine[1201]: Dec 13 02:17:10.760898 locksmithd[1242]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 02:17:10.763278 update_engine[1201]: I1213 02:17:10.760056 1201 libcurl_http_fetcher.cc:47] 
Starting/Resuming transfer Dec 13 02:17:10.763278 update_engine[1201]: I1213 02:17:10.762979 1201 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 02:17:10.763278 update_engine[1201]: I1213 02:17:10.763219 1201 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 02:17:10.816039 update_engine[1201]: E1213 02:17:10.815826 1201 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 02:17:10.816039 update_engine[1201]: I1213 02:17:10.815991 1201 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 02:17:12.191914 env[1210]: time="2024-12-13T02:17:12.191849259Z" level=info msg="StopContainer for \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\" with timeout 30 (s)" Dec 13 02:17:12.198471 env[1210]: time="2024-12-13T02:17:12.198330824Z" level=info msg="Stop container \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\" with signal terminated" Dec 13 02:17:12.208890 systemd[1]: run-containerd-runc-k8s.io-797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3-runc.krWTkd.mount: Deactivated successfully. 
Dec 13 02:17:12.274426 env[1210]: time="2024-12-13T02:17:12.274316131Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:17:12.284496 env[1210]: time="2024-12-13T02:17:12.284440226Z" level=info msg="StopContainer for \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\" with timeout 2 (s)" Dec 13 02:17:12.284981 env[1210]: time="2024-12-13T02:17:12.284933350Z" level=info msg="Stop container \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\" with signal terminated" Dec 13 02:17:12.288763 systemd[1]: cri-containerd-4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe.scope: Deactivated successfully. Dec 13 02:17:12.309113 systemd-networkd[1017]: lxc_health: Link DOWN Dec 13 02:17:12.309126 systemd-networkd[1017]: lxc_health: Lost carrier Dec 13 02:17:12.338746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe-rootfs.mount: Deactivated successfully. Dec 13 02:17:12.342643 systemd[1]: cri-containerd-797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3.scope: Deactivated successfully. Dec 13 02:17:12.343034 systemd[1]: cri-containerd-797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3.scope: Consumed 9.046s CPU time. 
Dec 13 02:17:12.369981 env[1210]: time="2024-12-13T02:17:12.369788221Z" level=info msg="shim disconnected" id=4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe
Dec 13 02:17:12.369981 env[1210]: time="2024-12-13T02:17:12.369854379Z" level=warning msg="cleaning up after shim disconnected" id=4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe namespace=k8s.io
Dec 13 02:17:12.369981 env[1210]: time="2024-12-13T02:17:12.369870504Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:12.376080 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3-rootfs.mount: Deactivated successfully.
Dec 13 02:17:12.392040 env[1210]: time="2024-12-13T02:17:12.391111029Z" level=info msg="shim disconnected" id=797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3
Dec 13 02:17:12.392040 env[1210]: time="2024-12-13T02:17:12.391759178Z" level=warning msg="cleaning up after shim disconnected" id=797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3 namespace=k8s.io
Dec 13 02:17:12.392040 env[1210]: time="2024-12-13T02:17:12.391783083Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:12.399820 env[1210]: time="2024-12-13T02:17:12.399773117Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3627 runtime=io.containerd.runc.v2\n"
Dec 13 02:17:12.403311 env[1210]: time="2024-12-13T02:17:12.403221704Z" level=info msg="StopContainer for \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\" returns successfully"
Dec 13 02:17:12.404070 env[1210]: time="2024-12-13T02:17:12.404027850Z" level=info msg="StopPodSandbox for \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\""
Dec 13 02:17:12.404200 env[1210]: time="2024-12-13T02:17:12.404118880Z" level=info msg="Container to stop \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:17:12.413558 env[1210]: time="2024-12-13T02:17:12.413514125Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3639 runtime=io.containerd.runc.v2\n"
Dec 13 02:17:12.416660 env[1210]: time="2024-12-13T02:17:12.416616195Z" level=info msg="StopContainer for \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\" returns successfully"
Dec 13 02:17:12.417558 env[1210]: time="2024-12-13T02:17:12.417473446Z" level=info msg="StopPodSandbox for \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\""
Dec 13 02:17:12.417924 env[1210]: time="2024-12-13T02:17:12.417888938Z" level=info msg="Container to stop \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:17:12.418164 env[1210]: time="2024-12-13T02:17:12.418099705Z" level=info msg="Container to stop \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:17:12.418349 env[1210]: time="2024-12-13T02:17:12.418318382Z" level=info msg="Container to stop \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:17:12.418682 env[1210]: time="2024-12-13T02:17:12.418650547Z" level=info msg="Container to stop \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:17:12.418852 env[1210]: time="2024-12-13T02:17:12.418824605Z" level=info msg="Container to stop \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:17:12.419429 systemd[1]: cri-containerd-c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903.scope: Deactivated successfully.
Dec 13 02:17:12.436889 systemd[1]: cri-containerd-2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71.scope: Deactivated successfully.
Dec 13 02:17:12.475799 env[1210]: time="2024-12-13T02:17:12.475732532Z" level=info msg="shim disconnected" id=c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903
Dec 13 02:17:12.475799 env[1210]: time="2024-12-13T02:17:12.475805336Z" level=warning msg="cleaning up after shim disconnected" id=c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903 namespace=k8s.io
Dec 13 02:17:12.475799 env[1210]: time="2024-12-13T02:17:12.475820496Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:12.484193 env[1210]: time="2024-12-13T02:17:12.484131350Z" level=info msg="shim disconnected" id=2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71
Dec 13 02:17:12.484505 env[1210]: time="2024-12-13T02:17:12.484196072Z" level=warning msg="cleaning up after shim disconnected" id=2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71 namespace=k8s.io
Dec 13 02:17:12.484505 env[1210]: time="2024-12-13T02:17:12.484210020Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:12.492787 env[1210]: time="2024-12-13T02:17:12.492718764Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3691 runtime=io.containerd.runc.v2\n"
Dec 13 02:17:12.493251 env[1210]: time="2024-12-13T02:17:12.493187229Z" level=info msg="TearDown network for sandbox \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\" successfully"
Dec 13 02:17:12.493372 env[1210]: time="2024-12-13T02:17:12.493255469Z" level=info msg="StopPodSandbox for \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\" returns successfully"
Dec 13 02:17:12.525082 env[1210]: time="2024-12-13T02:17:12.524937876Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3700 runtime=io.containerd.runc.v2\n"
Dec 13 02:17:12.525657 env[1210]: time="2024-12-13T02:17:12.525579600Z" level=info msg="TearDown network for sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" successfully"
Dec 13 02:17:12.525657 env[1210]: time="2024-12-13T02:17:12.525635291Z" level=info msg="StopPodSandbox for \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" returns successfully"
Dec 13 02:17:12.529197 kubelet[1987]: I1213 02:17:12.528960 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w98l\" (UniqueName: \"kubernetes.io/projected/942189c6-fc48-4d5d-981a-5b8d2fcca44a-kube-api-access-2w98l\") pod \"942189c6-fc48-4d5d-981a-5b8d2fcca44a\" (UID: \"942189c6-fc48-4d5d-981a-5b8d2fcca44a\") "
Dec 13 02:17:12.529197 kubelet[1987]: I1213 02:17:12.529042 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/942189c6-fc48-4d5d-981a-5b8d2fcca44a-cilium-config-path\") pod \"942189c6-fc48-4d5d-981a-5b8d2fcca44a\" (UID: \"942189c6-fc48-4d5d-981a-5b8d2fcca44a\") "
Dec 13 02:17:12.535647 kubelet[1987]: I1213 02:17:12.535604 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/942189c6-fc48-4d5d-981a-5b8d2fcca44a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "942189c6-fc48-4d5d-981a-5b8d2fcca44a" (UID: "942189c6-fc48-4d5d-981a-5b8d2fcca44a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:17:12.537812 kubelet[1987]: I1213 02:17:12.537766 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/942189c6-fc48-4d5d-981a-5b8d2fcca44a-kube-api-access-2w98l" (OuterVolumeSpecName: "kube-api-access-2w98l") pod "942189c6-fc48-4d5d-981a-5b8d2fcca44a" (UID: "942189c6-fc48-4d5d-981a-5b8d2fcca44a"). InnerVolumeSpecName "kube-api-access-2w98l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:17:12.630308 kubelet[1987]: I1213 02:17:12.630188 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-host-proc-sys-kernel\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.630308 kubelet[1987]: I1213 02:17:12.630278 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cni-path\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.630308 kubelet[1987]: I1213 02:17:12.630306 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-run\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.630686 kubelet[1987]: I1213 02:17:12.630421 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.630686 kubelet[1987]: I1213 02:17:12.630478 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cni-path" (OuterVolumeSpecName: "cni-path") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.630686 kubelet[1987]: I1213 02:17:12.630522 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-lib-modules\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.630686 kubelet[1987]: I1213 02:17:12.630574 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.630686 kubelet[1987]: I1213 02:17:12.630615 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db03aa3d-9bc2-4735-8067-99540f039d93-clustermesh-secrets\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.630971 kubelet[1987]: I1213 02:17:12.630670 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.631119 kubelet[1987]: I1213 02:17:12.631090 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db03aa3d-9bc2-4735-8067-99540f039d93-hubble-tls\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.631263 kubelet[1987]: I1213 02:17:12.631243 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-hostproc\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.631425 kubelet[1987]: I1213 02:17:12.631371 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-cgroup\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.632454 kubelet[1987]: I1213 02:17:12.632422 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-config-path\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.632591 kubelet[1987]: I1213 02:17:12.632470 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5mhp\" (UniqueName: \"kubernetes.io/projected/db03aa3d-9bc2-4735-8067-99540f039d93-kube-api-access-t5mhp\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.632591 kubelet[1987]: I1213 02:17:12.632517 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-etc-cni-netd\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.632591 kubelet[1987]: I1213 02:17:12.632544 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-xtables-lock\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.632764 kubelet[1987]: I1213 02:17:12.632592 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-bpf-maps\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.632764 kubelet[1987]: I1213 02:17:12.632622 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-host-proc-sys-net\") pod \"db03aa3d-9bc2-4735-8067-99540f039d93\" (UID: \"db03aa3d-9bc2-4735-8067-99540f039d93\") "
Dec 13 02:17:12.632764 kubelet[1987]: I1213 02:17:12.632700 1987 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-host-proc-sys-kernel\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.632764 kubelet[1987]: I1213 02:17:12.632721 1987 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cni-path\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.632764 kubelet[1987]: I1213 02:17:12.632761 1987 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-run\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.633152 kubelet[1987]: I1213 02:17:12.632779 1987 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-lib-modules\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.633152 kubelet[1987]: I1213 02:17:12.632796 1987 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/942189c6-fc48-4d5d-981a-5b8d2fcca44a-cilium-config-path\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.633152 kubelet[1987]: I1213 02:17:12.632813 1987 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2w98l\" (UniqueName: \"kubernetes.io/projected/942189c6-fc48-4d5d-981a-5b8d2fcca44a-kube-api-access-2w98l\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.633152 kubelet[1987]: I1213 02:17:12.631539 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.633152 kubelet[1987]: I1213 02:17:12.632871 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.633565 kubelet[1987]: I1213 02:17:12.633537 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-hostproc" (OuterVolumeSpecName: "hostproc") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.635984 kubelet[1987]: I1213 02:17:12.633709 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.636143 kubelet[1987]: I1213 02:17:12.635894 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.636258 kubelet[1987]: I1213 02:17:12.635923 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:17:12.639176 kubelet[1987]: I1213 02:17:12.639142 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:17:12.641265 kubelet[1987]: I1213 02:17:12.641227 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db03aa3d-9bc2-4735-8067-99540f039d93-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:17:12.643182 kubelet[1987]: I1213 02:17:12.643149 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db03aa3d-9bc2-4735-8067-99540f039d93-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:17:12.643370 kubelet[1987]: I1213 02:17:12.643241 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db03aa3d-9bc2-4735-8067-99540f039d93-kube-api-access-t5mhp" (OuterVolumeSpecName: "kube-api-access-t5mhp") pod "db03aa3d-9bc2-4735-8067-99540f039d93" (UID: "db03aa3d-9bc2-4735-8067-99540f039d93"). InnerVolumeSpecName "kube-api-access-t5mhp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:17:12.733968 kubelet[1987]: I1213 02:17:12.733804 1987 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-etc-cni-netd\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.733968 kubelet[1987]: I1213 02:17:12.733848 1987 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-xtables-lock\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.733968 kubelet[1987]: I1213 02:17:12.733867 1987 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-bpf-maps\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.733968 kubelet[1987]: I1213 02:17:12.733883 1987 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-host-proc-sys-net\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.733968 kubelet[1987]: I1213 02:17:12.733898 1987 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db03aa3d-9bc2-4735-8067-99540f039d93-hubble-tls\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.733968 kubelet[1987]: I1213 02:17:12.733913 1987 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db03aa3d-9bc2-4735-8067-99540f039d93-clustermesh-secrets\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.733968 kubelet[1987]: I1213 02:17:12.733935 1987 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-cgroup\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.734506 kubelet[1987]: I1213 02:17:12.733952 1987 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db03aa3d-9bc2-4735-8067-99540f039d93-cilium-config-path\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.734506 kubelet[1987]: I1213 02:17:12.733969 1987 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-t5mhp\" (UniqueName: \"kubernetes.io/projected/db03aa3d-9bc2-4735-8067-99540f039d93-kube-api-access-t5mhp\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.734506 kubelet[1987]: I1213 02:17:12.733985 1987 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db03aa3d-9bc2-4735-8067-99540f039d93-hostproc\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 02:17:12.883991 kubelet[1987]: I1213 02:17:12.883951 1987 scope.go:117] "RemoveContainer" containerID="4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe"
Dec 13 02:17:12.892904 systemd[1]: Removed slice kubepods-besteffort-pod942189c6_fc48_4d5d_981a_5b8d2fcca44a.slice.
Dec 13 02:17:12.893753 env[1210]: time="2024-12-13T02:17:12.893707654Z" level=info msg="RemoveContainer for \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\""
Dec 13 02:17:12.902115 env[1210]: time="2024-12-13T02:17:12.902042078Z" level=info msg="RemoveContainer for \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\" returns successfully"
Dec 13 02:17:12.904480 kubelet[1987]: I1213 02:17:12.903375 1987 scope.go:117] "RemoveContainer" containerID="4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe"
Dec 13 02:17:12.904480 kubelet[1987]: E1213 02:17:12.904005 1987 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\": not found" containerID="4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe"
Dec 13 02:17:12.904480 kubelet[1987]: I1213 02:17:12.904043 1987 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe"} err="failed to get container status \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\": not found"
Dec 13 02:17:12.904480 kubelet[1987]: I1213 02:17:12.904151 1987 scope.go:117] "RemoveContainer" containerID="797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3"
Dec 13 02:17:12.904891 env[1210]: time="2024-12-13T02:17:12.903751517Z" level=error msg="ContainerStatus for \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b2e5e5f15fdd384776688584f60d908205eb5af2196b081c83ebe7166cb74fe\": not found"
Dec 13 02:17:12.905855 env[1210]: time="2024-12-13T02:17:12.905592762Z" level=info msg="RemoveContainer for \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\""
Dec 13 02:17:12.911801 systemd[1]: Removed slice kubepods-burstable-poddb03aa3d_9bc2_4735_8067_99540f039d93.slice.
Dec 13 02:17:12.911972 systemd[1]: kubepods-burstable-poddb03aa3d_9bc2_4735_8067_99540f039d93.slice: Consumed 9.211s CPU time.
Dec 13 02:17:12.917574 env[1210]: time="2024-12-13T02:17:12.917511100Z" level=info msg="RemoveContainer for \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\" returns successfully"
Dec 13 02:17:12.918194 kubelet[1987]: I1213 02:17:12.918129 1987 scope.go:117] "RemoveContainer" containerID="49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb"
Dec 13 02:17:12.924054 env[1210]: time="2024-12-13T02:17:12.923454484Z" level=info msg="RemoveContainer for \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\""
Dec 13 02:17:12.929371 env[1210]: time="2024-12-13T02:17:12.929269743Z" level=info msg="RemoveContainer for \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\" returns successfully"
Dec 13 02:17:12.930361 kubelet[1987]: I1213 02:17:12.930313 1987 scope.go:117] "RemoveContainer" containerID="19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1"
Dec 13 02:17:12.933553 env[1210]: time="2024-12-13T02:17:12.933372918Z" level=info msg="RemoveContainer for \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\""
Dec 13 02:17:12.942445 env[1210]: time="2024-12-13T02:17:12.942245513Z" level=info msg="RemoveContainer for \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\" returns successfully"
Dec 13 02:17:12.944612 kubelet[1987]: I1213 02:17:12.944543 1987 scope.go:117] "RemoveContainer" containerID="6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c"
Dec 13 02:17:12.947535 env[1210]: time="2024-12-13T02:17:12.947467765Z" level=info msg="RemoveContainer for \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\""
Dec 13 02:17:12.953484 env[1210]: time="2024-12-13T02:17:12.953415613Z" level=info msg="RemoveContainer for \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\" returns successfully"
Dec 13 02:17:12.953858 kubelet[1987]: I1213 02:17:12.953828 1987 scope.go:117] "RemoveContainer" containerID="90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00"
Dec 13 02:17:12.955648 env[1210]: time="2024-12-13T02:17:12.955568027Z" level=info msg="RemoveContainer for \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\""
Dec 13 02:17:12.962298 env[1210]: time="2024-12-13T02:17:12.962129434Z" level=info msg="RemoveContainer for \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\" returns successfully"
Dec 13 02:17:12.962978 kubelet[1987]: I1213 02:17:12.962937 1987 scope.go:117] "RemoveContainer" containerID="797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3"
Dec 13 02:17:12.963532 env[1210]: time="2024-12-13T02:17:12.963295513Z" level=error msg="ContainerStatus for \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\": not found"
Dec 13 02:17:12.963764 kubelet[1987]: E1213 02:17:12.963727 1987 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\": not found" containerID="797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3"
Dec 13 02:17:12.963906 kubelet[1987]: I1213 02:17:12.963773 1987 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3"} err="failed to get container status \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\": rpc error: code = NotFound desc = an error occurred when try to find container \"797000a02f7dceb20f80b26cce306970b504236c48a1307643772d8625d67cb3\": not found"
Dec 13 02:17:12.963906 kubelet[1987]: I1213 02:17:12.963812 1987 scope.go:117] "RemoveContainer" containerID="49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb"
Dec 13 02:17:12.964157 env[1210]: time="2024-12-13T02:17:12.964081193Z" level=error msg="ContainerStatus for \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\": not found"
Dec 13 02:17:12.964407 kubelet[1987]: E1213 02:17:12.964326 1987 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\": not found" containerID="49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb"
Dec 13 02:17:12.964559 kubelet[1987]: I1213 02:17:12.964372 1987 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb"} err="failed to get container status \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"49e1e95e2b0cbe20d430e2ad409ab429980df25af992c9dd662561fec6bfa4fb\": not found"
Dec 13 02:17:12.964559 kubelet[1987]: I1213 02:17:12.964441 1987 scope.go:117] "RemoveContainer" containerID="19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1"
Dec 13 02:17:12.964753 env[1210]: time="2024-12-13T02:17:12.964677286Z" level=error msg="ContainerStatus for \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\": not found"
Dec 13 02:17:12.964921 kubelet[1987]: E1213 02:17:12.964890 1987 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\": not found" containerID="19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1"
Dec 13 02:17:12.965015 kubelet[1987]: I1213 02:17:12.964926 1987 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1"} err="failed to get container status \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"19cf06471a70084bf2037261054c06bc4bb9efcae33e90139becc0b8088a47a1\": not found"
Dec 13 02:17:12.965015 kubelet[1987]: I1213 02:17:12.964953 1987 scope.go:117] "RemoveContainer" containerID="6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c"
Dec 13 02:17:12.965303 env[1210]: time="2024-12-13T02:17:12.965158516Z" level=error msg="ContainerStatus for \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\": not found"
Dec 13 02:17:12.965502 kubelet[1987]: E1213 02:17:12.965471 1987 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\": not found" containerID="6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c"
Dec 13 02:17:12.965580 kubelet[1987]: I1213 02:17:12.965504 1987 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c"} err="failed to get container status \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d99540c88f5c6c7b64d373cef8a7eab14b8b38a39be92ab076ce6180582cd2c\": not found"
Dec 13 02:17:12.965580 kubelet[1987]: I1213 02:17:12.965531 1987 scope.go:117] "RemoveContainer" containerID="90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00"
Dec 13 02:17:12.965963 env[1210]: time="2024-12-13T02:17:12.965873964Z" level=error msg="ContainerStatus for \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\": not found"
Dec 13 02:17:12.966208 kubelet[1987]: E1213 02:17:12.966114 1987 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\": not found" containerID="90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00"
Dec 13 02:17:12.966340 kubelet[1987]: I1213 02:17:12.966232 1987 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00"} err="failed to get container status \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\": rpc error: code = NotFound desc = an error occurred when try to find container \"90f739c81c4b493eb45a25c3f461e9fb46466a48237ba0e78b92239d269d1e00\": not found"
Dec 13 02:17:13.188336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903-rootfs.mount: Deactivated successfully.
Dec 13 02:17:13.188535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903-shm.mount: Deactivated successfully.
Dec 13 02:17:13.188659 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71-rootfs.mount: Deactivated successfully.
Dec 13 02:17:13.188759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71-shm.mount: Deactivated successfully.
Dec 13 02:17:13.188865 systemd[1]: var-lib-kubelet-pods-942189c6\x2dfc48\x2d4d5d\x2d981a\x2d5b8d2fcca44a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2w98l.mount: Deactivated successfully.
Dec 13 02:17:13.188978 systemd[1]: var-lib-kubelet-pods-db03aa3d\x2d9bc2\x2d4735\x2d8067\x2d99540f039d93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt5mhp.mount: Deactivated successfully.
Dec 13 02:17:13.189080 systemd[1]: var-lib-kubelet-pods-db03aa3d\x2d9bc2\x2d4735\x2d8067\x2d99540f039d93-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:17:13.189182 systemd[1]: var-lib-kubelet-pods-db03aa3d\x2d9bc2\x2d4735\x2d8067\x2d99540f039d93-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:17:14.116103 sshd[3559]: pam_unix(sshd:session): session closed for user core
Dec 13 02:17:14.121041 systemd-logind[1214]: Session 23 logged out. Waiting for processes to exit.
Dec 13 02:17:14.121295 systemd[1]: sshd@23-10.128.0.98:22-139.178.68.195:43008.service: Deactivated successfully.
Dec 13 02:17:14.122462 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 02:17:14.122680 systemd[1]: session-23.scope: Consumed 1.049s CPU time.
Dec 13 02:17:14.123866 systemd-logind[1214]: Removed session 23.
Dec 13 02:17:14.163471 systemd[1]: Started sshd@24-10.128.0.98:22-139.178.68.195:43018.service.
Dec 13 02:17:14.458553 sshd[3725]: Accepted publickey for core from 139.178.68.195 port 43018 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:17:14.460350 sshd[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:14.467515 systemd[1]: Started session-24.scope.
Dec 13 02:17:14.468421 systemd-logind[1214]: New session 24 of user core.
Dec 13 02:17:14.518301 kubelet[1987]: I1213 02:17:14.518249 1987 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="942189c6-fc48-4d5d-981a-5b8d2fcca44a" path="/var/lib/kubelet/pods/942189c6-fc48-4d5d-981a-5b8d2fcca44a/volumes"
Dec 13 02:17:14.519069 kubelet[1987]: I1213 02:17:14.519037 1987 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db03aa3d-9bc2-4735-8067-99540f039d93" path="/var/lib/kubelet/pods/db03aa3d-9bc2-4735-8067-99540f039d93/volumes"
Dec 13 02:17:14.523069 env[1210]: time="2024-12-13T02:17:14.523013198Z" level=info msg="StopPodSandbox for \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\""
Dec 13 02:17:14.523705 env[1210]: time="2024-12-13T02:17:14.523143504Z" level=info msg="TearDown network for sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" successfully"
Dec 13 02:17:14.523705 env[1210]: time="2024-12-13T02:17:14.523209989Z" level=info msg="StopPodSandbox for \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" returns successfully"
Dec 13 02:17:14.523834 env[1210]: time="2024-12-13T02:17:14.523760989Z" level=info msg="RemovePodSandbox for \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\""
Dec 13 02:17:14.523834 env[1210]: time="2024-12-13T02:17:14.523798926Z" level=info msg="Forcibly stopping sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\""
Dec 13 02:17:14.523939 env[1210]: time="2024-12-13T02:17:14.523910608Z" level=info msg="TearDown network for sandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" successfully"
Dec 13 02:17:14.529135 env[1210]: time="2024-12-13T02:17:14.528987789Z" level=info msg="RemovePodSandbox \"2d5f0a5866e5886e507472f38ef1e7aaf8ca7549a40bd73a2d12cfeb0e50bc71\" returns successfully"
Dec 13 02:17:14.529770 env[1210]: time="2024-12-13T02:17:14.529718290Z" level=info msg="StopPodSandbox for \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\""
Dec 13 02:17:14.529907 env[1210]: time="2024-12-13T02:17:14.529820677Z" level=info msg="TearDown network for sandbox \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\" successfully"
Dec 13 02:17:14.529907 env[1210]: time="2024-12-13T02:17:14.529871868Z" level=info msg="StopPodSandbox for \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\" returns successfully"
Dec 13 02:17:14.530437 env[1210]: time="2024-12-13T02:17:14.530244316Z" level=info msg="RemovePodSandbox for \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\""
Dec 13 02:17:14.530437 env[1210]: time="2024-12-13T02:17:14.530281348Z" level=info msg="Forcibly stopping sandbox \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\""
Dec 13 02:17:14.530635 env[1210]: time="2024-12-13T02:17:14.530380015Z" level=info msg="TearDown network for sandbox \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\" successfully"
Dec 13 02:17:14.535723 env[1210]: time="2024-12-13T02:17:14.535582474Z" level=info msg="RemovePodSandbox \"c73a5a4b3b486772ba7cc75248332ab6772c1f6fe640bfd9512633dd073c8903\" returns successfully"
Dec 13 02:17:14.693050 kubelet[1987]: E1213 02:17:14.692923 1987 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:17:15.434103 kubelet[1987]: E1213 02:17:15.434030 1987 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db03aa3d-9bc2-4735-8067-99540f039d93" containerName="mount-cgroup"
Dec 13 02:17:15.434379 kubelet[1987]: E1213 02:17:15.434355 1987 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="942189c6-fc48-4d5d-981a-5b8d2fcca44a" containerName="cilium-operator"
Dec 13 02:17:15.434541 kubelet[1987]: E1213 02:17:15.434521 1987 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db03aa3d-9bc2-4735-8067-99540f039d93" containerName="apply-sysctl-overwrites"
Dec 13 02:17:15.434682 kubelet[1987]: E1213 02:17:15.434663 1987 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db03aa3d-9bc2-4735-8067-99540f039d93" containerName="mount-bpf-fs"
Dec 13 02:17:15.434787 kubelet[1987]: E1213 02:17:15.434770 1987 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db03aa3d-9bc2-4735-8067-99540f039d93" containerName="clean-cilium-state"
Dec 13 02:17:15.434914 kubelet[1987]: E1213 02:17:15.434895 1987 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="db03aa3d-9bc2-4735-8067-99540f039d93" containerName="cilium-agent"
Dec 13 02:17:15.435092 kubelet[1987]: I1213 02:17:15.435073 1987 memory_manager.go:354] "RemoveStaleState removing state" podUID="db03aa3d-9bc2-4735-8067-99540f039d93" containerName="cilium-agent"
Dec 13 02:17:15.435211 kubelet[1987]: I1213 02:17:15.435194 1987 memory_manager.go:354] "RemoveStaleState removing state" podUID="942189c6-fc48-4d5d-981a-5b8d2fcca44a" containerName="cilium-operator"
Dec 13 02:17:15.442917 sshd[3725]: pam_unix(sshd:session): session closed for user core
Dec 13 02:17:15.446677 systemd[1]: Created slice kubepods-burstable-pod09c92c1e_bbba_42c5_bf1b_fa93a0f85bef.slice.
Dec 13 02:17:15.451711 systemd[1]: sshd@24-10.128.0.98:22-139.178.68.195:43018.service: Deactivated successfully.
Dec 13 02:17:15.452961 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 02:17:15.455311 systemd-logind[1214]: Session 24 logged out. Waiting for processes to exit.
Dec 13 02:17:15.457584 systemd-logind[1214]: Removed session 24.
Dec 13 02:17:15.496185 systemd[1]: Started sshd@25-10.128.0.98:22-139.178.68.195:43022.service.
Dec 13 02:17:15.554477 kubelet[1987]: I1213 02:17:15.554420 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-etc-cni-netd\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555155 kubelet[1987]: I1213 02:17:15.555127 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-hubble-tls\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555304 kubelet[1987]: I1213 02:17:15.555287 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vck7j\" (UniqueName: \"kubernetes.io/projected/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-kube-api-access-vck7j\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555466 kubelet[1987]: I1213 02:17:15.555437 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-lib-modules\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555559 kubelet[1987]: I1213 02:17:15.555526 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-config-path\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555642 kubelet[1987]: I1213 02:17:15.555560 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-xtables-lock\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555642 kubelet[1987]: I1213 02:17:15.555586 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-ipsec-secrets\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555642 kubelet[1987]: I1213 02:17:15.555618 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cni-path\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555810 kubelet[1987]: I1213 02:17:15.555645 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-hostproc\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555810 kubelet[1987]: I1213 02:17:15.555671 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-clustermesh-secrets\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555810 kubelet[1987]: I1213 02:17:15.555700 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-host-proc-sys-kernel\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555810 kubelet[1987]: I1213 02:17:15.555727 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-run\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555810 kubelet[1987]: I1213 02:17:15.555759 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-bpf-maps\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.555810 kubelet[1987]: I1213 02:17:15.555786 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-cgroup\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.556057 kubelet[1987]: I1213 02:17:15.555813 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-host-proc-sys-net\") pod \"cilium-lnbbd\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " pod="kube-system/cilium-lnbbd"
Dec 13 02:17:15.760100 env[1210]: time="2024-12-13T02:17:15.759613337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lnbbd,Uid:09c92c1e-bbba-42c5-bf1b-fa93a0f85bef,Namespace:kube-system,Attempt:0,}"
Dec 13 02:17:15.790087 env[1210]: time="2024-12-13T02:17:15.789974170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:17:15.790307 env[1210]: time="2024-12-13T02:17:15.790108177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:17:15.790307 env[1210]: time="2024-12-13T02:17:15.790149203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:17:15.790682 env[1210]: time="2024-12-13T02:17:15.790491836Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c pid=3750 runtime=io.containerd.runc.v2
Dec 13 02:17:15.808370 systemd[1]: Started cri-containerd-d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c.scope.
Dec 13 02:17:15.829163 sshd[3737]: Accepted publickey for core from 139.178.68.195 port 43022 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:17:15.830333 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:15.841510 systemd[1]: Started session-25.scope.
Dec 13 02:17:15.843374 systemd-logind[1214]: New session 25 of user core.
Dec 13 02:17:15.868277 env[1210]: time="2024-12-13T02:17:15.868216793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lnbbd,Uid:09c92c1e-bbba-42c5-bf1b-fa93a0f85bef,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c\""
Dec 13 02:17:15.875328 env[1210]: time="2024-12-13T02:17:15.874835734Z" level=info msg="CreateContainer within sandbox \"d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:17:15.894600 env[1210]: time="2024-12-13T02:17:15.894537568Z" level=info msg="CreateContainer within sandbox \"d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992\""
Dec 13 02:17:15.897952 env[1210]: time="2024-12-13T02:17:15.897890798Z" level=info msg="StartContainer for \"b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992\""
Dec 13 02:17:15.931040 systemd[1]: Started cri-containerd-b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992.scope.
Dec 13 02:17:15.947804 systemd[1]: cri-containerd-b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992.scope: Deactivated successfully.
Dec 13 02:17:15.965773 env[1210]: time="2024-12-13T02:17:15.965680030Z" level=info msg="shim disconnected" id=b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992
Dec 13 02:17:15.965773 env[1210]: time="2024-12-13T02:17:15.965749951Z" level=warning msg="cleaning up after shim disconnected" id=b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992 namespace=k8s.io
Dec 13 02:17:15.965773 env[1210]: time="2024-12-13T02:17:15.965765287Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:15.977159 env[1210]: time="2024-12-13T02:17:15.977089847Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3809 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:17:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 02:17:15.977609 env[1210]: time="2024-12-13T02:17:15.977468980Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed"
Dec 13 02:17:15.979684 env[1210]: time="2024-12-13T02:17:15.979484159Z" level=error msg="Failed to pipe stderr of container \"b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992\"" error="reading from a closed fifo"
Dec 13 02:17:15.979822 env[1210]: time="2024-12-13T02:17:15.979715589Z" level=error msg="Failed to pipe stdout of container \"b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992\"" error="reading from a closed fifo"
Dec 13 02:17:15.982354 env[1210]: time="2024-12-13T02:17:15.982276921Z" level=error msg="StartContainer for \"b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 02:17:15.982711 kubelet[1987]: E1213 02:17:15.982656 1987 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992"
Dec 13 02:17:15.982903 kubelet[1987]: E1213 02:17:15.982871 1987 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 02:17:15.982903 kubelet[1987]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 02:17:15.982903 kubelet[1987]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 02:17:15.982903 kubelet[1987]: rm /hostbin/cilium-mount
Dec 13 02:17:15.984484 kubelet[1987]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vck7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lnbbd_kube-system(09c92c1e-bbba-42c5-bf1b-fa93a0f85bef): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 02:17:15.984484 kubelet[1987]: > logger="UnhandledError"
Dec 13 02:17:15.984484 kubelet[1987]: E1213 02:17:15.984364 1987 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lnbbd" podUID="09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"
Dec 13 02:17:16.135161 sshd[3737]: pam_unix(sshd:session): session closed for user core
Dec 13 02:17:16.141664 systemd-logind[1214]: Session 25 logged out. Waiting for processes to exit.
Dec 13 02:17:16.141985 systemd[1]: sshd@25-10.128.0.98:22-139.178.68.195:43022.service: Deactivated successfully.
Dec 13 02:17:16.143187 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 02:17:16.144994 systemd-logind[1214]: Removed session 25.
Dec 13 02:17:16.181768 systemd[1]: Started sshd@26-10.128.0.98:22-139.178.68.195:54146.service.
Dec 13 02:17:16.470626 sshd[3831]: Accepted publickey for core from 139.178.68.195 port 54146 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw
Dec 13 02:17:16.472620 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:16.479590 systemd[1]: Started session-26.scope.
Dec 13 02:17:16.480199 systemd-logind[1214]: New session 26 of user core.
Dec 13 02:17:16.923226 env[1210]: time="2024-12-13T02:17:16.920875277Z" level=info msg="CreateContainer within sandbox \"d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}"
Dec 13 02:17:16.945987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount154343461.mount: Deactivated successfully.
Dec 13 02:17:16.958507 env[1210]: time="2024-12-13T02:17:16.958423081Z" level=info msg="CreateContainer within sandbox \"d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7\""
Dec 13 02:17:16.959669 env[1210]: time="2024-12-13T02:17:16.959625261Z" level=info msg="StartContainer for \"5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7\""
Dec 13 02:17:16.964774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226897777.mount: Deactivated successfully.
Dec 13 02:17:16.998964 systemd[1]: Started cri-containerd-5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7.scope.
Dec 13 02:17:17.013965 systemd[1]: cri-containerd-5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7.scope: Deactivated successfully.
Dec 13 02:17:17.026333 env[1210]: time="2024-12-13T02:17:17.026257010Z" level=info msg="shim disconnected" id=5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7
Dec 13 02:17:17.026333 env[1210]: time="2024-12-13T02:17:17.026336946Z" level=warning msg="cleaning up after shim disconnected" id=5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7 namespace=k8s.io
Dec 13 02:17:17.026711 env[1210]: time="2024-12-13T02:17:17.026350861Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:17.039556 env[1210]: time="2024-12-13T02:17:17.039474681Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3867 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:17:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 02:17:17.039928 env[1210]: time="2024-12-13T02:17:17.039846095Z" level=error msg="copy shim log" error="read /proc/self/fd/31: file already closed"
Dec 13 02:17:17.040520 env[1210]: time="2024-12-13T02:17:17.040462716Z" level=error msg="Failed to pipe stdout of container \"5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7\"" error="reading from a closed fifo"
Dec 13 02:17:17.042270 env[1210]: time="2024-12-13T02:17:17.042171188Z" level=error msg="Failed to pipe stderr of container \"5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7\"" error="reading from a closed fifo"
Dec 13 02:17:17.044705 env[1210]: time="2024-12-13T02:17:17.044640794Z" level=error msg="StartContainer for \"5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 02:17:17.044950 kubelet[1987]: E1213 02:17:17.044895 1987 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7"
Dec 13 02:17:17.045529 kubelet[1987]: E1213 02:17:17.045068 1987 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 02:17:17.045529 kubelet[1987]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 02:17:17.045529 kubelet[1987]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 02:17:17.045529 kubelet[1987]: rm /hostbin/cilium-mount
Dec 13 02:17:17.045529 kubelet[1987]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vck7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-lnbbd_kube-system(09c92c1e-bbba-42c5-bf1b-fa93a0f85bef): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 02:17:17.045529 kubelet[1987]: > logger="UnhandledError"
Dec 13 02:17:17.047199 kubelet[1987]: E1213 02:17:17.046946 1987 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lnbbd" podUID="09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"
Dec 13 02:17:17.558127 kubelet[1987]: I1213 02:17:17.558044 1987 setters.go:600] "Node became not ready" node="ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:17:17Z","lastTransitionTime":"2024-12-13T02:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:17:17.923273 kubelet[1987]: I1213 02:17:17.923147 1987 scope.go:117] "RemoveContainer" containerID="b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992"
Dec 13 02:17:17.925013 env[1210]: time="2024-12-13T02:17:17.924183572Z" level=info msg="StopPodSandbox for \"d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c\""
Dec 13 02:17:17.925013 env[1210]: time="2024-12-13T02:17:17.924281502Z" level=info msg="Container to stop \"b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:17:17.925013 env[1210]: time="2024-12-13T02:17:17.924306944Z" level=info msg="Container to stop \"5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:17:17.928042 env[1210]: time="2024-12-13T02:17:17.928004061Z" level=info msg="RemoveContainer for \"b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992\""
Dec 13 02:17:17.930790 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c-shm.mount: Deactivated successfully.
Dec 13 02:17:17.945856 env[1210]: time="2024-12-13T02:17:17.945659546Z" level=info msg="RemoveContainer for \"b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992\" returns successfully"
Dec 13 02:17:17.950424 systemd[1]: cri-containerd-d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c.scope: Deactivated successfully.
Dec 13 02:17:17.983756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c-rootfs.mount: Deactivated successfully.
Dec 13 02:17:17.989559 env[1210]: time="2024-12-13T02:17:17.989502777Z" level=info msg="shim disconnected" id=d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c
Dec 13 02:17:17.989860 env[1210]: time="2024-12-13T02:17:17.989830913Z" level=warning msg="cleaning up after shim disconnected" id=d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c namespace=k8s.io
Dec 13 02:17:17.990015 env[1210]: time="2024-12-13T02:17:17.989988013Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:18.002217 env[1210]: time="2024-12-13T02:17:18.002164942Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3898 runtime=io.containerd.runc.v2\n"
Dec 13 02:17:18.002692 env[1210]: time="2024-12-13T02:17:18.002652570Z" level=info msg="TearDown network for sandbox \"d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c\" successfully"
Dec 13 02:17:18.002808 env[1210]: time="2024-12-13T02:17:18.002694078Z" level=info msg="StopPodSandbox for \"d2ae87056a6317b46aab991ccc287c5330ca7f6cb183b91dcbb0c2b1f6eabd2c\" returns successfully"
Dec 13 02:17:18.070774 kubelet[1987]: I1213 02:17:18.070724 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vck7j\" (UniqueName: \"kubernetes.io/projected/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-kube-api-access-vck7j\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.070780 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-clustermesh-secrets\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.070836 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-hostproc\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.070859 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-host-proc-sys-kernel\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.070883 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-etc-cni-netd\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.070911 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-ipsec-secrets\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.070935 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cni-path\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.070967 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-run\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.070998 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-hubble-tls\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.071028 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-config-path\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.071051 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-cgroup\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") "
Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.071080 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName:
\"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-lib-modules\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.071105 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-xtables-lock\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.071133 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-bpf-maps\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.071162 1987 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-host-proc-sys-net\") pod \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\" (UID: \"09c92c1e-bbba-42c5-bf1b-fa93a0f85bef\") " Dec 13 02:17:18.071527 kubelet[1987]: I1213 02:17:18.071261 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.072428 kubelet[1987]: I1213 02:17:18.071749 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.072725 kubelet[1987]: I1213 02:17:18.072684 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-hostproc" (OuterVolumeSpecName: "hostproc") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.073294 kubelet[1987]: I1213 02:17:18.073247 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.073515 kubelet[1987]: I1213 02:17:18.073489 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.075245 kubelet[1987]: I1213 02:17:18.075197 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cni-path" (OuterVolumeSpecName: "cni-path") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.076583 kubelet[1987]: I1213 02:17:18.076552 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.076729 kubelet[1987]: I1213 02:17:18.076609 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.076729 kubelet[1987]: I1213 02:17:18.076640 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.076729 kubelet[1987]: I1213 02:17:18.076669 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:17:18.078017 kubelet[1987]: I1213 02:17:18.077976 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:17:18.085493 systemd[1]: var-lib-kubelet-pods-09c92c1e\x2dbbba\x2d42c5\x2dbf1b\x2dfa93a0f85bef-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:17:18.085662 systemd[1]: var-lib-kubelet-pods-09c92c1e\x2dbbba\x2d42c5\x2dbf1b\x2dfa93a0f85bef-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:17:18.085765 systemd[1]: var-lib-kubelet-pods-09c92c1e\x2dbbba\x2d42c5\x2dbf1b\x2dfa93a0f85bef-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:17:18.091899 systemd[1]: var-lib-kubelet-pods-09c92c1e\x2dbbba\x2d42c5\x2dbf1b\x2dfa93a0f85bef-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvck7j.mount: Deactivated successfully. 
Dec 13 02:17:18.094476 kubelet[1987]: I1213 02:17:18.094377 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-kube-api-access-vck7j" (OuterVolumeSpecName: "kube-api-access-vck7j") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "kube-api-access-vck7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:17:18.095320 kubelet[1987]: I1213 02:17:18.095126 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:17:18.095320 kubelet[1987]: I1213 02:17:18.095197 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:17:18.095320 kubelet[1987]: I1213 02:17:18.095259 1987 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" (UID: "09c92c1e-bbba-42c5-bf1b-fa93a0f85bef"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:17:18.171889 kubelet[1987]: I1213 02:17:18.171837 1987 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-run\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.171889 kubelet[1987]: I1213 02:17:18.171882 1987 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-cgroup\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.171889 kubelet[1987]: I1213 02:17:18.171899 1987 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-hubble-tls\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.171917 1987 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-config-path\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.171935 1987 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-lib-modules\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.171948 1987 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-xtables-lock\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 
02:17:18.171962 1987 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-bpf-maps\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.171978 1987 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-host-proc-sys-net\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.171994 1987 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-vck7j\" (UniqueName: \"kubernetes.io/projected/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-kube-api-access-vck7j\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.172011 1987 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-clustermesh-secrets\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.172026 1987 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-hostproc\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.172044 1987 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-host-proc-sys-kernel\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.172059 1987 reconciler_common.go:288] "Volume detached 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-etc-cni-netd\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.172075 1987 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cilium-ipsec-secrets\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.172234 kubelet[1987]: I1213 02:17:18.172090 1987 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef-cni-path\") on node \"ci-3510-3-6-aa4ae51aab2e20ef9227.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:17:18.523684 systemd[1]: Removed slice kubepods-burstable-pod09c92c1e_bbba_42c5_bf1b_fa93a0f85bef.slice. Dec 13 02:17:18.928018 kubelet[1987]: I1213 02:17:18.927886 1987 scope.go:117] "RemoveContainer" containerID="5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7" Dec 13 02:17:18.930585 env[1210]: time="2024-12-13T02:17:18.930092155Z" level=info msg="RemoveContainer for \"5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7\"" Dec 13 02:17:18.940043 env[1210]: time="2024-12-13T02:17:18.939980899Z" level=info msg="RemoveContainer for \"5c5796c50a2894d94a95c23bb33cc772f639a66698c60e3f4f7609d1cebda7c7\" returns successfully" Dec 13 02:17:18.988027 kubelet[1987]: E1213 02:17:18.987977 1987 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" containerName="mount-cgroup" Dec 13 02:17:18.988295 kubelet[1987]: E1213 02:17:18.988275 1987 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" containerName="mount-cgroup" Dec 13 02:17:18.988483 kubelet[1987]: I1213 02:17:18.988464 1987 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" containerName="mount-cgroup" Dec 13 02:17:18.988649 kubelet[1987]: I1213 02:17:18.988632 1987 memory_manager.go:354] "RemoveStaleState removing state" podUID="09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" containerName="mount-cgroup" Dec 13 02:17:18.997258 systemd[1]: Created slice kubepods-burstable-podd0d4d2c9_fba5_4bf6_bc4a_5ae1bc9b4370.slice. Dec 13 02:17:19.072193 kubelet[1987]: W1213 02:17:19.072110 1987 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09c92c1e_bbba_42c5_bf1b_fa93a0f85bef.slice/cri-containerd-b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992.scope WatchSource:0}: container "b141a03af0dd6b333eac3d327f1bbf51eeffe041c065097d9e60ce82d2d22992" in namespace "k8s.io": not found Dec 13 02:17:19.077195 kubelet[1987]: I1213 02:17:19.077147 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-etc-cni-netd\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077195 kubelet[1987]: I1213 02:17:19.077197 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-clustermesh-secrets\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077233 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-cilium-config-path\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " 
pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077258 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-bpf-maps\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077284 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvct6\" (UniqueName: \"kubernetes.io/projected/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-kube-api-access-xvct6\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077312 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-xtables-lock\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077337 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-hostproc\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077361 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-cni-path\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077383 1987 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-hubble-tls\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077437 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-host-proc-sys-net\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077463 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-cilium-run\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077490 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-cilium-cgroup\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077518 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-lib-modules\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.077542 kubelet[1987]: I1213 02:17:19.077546 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-cilium-ipsec-secrets\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.078008 kubelet[1987]: I1213 02:17:19.077573 1987 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370-host-proc-sys-kernel\") pod \"cilium-mngkc\" (UID: \"d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370\") " pod="kube-system/cilium-mngkc" Dec 13 02:17:19.301992 env[1210]: time="2024-12-13T02:17:19.301927614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mngkc,Uid:d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370,Namespace:kube-system,Attempt:0,}" Dec 13 02:17:19.324624 env[1210]: time="2024-12-13T02:17:19.324504415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:19.324624 env[1210]: time="2024-12-13T02:17:19.324564112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:19.324947 env[1210]: time="2024-12-13T02:17:19.324603997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:19.325471 env[1210]: time="2024-12-13T02:17:19.325155408Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a pid=3927 runtime=io.containerd.runc.v2 Dec 13 02:17:19.343083 systemd[1]: Started cri-containerd-f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a.scope. 
Dec 13 02:17:19.389011 env[1210]: time="2024-12-13T02:17:19.388948936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mngkc,Uid:d0d4d2c9-fba5-4bf6-bc4a-5ae1bc9b4370,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\"" Dec 13 02:17:19.394934 env[1210]: time="2024-12-13T02:17:19.394639931Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:17:19.410862 env[1210]: time="2024-12-13T02:17:19.410819231Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"66e88c39921fde02bc2357c309c95064ef59b8b82efbd0f5d46c8fe531fbe84e\"" Dec 13 02:17:19.412007 env[1210]: time="2024-12-13T02:17:19.411970489Z" level=info msg="StartContainer for \"66e88c39921fde02bc2357c309c95064ef59b8b82efbd0f5d46c8fe531fbe84e\"" Dec 13 02:17:19.448374 systemd[1]: Started cri-containerd-66e88c39921fde02bc2357c309c95064ef59b8b82efbd0f5d46c8fe531fbe84e.scope. Dec 13 02:17:19.560897 env[1210]: time="2024-12-13T02:17:19.559718295Z" level=info msg="StartContainer for \"66e88c39921fde02bc2357c309c95064ef59b8b82efbd0f5d46c8fe531fbe84e\" returns successfully" Dec 13 02:17:19.573669 systemd[1]: cri-containerd-66e88c39921fde02bc2357c309c95064ef59b8b82efbd0f5d46c8fe531fbe84e.scope: Deactivated successfully. 
Dec 13 02:17:19.612691 env[1210]: time="2024-12-13T02:17:19.612625086Z" level=info msg="shim disconnected" id=66e88c39921fde02bc2357c309c95064ef59b8b82efbd0f5d46c8fe531fbe84e Dec 13 02:17:19.612691 env[1210]: time="2024-12-13T02:17:19.612689362Z" level=warning msg="cleaning up after shim disconnected" id=66e88c39921fde02bc2357c309c95064ef59b8b82efbd0f5d46c8fe531fbe84e namespace=k8s.io Dec 13 02:17:19.613198 env[1210]: time="2024-12-13T02:17:19.612702546Z" level=info msg="cleaning up dead shim" Dec 13 02:17:19.626198 env[1210]: time="2024-12-13T02:17:19.626151551Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4008 runtime=io.containerd.runc.v2\n" Dec 13 02:17:19.694522 kubelet[1987]: E1213 02:17:19.694443 1987 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:17:19.940042 env[1210]: time="2024-12-13T02:17:19.939145861Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:17:19.961734 env[1210]: time="2024-12-13T02:17:19.961660029Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"439e325ae4ca18afd945d69ab1ba8a1d575d3080dd97cef440797eda653401ea\"" Dec 13 02:17:19.962596 env[1210]: time="2024-12-13T02:17:19.962555719Z" level=info msg="StartContainer for \"439e325ae4ca18afd945d69ab1ba8a1d575d3080dd97cef440797eda653401ea\"" Dec 13 02:17:19.988805 systemd[1]: Started cri-containerd-439e325ae4ca18afd945d69ab1ba8a1d575d3080dd97cef440797eda653401ea.scope. 
Dec 13 02:17:20.044062 env[1210]: time="2024-12-13T02:17:20.041645962Z" level=info msg="StartContainer for \"439e325ae4ca18afd945d69ab1ba8a1d575d3080dd97cef440797eda653401ea\" returns successfully"
Dec 13 02:17:20.050290 systemd[1]: cri-containerd-439e325ae4ca18afd945d69ab1ba8a1d575d3080dd97cef440797eda653401ea.scope: Deactivated successfully.
Dec 13 02:17:20.082735 env[1210]: time="2024-12-13T02:17:20.082669609Z" level=info msg="shim disconnected" id=439e325ae4ca18afd945d69ab1ba8a1d575d3080dd97cef440797eda653401ea
Dec 13 02:17:20.082735 env[1210]: time="2024-12-13T02:17:20.082734095Z" level=warning msg="cleaning up after shim disconnected" id=439e325ae4ca18afd945d69ab1ba8a1d575d3080dd97cef440797eda653401ea namespace=k8s.io
Dec 13 02:17:20.083122 env[1210]: time="2024-12-13T02:17:20.082747380Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:20.095126 env[1210]: time="2024-12-13T02:17:20.095054904Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4070 runtime=io.containerd.runc.v2\n"
Dec 13 02:17:20.516199 kubelet[1987]: E1213 02:17:20.514312 1987 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-fgxbk" podUID="d24b8d4b-7838-452a-932a-45aecf377ed6"
Dec 13 02:17:20.517938 kubelet[1987]: I1213 02:17:20.517875 1987 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09c92c1e-bbba-42c5-bf1b-fa93a0f85bef" path="/var/lib/kubelet/pods/09c92c1e-bbba-42c5-bf1b-fa93a0f85bef/volumes"
Dec 13 02:17:20.756566 update_engine[1201]: I1213 02:17:20.756479 1201 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:17:20.757347 update_engine[1201]: I1213 02:17:20.756858 1201 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:17:20.757347 update_engine[1201]: I1213 02:17:20.757278 1201 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:17:20.769231 update_engine[1201]: E1213 02:17:20.769086 1201 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:17:20.769432 update_engine[1201]: I1213 02:17:20.769243 1201 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Dec 13 02:17:20.946592 env[1210]: time="2024-12-13T02:17:20.946533520Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:17:20.978980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476053540.mount: Deactivated successfully.
Dec 13 02:17:20.989164 env[1210]: time="2024-12-13T02:17:20.989087390Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c7b3e72f2043553844150d5812ef4f1e224f40cf9f713424736dd8b4699d249\""
Dec 13 02:17:21.000770 env[1210]: time="2024-12-13T02:17:21.000709544Z" level=info msg="StartContainer for \"4c7b3e72f2043553844150d5812ef4f1e224f40cf9f713424736dd8b4699d249\""
Dec 13 02:17:21.043192 systemd[1]: Started cri-containerd-4c7b3e72f2043553844150d5812ef4f1e224f40cf9f713424736dd8b4699d249.scope.
Dec 13 02:17:21.091605 env[1210]: time="2024-12-13T02:17:21.091528816Z" level=info msg="StartContainer for \"4c7b3e72f2043553844150d5812ef4f1e224f40cf9f713424736dd8b4699d249\" returns successfully"
Dec 13 02:17:21.094829 systemd[1]: cri-containerd-4c7b3e72f2043553844150d5812ef4f1e224f40cf9f713424736dd8b4699d249.scope: Deactivated successfully.
Dec 13 02:17:21.127544 env[1210]: time="2024-12-13T02:17:21.127486006Z" level=info msg="shim disconnected" id=4c7b3e72f2043553844150d5812ef4f1e224f40cf9f713424736dd8b4699d249
Dec 13 02:17:21.127969 env[1210]: time="2024-12-13T02:17:21.127937364Z" level=warning msg="cleaning up after shim disconnected" id=4c7b3e72f2043553844150d5812ef4f1e224f40cf9f713424736dd8b4699d249 namespace=k8s.io
Dec 13 02:17:21.128114 env[1210]: time="2024-12-13T02:17:21.128084203Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:21.139335 env[1210]: time="2024-12-13T02:17:21.139281086Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4130 runtime=io.containerd.runc.v2\n"
Dec 13 02:17:21.188960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c7b3e72f2043553844150d5812ef4f1e224f40cf9f713424736dd8b4699d249-rootfs.mount: Deactivated successfully.
Dec 13 02:17:21.950472 env[1210]: time="2024-12-13T02:17:21.950420329Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:17:21.971841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536197196.mount: Deactivated successfully.
Dec 13 02:17:21.986173 env[1210]: time="2024-12-13T02:17:21.985174319Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b7bbcdaeeb5bd494619526f8c475c227ef517d465c71b007353b66501e24fd87\""
Dec 13 02:17:21.987190 env[1210]: time="2024-12-13T02:17:21.987142255Z" level=info msg="StartContainer for \"b7bbcdaeeb5bd494619526f8c475c227ef517d465c71b007353b66501e24fd87\""
Dec 13 02:17:21.988849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1305031443.mount: Deactivated successfully.
Dec 13 02:17:22.021191 systemd[1]: Started cri-containerd-b7bbcdaeeb5bd494619526f8c475c227ef517d465c71b007353b66501e24fd87.scope.
Dec 13 02:17:22.064478 systemd[1]: cri-containerd-b7bbcdaeeb5bd494619526f8c475c227ef517d465c71b007353b66501e24fd87.scope: Deactivated successfully.
Dec 13 02:17:22.066657 env[1210]: time="2024-12-13T02:17:22.065744343Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0d4d2c9_fba5_4bf6_bc4a_5ae1bc9b4370.slice/cri-containerd-b7bbcdaeeb5bd494619526f8c475c227ef517d465c71b007353b66501e24fd87.scope/memory.events\": no such file or directory"
Dec 13 02:17:22.069572 env[1210]: time="2024-12-13T02:17:22.069521979Z" level=info msg="StartContainer for \"b7bbcdaeeb5bd494619526f8c475c227ef517d465c71b007353b66501e24fd87\" returns successfully"
Dec 13 02:17:22.098739 env[1210]: time="2024-12-13T02:17:22.098631505Z" level=info msg="shim disconnected" id=b7bbcdaeeb5bd494619526f8c475c227ef517d465c71b007353b66501e24fd87
Dec 13 02:17:22.098739 env[1210]: time="2024-12-13T02:17:22.098695790Z" level=warning msg="cleaning up after shim disconnected" id=b7bbcdaeeb5bd494619526f8c475c227ef517d465c71b007353b66501e24fd87 namespace=k8s.io
Dec 13 02:17:22.098739 env[1210]: time="2024-12-13T02:17:22.098711268Z" level=info msg="cleaning up dead shim"
Dec 13 02:17:22.110490 env[1210]: time="2024-12-13T02:17:22.110387606Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:17:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4185 runtime=io.containerd.runc.v2\n"
Dec 13 02:17:22.514303 kubelet[1987]: E1213 02:17:22.514244 1987 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-fgxbk" podUID="d24b8d4b-7838-452a-932a-45aecf377ed6"
Dec 13 02:17:22.956055 env[1210]: time="2024-12-13T02:17:22.955999281Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:17:22.983651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938018277.mount: Deactivated successfully.
Dec 13 02:17:22.993723 env[1210]: time="2024-12-13T02:17:22.993637819Z" level=info msg="CreateContainer within sandbox \"f4a5a53b34eb199df5223ac2d90b137da91d468d526a9a9cce0a52ba9c53315a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5c5615df8967877b886ae46da9b4324c9c208b70a37da0e0848f4823517b6a0f\""
Dec 13 02:17:22.994759 env[1210]: time="2024-12-13T02:17:22.994711834Z" level=info msg="StartContainer for \"5c5615df8967877b886ae46da9b4324c9c208b70a37da0e0848f4823517b6a0f\""
Dec 13 02:17:23.038469 systemd[1]: Started cri-containerd-5c5615df8967877b886ae46da9b4324c9c208b70a37da0e0848f4823517b6a0f.scope.
Dec 13 02:17:23.093703 env[1210]: time="2024-12-13T02:17:23.093639309Z" level=info msg="StartContainer for \"5c5615df8967877b886ae46da9b4324c9c208b70a37da0e0848f4823517b6a0f\" returns successfully"
Dec 13 02:17:23.575434 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:17:24.514867 kubelet[1987]: E1213 02:17:24.514318 1987 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-fgxbk" podUID="d24b8d4b-7838-452a-932a-45aecf377ed6"
Dec 13 02:17:25.162656 systemd[1]: run-containerd-runc-k8s.io-5c5615df8967877b886ae46da9b4324c9c208b70a37da0e0848f4823517b6a0f-runc.XYOiP5.mount: Deactivated successfully.
Dec 13 02:17:26.921668 systemd-networkd[1017]: lxc_health: Link UP
Dec 13 02:17:26.944629 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:17:26.944993 systemd-networkd[1017]: lxc_health: Gained carrier
Dec 13 02:17:27.335819 kubelet[1987]: I1213 02:17:27.335739 1987 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mngkc" podStartSLOduration=9.33570853 podStartE2EDuration="9.33570853s" podCreationTimestamp="2024-12-13 02:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:17:23.980651722 +0000 UTC m=+129.635727980" watchObservedRunningTime="2024-12-13 02:17:27.33570853 +0000 UTC m=+132.990784753"
Dec 13 02:17:28.679655 systemd-networkd[1017]: lxc_health: Gained IPv6LL
Dec 13 02:17:30.764214 update_engine[1201]: I1213 02:17:30.763470 1201 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Dec 13 02:17:30.764214 update_engine[1201]: I1213 02:17:30.763849 1201 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Dec 13 02:17:30.764214 update_engine[1201]: I1213 02:17:30.764153 1201 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Dec 13 02:17:30.780616 update_engine[1201]: E1213 02:17:30.780375 1201 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Dec 13 02:17:30.780616 update_engine[1201]: I1213 02:17:30.780567 1201 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Dec 13 02:17:32.066772 systemd[1]: run-containerd-runc-k8s.io-5c5615df8967877b886ae46da9b4324c9c208b70a37da0e0848f4823517b6a0f-runc.4HyEaZ.mount: Deactivated successfully.
Dec 13 02:17:32.231623 sshd[3831]: pam_unix(sshd:session): session closed for user core
Dec 13 02:17:32.236793 systemd[1]: sshd@26-10.128.0.98:22-139.178.68.195:54146.service: Deactivated successfully.
Dec 13 02:17:32.237885 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:17:32.238845 systemd-logind[1214]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:17:32.240166 systemd-logind[1214]: Removed session 26.