Dec 13 14:25:28.095986 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:25:28.096026 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:25:28.096045 kernel: BIOS-provided physical RAM map:
Dec 13 14:25:28.096058 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Dec 13 14:25:28.096071 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Dec 13 14:25:28.096225 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Dec 13 14:25:28.096248 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Dec 13 14:25:28.096262 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Dec 13 14:25:28.096276 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable
Dec 13 14:25:28.096290 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data
Dec 13 14:25:28.096447 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable
Dec 13 14:25:28.096464 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Dec 13 14:25:28.096478 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Dec 13 14:25:28.096492 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Dec 13 14:25:28.096513 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Dec 13 14:25:28.096682 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Dec 13 14:25:28.096699 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Dec 13 14:25:28.096714 kernel: NX (Execute Disable) protection: active
Dec 13 14:25:28.096728 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:25:28.096820 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018
Dec 13 14:25:28.096837 kernel: random: crng init done
Dec 13 14:25:28.096853 kernel: SMBIOS 2.4 present.
Dec 13 14:25:28.096872 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Dec 13 14:25:28.096887 kernel: Hypervisor detected: KVM
Dec 13 14:25:28.096902 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:25:28.096917 kernel: kvm-clock: cpu 0, msr 6719a001, primary cpu clock
Dec 13 14:25:28.096931 kernel: kvm-clock: using sched offset of 12987178920 cycles
Dec 13 14:25:28.096947 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:25:28.096962 kernel: tsc: Detected 2299.998 MHz processor
Dec 13 14:25:28.096977 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:25:28.096993 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:25:28.097007 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Dec 13 14:25:28.097026 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:25:28.097041 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Dec 13 14:25:28.097055 kernel: Using GB pages for direct mapping
Dec 13 14:25:28.097070 kernel: Secure boot disabled
Dec 13 14:25:28.097085 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:25:28.097099 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Dec 13 14:25:28.097114 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Dec 13 14:25:28.097130 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Dec 13 14:25:28.097154 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Dec 13 14:25:28.097170 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Dec 13 14:25:28.097186 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322)
Dec 13 14:25:28.097201 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Dec 13 14:25:28.097218 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Dec 13 14:25:28.097233 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Dec 13 14:25:28.097253 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Dec 13 14:25:28.097269 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Dec 13 14:25:28.097285 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Dec 13 14:25:28.097301 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Dec 13 14:25:28.097324 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Dec 13 14:25:28.097340 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Dec 13 14:25:28.097355 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Dec 13 14:25:28.097371 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Dec 13 14:25:28.097387 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Dec 13 14:25:28.097406 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Dec 13 14:25:28.097422 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Dec 13 14:25:28.097437 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:25:28.097453 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:25:28.097469 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 14:25:28.097485 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Dec 13 14:25:28.097501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Dec 13 14:25:28.097517 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Dec 13 14:25:28.097533 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Dec 13 14:25:28.097552 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Dec 13 14:25:28.097579 kernel: Zone ranges:
Dec 13 14:25:28.097595 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:25:28.097611 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 14:25:28.097627 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 14:25:28.097643 kernel: Movable zone start for each node
Dec 13 14:25:28.097658 kernel: Early memory node ranges
Dec 13 14:25:28.097674 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Dec 13 14:25:28.097690 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Dec 13 14:25:28.097709 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff]
Dec 13 14:25:28.097725 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff]
Dec 13 14:25:28.097741 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Dec 13 14:25:28.097757 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Dec 13 14:25:28.097773 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Dec 13 14:25:28.097789 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:25:28.097805 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Dec 13 14:25:28.097821 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Dec 13 14:25:28.097836 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges
Dec 13 14:25:28.097856 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 14:25:28.097872 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Dec 13 14:25:28.097887 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:25:28.097903 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:25:28.097919 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:25:28.097935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:25:28.097950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:25:28.097966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:25:28.097982 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:25:28.098002 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:25:28.098018 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:25:28.098034 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 14:25:28.098050 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:25:28.098066 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:25:28.098082 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:25:28.098097 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:25:28.098113 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:25:28.098128 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:25:28.098147 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:25:28.098163 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:25:28.098179 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932270
Dec 13 14:25:28.098194 kernel: Policy zone: Normal
Dec 13 14:25:28.098212 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:25:28.098228 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:25:28.098243 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 14:25:28.098259 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:25:28.098275 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:25:28.098295 kernel: Memory: 7515400K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 344884K reserved, 0K cma-reserved)
Dec 13 14:25:28.098317 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:25:28.098333 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:25:28.098349 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:25:28.098365 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:25:28.098380 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:25:28.098397 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:25:28.098413 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:25:28.098434 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:25:28.098462 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:25:28.098479 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:25:28.098498 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:25:28.098515 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:25:28.098531 kernel: Console: colour dummy device 80x25
Dec 13 14:25:28.098548 kernel: printk: console [ttyS0] enabled
Dec 13 14:25:28.098576 kernel: ACPI: Core revision 20210730
Dec 13 14:25:28.098594 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:25:28.098611 kernel: x2apic enabled
Dec 13 14:25:28.098631 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:25:28.098648 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Dec 13 14:25:28.098665 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 14:25:28.098682 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Dec 13 14:25:28.098699 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Dec 13 14:25:28.098716 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Dec 13 14:25:28.098733 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:25:28.098753 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 14:25:28.098769 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 14:25:28.098786 kernel: Spectre V2 : Mitigation: IBRS
Dec 13 14:25:28.098803 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:25:28.098820 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:25:28.098836 kernel: RETBleed: Mitigation: IBRS
Dec 13 14:25:28.098853 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:25:28.098870 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Dec 13 14:25:28.098887 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:25:28.098907 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 14:25:28.098924 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:25:28.098941 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:25:28.098957 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:25:28.098974 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:25:28.098989 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:25:28.099005 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:25:28.099021 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:25:28.099035 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:25:28.099055 kernel: LSM: Security Framework initializing
Dec 13 14:25:28.099072 kernel: SELinux: Initializing.
Dec 13 14:25:28.099089 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:25:28.099107 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:25:28.099124 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Dec 13 14:25:28.099142 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Dec 13 14:25:28.099159 kernel: signal: max sigframe size: 1776
Dec 13 14:25:28.099177 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:25:28.099194 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:25:28.099213 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:25:28.099231 kernel: x86: Booting SMP configuration:
Dec 13 14:25:28.099248 kernel: .... node #0, CPUs: #1
Dec 13 14:25:28.099265 kernel: kvm-clock: cpu 1, msr 6719a041, secondary cpu clock
Dec 13 14:25:28.099283 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:25:28.099302 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:25:28.099327 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:25:28.099344 kernel: smpboot: Max logical packages: 1
Dec 13 14:25:28.099365 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Dec 13 14:25:28.099382 kernel: devtmpfs: initialized
Dec 13 14:25:28.099400 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:25:28.099418 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Dec 13 14:25:28.099436 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:25:28.099453 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:25:28.099471 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:25:28.099489 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:25:28.099506 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:25:28.099526 kernel: audit: type=2000 audit(1734099927.140:1): state=initialized audit_enabled=0 res=1
Dec 13 14:25:28.099543 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:25:28.101968 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:25:28.101997 kernel: cpuidle: using governor menu
Dec 13 14:25:28.102014 kernel: ACPI: bus type PCI registered
Dec 13 14:25:28.102030 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:25:28.102047 kernel: dca service started, version 1.12.1
Dec 13 14:25:28.102199 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:25:28.102219 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:25:28.102243 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:25:28.102259 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:25:28.102275 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:25:28.102291 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:25:28.102447 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:25:28.102466 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:25:28.102482 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:25:28.102498 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:25:28.102515 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:25:28.102702 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:25:28.102721 kernel: ACPI: Interpreter enabled
Dec 13 14:25:28.102738 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 14:25:28.102859 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:25:28.102880 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:25:28.102898 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:25:28.102916 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:25:28.103133 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:25:28.103313 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:25:28.103336 kernel: PCI host bridge to bus 0000:00
Dec 13 14:25:28.103494 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:25:28.103659 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:25:28.103839 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:25:28.104022 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Dec 13 14:25:28.104191 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:25:28.104379 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:25:28.104548 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Dec 13 14:25:28.104732 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Dec 13 14:25:28.104893 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:25:28.105055 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Dec 13 14:25:28.105210 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Dec 13 14:25:28.105378 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Dec 13 14:25:28.105650 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:25:28.105838 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Dec 13 14:25:28.106018 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Dec 13 14:25:28.106205 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:25:28.106384 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Dec 13 14:25:28.106581 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Dec 13 14:25:28.106611 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:25:28.106630 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:25:28.106648 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:25:28.106665 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:25:28.106683 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:25:28.106701 kernel: iommu: Default domain type: Translated
Dec 13 14:25:28.106719 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:25:28.106736 kernel: vgaarb: loaded
Dec 13 14:25:28.106754 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:25:28.106776 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:25:28.106794 kernel: PTP clock support registered
Dec 13 14:25:28.106812 kernel: Registered efivars operations
Dec 13 14:25:28.106830 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:25:28.106847 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:25:28.106864 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Dec 13 14:25:28.106882 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Dec 13 14:25:28.106900 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff]
Dec 13 14:25:28.106917 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Dec 13 14:25:28.106938 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Dec 13 14:25:28.106955 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:25:28.106973 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:25:28.106990 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:25:28.107007 kernel: pnp: PnP ACPI init
Dec 13 14:25:28.107025 kernel: pnp: PnP ACPI: found 7 devices
Dec 13 14:25:28.107042 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:25:28.107060 kernel: NET: Registered PF_INET protocol family
Dec 13 14:25:28.107078 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:25:28.107100 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 14:25:28.107118 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:25:28.107135 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:25:28.107154 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 14:25:28.107171 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 14:25:28.107189 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:25:28.107207 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 14:25:28.107225 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:25:28.107246 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:25:28.107415 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:25:28.107609 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:25:28.107771 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:25:28.107928 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Dec 13 14:25:28.108104 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:25:28.108128 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:25:28.108151 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 14:25:28.108169 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Dec 13 14:25:28.108187 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:25:28.108205 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Dec 13 14:25:28.108223 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:25:28.108241 kernel: Initialise system trusted keyrings
Dec 13 14:25:28.108258 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 14:25:28.108275 kernel: Key type asymmetric registered
Dec 13 14:25:28.108293 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:25:28.108314 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:25:28.108332 kernel: io scheduler mq-deadline registered
Dec 13 14:25:28.108349 kernel: io scheduler kyber registered
Dec 13 14:25:28.108367 kernel: io scheduler bfq registered
Dec 13 14:25:28.108385 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:25:28.108403 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 14:25:28.108608 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Dec 13 14:25:28.108633 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Dec 13 14:25:28.108809 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Dec 13 14:25:28.108837 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 14:25:28.109011 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Dec 13 14:25:28.109034 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:25:28.109052 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:25:28.109070 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 14:25:28.109088 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Dec 13 14:25:28.109105 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Dec 13 14:25:28.109280 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Dec 13 14:25:28.109309 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:25:28.109327 kernel: i8042: Warning: Keylock active
Dec 13 14:25:28.109344 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:25:28.109362 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:25:28.109541 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:25:28.109727 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:25:28.109890 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:25:27 UTC (1734099927)
Dec 13 14:25:28.110050 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:25:28.110076 kernel: intel_pstate: CPU model not supported
Dec 13 14:25:28.110095 kernel: pstore: Registered efi as persistent store backend
Dec 13 14:25:28.110112 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:25:28.110130 kernel: Segment Routing with IPv6
Dec 13 14:25:28.110148 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:25:28.110165 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:25:28.110183 kernel: Key type dns_resolver registered
Dec 13 14:25:28.110200 kernel: IPI shorthand broadcast: enabled
Dec 13 14:25:28.110217 kernel: sched_clock: Marking stable (764410396, 169965945)->(978473497, -44097156)
Dec 13 14:25:28.110239 kernel: registered taskstats version 1
Dec 13 14:25:28.110257 kernel: Loading compiled-in X.509 certificates
Dec 13 14:25:28.110275 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:25:28.110293 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:25:28.110310 kernel: Key type .fscrypt registered
Dec 13 14:25:28.110327 kernel: Key type fscrypt-provisioning registered
Dec 13 14:25:28.110345 kernel: pstore: Using crash dump compression: deflate
Dec 13 14:25:28.110362 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:25:28.110379 kernel: ima: No architecture policies found
Dec 13 14:25:28.110400 kernel: clk: Disabling unused clocks
Dec 13 14:25:28.110418 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:25:28.110436 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:25:28.110453 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:25:28.110471 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:25:28.110489 kernel: Run /init as init process
Dec 13 14:25:28.110507 kernel: with arguments:
Dec 13 14:25:28.110524 kernel: /init
Dec 13 14:25:28.110541 kernel: with environment:
Dec 13 14:25:28.111620 kernel: HOME=/
Dec 13 14:25:28.111642 kernel: TERM=linux
Dec 13 14:25:28.111657 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:25:28.111678 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:25:28.111698 systemd[1]: Detected virtualization kvm.
Dec 13 14:25:28.111714 systemd[1]: Detected architecture x86-64.
Dec 13 14:25:28.115716 systemd[1]: Running in initrd.
Dec 13 14:25:28.115745 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:25:28.115763 systemd[1]: Hostname set to .
Dec 13 14:25:28.115783 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:25:28.115800 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:25:28.115817 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:25:28.115834 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:25:28.115852 systemd[1]: Reached target paths.target.
Dec 13 14:25:28.115869 systemd[1]: Reached target slices.target.
Dec 13 14:25:28.115889 systemd[1]: Reached target swap.target.
Dec 13 14:25:28.115906 systemd[1]: Reached target timers.target.
Dec 13 14:25:28.115925 systemd[1]: Listening on iscsid.socket.
Dec 13 14:25:28.115943 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:25:28.115961 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:25:28.115978 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:25:28.115996 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:25:28.116014 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:25:28.116035 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:25:28.116053 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:25:28.116089 systemd[1]: Reached target sockets.target.
Dec 13 14:25:28.116111 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:25:28.116131 systemd[1]: Finished network-cleanup.service.
Dec 13 14:25:28.116150 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:25:28.116173 systemd[1]: Starting systemd-journald.service...
Dec 13 14:25:28.116192 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:25:28.116211 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:25:28.116229 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:25:28.116248 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:25:28.116266 kernel: audit: type=1130 audit(1734099928.110:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.116285 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:25:28.116308 systemd-journald[189]: Journal started
Dec 13 14:25:28.116396 systemd-journald[189]: Runtime Journal (/run/log/journal/2b8321be2b3c2d0546070de205056099) is 8.0M, max 148.8M, 140.8M free.
Dec 13 14:25:28.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.110747 systemd-modules-load[190]: Inserted module 'overlay'
Dec 13 14:25:28.142703 kernel: audit: type=1130 audit(1734099928.118:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.142739 systemd[1]: Started systemd-journald.service.
Dec 13 14:25:28.142766 kernel: audit: type=1130 audit(1734099928.123:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.142796 kernel: audit: type=1130 audit(1734099928.123:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.125253 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:25:28.127274 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:25:28.155720 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:25:28.165767 systemd-resolved[191]: Positive Trust Anchors:
Dec 13 14:25:28.166132 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:25:28.166287 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:25:28.169078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:25:28.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.173585 kernel: audit: type=1130 audit(1734099928.167:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.176758 systemd-resolved[191]: Defaulting to hostname 'linux'.
Dec 13 14:25:28.179080 systemd[1]: Started systemd-resolved.service.
Dec 13 14:25:28.179226 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:25:28.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.196220 kernel: audit: type=1130 audit(1734099928.177:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.196585 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:25:28.205884 kernel: audit: type=1130 audit(1734099928.196:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:28.201352 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:25:28.209669 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:25:28.218842 systemd-modules-load[190]: Inserted module 'br_netfilter' Dec 13 14:25:28.222672 kernel: Bridge firewalling registered Dec 13 14:25:28.222703 dracut-cmdline[206]: dracut-dracut-053 Dec 13 14:25:28.226665 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:25:28.254584 kernel: SCSI subsystem initialized Dec 13 14:25:28.272931 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:25:28.272986 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:25:28.274465 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:25:28.279706 systemd-modules-load[190]: Inserted module 'dm_multipath' Dec 13 14:25:28.294671 kernel: audit: type=1130 audit(1734099928.284:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:28.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:28.280788 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:25:28.286881 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:25:28.299722 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:25:28.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:25:28.316598 kernel: audit: type=1130 audit(1734099928.311:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:28.325596 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:25:28.345583 kernel: iscsi: registered transport (tcp) Dec 13 14:25:28.371808 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:25:28.371877 kernel: QLogic iSCSI HBA Driver Dec 13 14:25:28.416545 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:25:28.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:28.418732 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:25:28.474614 kernel: raid6: avx2x4 gen() 18256 MB/s Dec 13 14:25:28.491608 kernel: raid6: avx2x4 xor() 7250 MB/s Dec 13 14:25:28.508592 kernel: raid6: avx2x2 gen() 18243 MB/s Dec 13 14:25:28.525607 kernel: raid6: avx2x2 xor() 18586 MB/s Dec 13 14:25:28.542612 kernel: raid6: avx2x1 gen() 13858 MB/s Dec 13 14:25:28.559613 kernel: raid6: avx2x1 xor() 15876 MB/s Dec 13 14:25:28.576605 kernel: raid6: sse2x4 gen() 10970 MB/s Dec 13 14:25:28.593604 kernel: raid6: sse2x4 xor() 6626 MB/s Dec 13 14:25:28.611604 kernel: raid6: sse2x2 gen() 12057 MB/s Dec 13 14:25:28.628615 kernel: raid6: sse2x2 xor() 7253 MB/s Dec 13 14:25:28.645602 kernel: raid6: sse2x1 gen() 10412 MB/s Dec 13 14:25:28.663555 kernel: raid6: sse2x1 xor() 5161 MB/s Dec 13 14:25:28.663627 kernel: raid6: using algorithm avx2x4 gen() 18256 MB/s Dec 13 14:25:28.663652 kernel: raid6: .... 
xor() 7250 MB/s, rmw enabled Dec 13 14:25:28.665015 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:25:28.679600 kernel: xor: automatically using best checksumming function avx Dec 13 14:25:28.786595 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:25:28.798038 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:25:28.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:28.797000 audit: BPF prog-id=7 op=LOAD Dec 13 14:25:28.797000 audit: BPF prog-id=8 op=LOAD Dec 13 14:25:28.799699 systemd[1]: Starting systemd-udevd.service... Dec 13 14:25:28.817394 systemd-udevd[388]: Using default interface naming scheme 'v252'. Dec 13 14:25:28.824868 systemd[1]: Started systemd-udevd.service. Dec 13 14:25:28.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:28.827180 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:25:28.852072 dracut-pre-trigger[395]: rd.md=0: removing MD RAID activation Dec 13 14:25:28.893787 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:25:28.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:28.895350 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:25:28.962976 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:25:28.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:29.044590 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:25:29.054589 kernel: scsi host0: Virtio SCSI HBA Dec 13 14:25:29.097586 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 14:25:29.143544 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:25:29.143624 kernel: AES CTR mode by8 optimization enabled Dec 13 14:25:29.188046 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 14:25:29.246867 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 14:25:29.247103 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 14:25:29.247315 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 14:25:29.247526 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 14:25:29.247753 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:25:29.247779 kernel: GPT:17805311 != 25165823 Dec 13 14:25:29.247801 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:25:29.247823 kernel: GPT:17805311 != 25165823 Dec 13 14:25:29.247845 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:25:29.247866 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:25:29.247894 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 14:25:29.306588 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (450) Dec 13 14:25:29.320372 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:25:29.330700 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:25:29.354344 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:25:29.367258 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:25:29.382074 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:25:29.397250 systemd[1]: Starting disk-uuid.service... 
Dec 13 14:25:29.415958 disk-uuid[514]: Primary Header is updated. Dec 13 14:25:29.415958 disk-uuid[514]: Secondary Entries is updated. Dec 13 14:25:29.415958 disk-uuid[514]: Secondary Header is updated. Dec 13 14:25:29.450663 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:25:29.463595 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:25:29.476605 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:25:30.474585 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 14:25:30.474841 disk-uuid[515]: The operation has completed successfully. Dec 13 14:25:30.545334 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:25:30.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:30.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:30.545467 systemd[1]: Finished disk-uuid.service. Dec 13 14:25:30.558606 systemd[1]: Starting verity-setup.service... Dec 13 14:25:30.586589 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 14:25:30.662953 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:25:30.665370 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:25:30.677107 systemd[1]: Finished verity-setup.service. Dec 13 14:25:30.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:30.764600 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:25:30.765137 systemd[1]: Mounted sysusr-usr.mount. 
Dec 13 14:25:30.773908 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:25:30.810590 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:25:30.810643 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:25:30.810668 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:25:30.774847 systemd[1]: Starting ignition-setup.service... Dec 13 14:25:30.830830 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:25:30.823847 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:25:30.841108 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:25:30.854595 systemd[1]: Finished ignition-setup.service. Dec 13 14:25:30.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:30.873714 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:25:30.942262 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:25:30.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:30.950000 audit: BPF prog-id=9 op=LOAD Dec 13 14:25:30.952889 systemd[1]: Starting systemd-networkd.service... Dec 13 14:25:30.989105 systemd-networkd[689]: lo: Link UP Dec 13 14:25:30.989118 systemd-networkd[689]: lo: Gained carrier Dec 13 14:25:30.990511 systemd-networkd[689]: Enumeration completed Dec 13 14:25:30.990652 systemd[1]: Started systemd-networkd.service. Dec 13 14:25:30.994470 systemd-networkd[689]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 14:25:30.997472 systemd-networkd[689]: eth0: Link UP Dec 13 14:25:30.997480 systemd-networkd[689]: eth0: Gained carrier Dec 13 14:25:31.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.006812 systemd-networkd[689]: eth0: DHCPv4 address 10.128.0.103/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 14:25:31.037160 systemd[1]: Reached target network.target. Dec 13 14:25:31.051112 systemd[1]: Starting iscsiuio.service... Dec 13 14:25:31.090795 systemd[1]: Started iscsiuio.service. Dec 13 14:25:31.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.092444 systemd[1]: Starting iscsid.service... Dec 13 14:25:31.117700 iscsid[700]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:25:31.117700 iscsid[700]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 14:25:31.117700 iscsid[700]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:25:31.117700 iscsid[700]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:25:31.117700 iscsid[700]: If using hardware iscsi like qla4xxx this message can be ignored. 
Dec 13 14:25:31.117700 iscsid[700]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:25:31.117700 iscsid[700]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:25:31.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.110892 systemd[1]: Started iscsid.service. Dec 13 14:25:31.147312 ignition[622]: Ignition 2.14.0 Dec 13 14:25:31.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.178246 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:25:31.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.147325 ignition[622]: Stage: fetch-offline Dec 13 14:25:31.193082 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:25:31.147399 ignition[622]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:25:31.209827 systemd[1]: Starting ignition-fetch.service... Dec 13 14:25:31.147439 ignition[622]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:25:31.244266 systemd[1]: Finished dracut-initqueue.service. 
Dec 13 14:25:31.166625 ignition[622]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:25:31.247693 unknown[709]: fetched base config from "system" Dec 13 14:25:31.166819 ignition[622]: parsed url from cmdline: "" Dec 13 14:25:31.247707 unknown[709]: fetched base config from "system" Dec 13 14:25:31.166827 ignition[622]: no config URL provided Dec 13 14:25:31.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.247717 unknown[709]: fetched user config from "gcp" Dec 13 14:25:31.166835 ignition[622]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:25:31.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.259148 systemd[1]: Finished ignition-fetch.service. Dec 13 14:25:31.166845 ignition[622]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:25:31.272979 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:25:31.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.166854 ignition[622]: failed to fetch config: resource requires networking Dec 13 14:25:31.288682 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:25:31.167004 ignition[622]: Ignition finished successfully Dec 13 14:25:31.301686 systemd[1]: Reached target remote-fs.target. Dec 13 14:25:31.221618 ignition[709]: Ignition 2.14.0 Dec 13 14:25:31.316760 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:25:31.221629 ignition[709]: Stage: fetch Dec 13 14:25:31.340777 systemd[1]: Starting ignition-kargs.service... 
Dec 13 14:25:31.221784 ignition[709]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:25:31.371313 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:25:31.221826 ignition[709]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:25:31.386124 systemd[1]: Finished ignition-kargs.service. Dec 13 14:25:31.230946 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:25:31.407288 systemd[1]: Starting ignition-disks.service... Dec 13 14:25:31.231273 ignition[709]: parsed url from cmdline: "" Dec 13 14:25:31.429027 systemd[1]: Finished ignition-disks.service. Dec 13 14:25:31.231281 ignition[709]: no config URL provided Dec 13 14:25:31.444878 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:25:31.231288 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:25:31.460706 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:25:31.231304 ignition[709]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:25:31.475689 systemd[1]: Reached target local-fs.target. Dec 13 14:25:31.231352 ignition[709]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 14:25:31.489686 systemd[1]: Reached target sysinit.target. Dec 13 14:25:31.241179 ignition[709]: GET result: OK Dec 13 14:25:31.502678 systemd[1]: Reached target basic.target. Dec 13 14:25:31.241271 ignition[709]: parsing config with SHA512: 34748f0966c26f98c5221853e712d499c961d132a60b09c6e606f9efd803102ab5fbdd10abebab8c3f00b1bb3bfe56f15e34744a9cd32b29bbb6039819ad2c95 Dec 13 14:25:31.515878 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 14:25:31.248500 ignition[709]: fetch: fetch complete Dec 13 14:25:31.248506 ignition[709]: fetch: fetch passed Dec 13 14:25:31.248551 ignition[709]: Ignition finished successfully Dec 13 14:25:31.354542 ignition[720]: Ignition 2.14.0 Dec 13 14:25:31.354550 ignition[720]: Stage: kargs Dec 13 14:25:31.354702 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:25:31.354735 ignition[720]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:25:31.360936 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:25:31.362317 ignition[720]: kargs: kargs passed Dec 13 14:25:31.362365 ignition[720]: Ignition finished successfully Dec 13 14:25:31.419414 ignition[726]: Ignition 2.14.0 Dec 13 14:25:31.419425 ignition[726]: Stage: disks Dec 13 14:25:31.419553 ignition[726]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:25:31.419612 ignition[726]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:25:31.426664 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:25:31.428044 ignition[726]: disks: disks passed Dec 13 14:25:31.428093 ignition[726]: Ignition finished successfully Dec 13 14:25:31.562599 systemd-fsck[734]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 14:25:31.740499 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:25:31.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:31.749808 systemd[1]: Mounting sysroot.mount... Dec 13 14:25:31.780762 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). 
Quota mode: none. Dec 13 14:25:31.778186 systemd[1]: Mounted sysroot.mount. Dec 13 14:25:31.787987 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:25:31.807167 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:25:31.818316 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:25:31.818377 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:25:31.818416 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:25:31.902857 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (740) Dec 13 14:25:31.902901 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:25:31.902917 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:25:31.902931 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:25:31.826155 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:25:31.922713 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:25:31.856388 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:25:31.916822 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:25:31.946732 initrd-setup-root[763]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:25:31.942532 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:25:31.973690 initrd-setup-root[771]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:25:31.983668 initrd-setup-root[779]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:25:31.994686 initrd-setup-root[787]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:25:32.016617 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:25:32.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:32.017981 systemd[1]: Starting ignition-mount.service... Dec 13 14:25:32.045764 systemd[1]: Starting sysroot-boot.service... Dec 13 14:25:32.054949 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 14:25:32.055051 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 14:25:32.080859 ignition[805]: INFO : Ignition 2.14.0 Dec 13 14:25:32.080859 ignition[805]: INFO : Stage: mount Dec 13 14:25:32.080859 ignition[805]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:25:32.080859 ignition[805]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:25:32.181749 kernel: kauditd_printk_skb: 24 callbacks suppressed Dec 13 14:25:32.181797 kernel: audit: type=1130 audit(1734099932.093:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:32.181827 kernel: audit: type=1130 audit(1734099932.135:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:32.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:32.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:32.181979 ignition[805]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:25:32.181979 ignition[805]: INFO : mount: mount passed Dec 13 14:25:32.181979 ignition[805]: INFO : Ignition finished successfully Dec 13 14:25:32.252820 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (815) Dec 13 14:25:32.252856 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:25:32.252871 kernel: BTRFS info (device sda6): using free space tree Dec 13 14:25:32.252892 kernel: BTRFS info (device sda6): has skinny extents Dec 13 14:25:32.252906 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 14:25:32.088313 systemd[1]: Finished sysroot-boot.service. Dec 13 14:25:32.095208 systemd[1]: Finished ignition-mount.service. Dec 13 14:25:32.138363 systemd[1]: Starting ignition-files.service... Dec 13 14:25:32.281777 ignition[834]: INFO : Ignition 2.14.0 Dec 13 14:25:32.281777 ignition[834]: INFO : Stage: files Dec 13 14:25:32.281777 ignition[834]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:25:32.281777 ignition[834]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:25:32.337705 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (839) Dec 13 14:25:32.192784 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 14:25:32.346730 ignition[834]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:25:32.346730 ignition[834]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:25:32.346730 ignition[834]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:25:32.346730 ignition[834]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:25:32.346730 ignition[834]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:25:32.346730 ignition[834]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:25:32.346730 ignition[834]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1016966170" Dec 13 14:25:32.346730 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1016966170": device or resource busy Dec 13 14:25:32.346730 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1016966170", trying btrfs: device or resource busy Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1016966170" Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at 
"/mnt/oem1016966170" Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem1016966170" Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1016966170" Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Dec 13 14:25:32.346730 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:25:32.252431 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:25:32.616705 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:25:32.616705 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Dec 13 14:25:32.269812 systemd-networkd[689]: eth0: Gained IPv6LL Dec 13 14:25:32.299322 unknown[834]: wrote ssh authorized keys file for user: core Dec 13 14:25:32.859530 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:25:32.876840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:25:32.876840 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 14:25:33.157949 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Dec 13 14:25:33.313482 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 14:25:33.328704 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 14:25:33.328704 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:25:33.328704 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3988684376" Dec 13 14:25:33.328704 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3988684376": device or resource busy Dec 13 14:25:33.328704 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3988684376", trying btrfs: device or resource busy Dec 13 14:25:33.328704 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3988684376" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3988684376" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem3988684376" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem3988684376" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: 
createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:25:33.434714 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:25:33.333290 systemd[1]: mnt-oem3988684376.mount: Deactivated successfully. 
Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem481009104" Dec 13 14:25:33.679696 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem481009104": device or resource busy Dec 13 14:25:33.679696 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem481009104", trying btrfs: device or resource busy Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem481009104" Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem481009104" Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem481009104" Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem481009104" Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 14:25:33.679696 
ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:25:33.679696 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Dec 13 14:25:33.353449 systemd[1]: mnt-oem481009104.mount: Deactivated successfully. Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3799139771" Dec 13 14:25:33.935752 ignition[834]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3799139771": device or resource busy Dec 13 14:25:33.935752 ignition[834]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3799139771", trying btrfs: device or resource busy Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3799139771" Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" 
at "/mnt/oem3799139771" Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem3799139771" Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem3799139771" Dec 13 14:25:33.935752 ignition[834]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 14:25:33.935752 ignition[834]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:25:33.935752 ignition[834]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:25:33.935752 ignition[834]: INFO : files: op(1d): [started] processing unit "oem-gce.service" Dec 13 14:25:33.935752 ignition[834]: INFO : files: op(1d): [finished] processing unit "oem-gce.service" Dec 13 14:25:33.935752 ignition[834]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:25:33.935752 ignition[834]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service" Dec 13 14:25:34.417720 kernel: audit: type=1130 audit(1734099933.969:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.417772 kernel: audit: type=1130 audit(1734099934.059:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.417799 kernel: audit: type=1130 audit(1734099934.108:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:34.417822 kernel: audit: type=1131 audit(1734099934.108:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.417844 kernel: audit: type=1130 audit(1734099934.237:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.417874 kernel: audit: type=1131 audit(1734099934.237:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.417893 kernel: audit: type=1130 audit(1734099934.361:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:33.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:34.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(1f): [started] processing unit "prepare-helm.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(21): [started] setting preset to enabled for "oem-gce.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(21): [finished] setting preset to enabled for "oem-gce.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:25:34.418144 
ignition[834]: INFO : files: op(24): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:25:34.418144 ignition[834]: INFO : files: op(24): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:25:34.418144 ignition[834]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:25:34.418144 ignition[834]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:25:34.418144 ignition[834]: INFO : files: files passed Dec 13 14:25:34.418144 ignition[834]: INFO : Ignition finished successfully Dec 13 14:25:34.737721 kernel: audit: type=1131 audit(1734099934.511:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:33.945814 systemd[1]: mnt-oem3799139771.mount: Deactivated successfully. Dec 13 14:25:33.960878 systemd[1]: Finished ignition-files.service. Dec 13 14:25:34.767834 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:25:34.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:33.980734 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:25:34.013113 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Dec 13 14:25:34.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.014442 systemd[1]: Starting ignition-quench.service... Dec 13 14:25:34.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.037211 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:25:34.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.061392 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:25:34.061554 systemd[1]: Finished ignition-quench.service. Dec 13 14:25:34.110137 systemd[1]: Reached target ignition-complete.target. Dec 13 14:25:34.897706 ignition[873]: INFO : Ignition 2.14.0 Dec 13 14:25:34.897706 ignition[873]: INFO : Stage: umount Dec 13 14:25:34.897706 ignition[873]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 14:25:34.897706 ignition[873]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 14:25:34.897706 ignition[873]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 14:25:34.897706 ignition[873]: INFO : umount: umount passed Dec 13 14:25:34.897706 ignition[873]: INFO : Ignition finished successfully Dec 13 14:25:34.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:34.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.172969 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:25:35.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:35.025924 iscsid[700]: iscsid shutting down. Dec 13 14:25:35.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.216202 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:25:35.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.216320 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:25:35.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:34.239060 systemd[1]: Reached target initrd-fs.target. Dec 13 14:25:35.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.303772 systemd[1]: Reached target initrd.target. Dec 13 14:25:34.321842 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:25:34.323038 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:25:34.346154 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:25:34.364322 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:25:35.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.408864 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:25:35.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.425995 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:25:34.461045 systemd[1]: Stopped target timers.target. Dec 13 14:25:35.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.478110 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:25:35.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:35.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.478335 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:25:34.513300 systemd[1]: Stopped target initrd.target. Dec 13 14:25:34.565022 systemd[1]: Stopped target basic.target. Dec 13 14:25:34.579058 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:25:34.604902 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:25:34.624874 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:25:34.646928 systemd[1]: Stopped target remote-fs.target. Dec 13 14:25:35.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.668906 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:25:35.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:35.318000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:25:34.691977 systemd[1]: Stopped target sysinit.target. Dec 13 14:25:34.714963 systemd[1]: Stopped target local-fs.target. Dec 13 14:25:34.729939 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:25:35.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.745979 systemd[1]: Stopped target swap.target. Dec 13 14:25:35.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:34.760875 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:25:35.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.761067 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:25:34.776057 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:25:35.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.797906 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:25:34.798090 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:25:35.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.816037 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:25:35.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.816230 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:25:35.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.843059 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:25:34.843228 systemd[1]: Stopped ignition-files.service. Dec 13 14:25:34.859587 systemd[1]: Stopping ignition-mount.service... 
Dec 13 14:25:35.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.890007 systemd[1]: Stopping iscsid.service... Dec 13 14:25:35.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.904766 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:25:35.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:35.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:34.904989 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:25:34.913366 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:25:34.924888 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:25:34.925162 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:25:34.942109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:25:35.636701 systemd-journald[189]: Received SIGTERM from PID 1 (systemd). Dec 13 14:25:34.942277 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:25:34.971064 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:25:34.972098 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:25:34.972212 systemd[1]: Stopped iscsid.service. Dec 13 14:25:34.984490 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:25:34.984614 systemd[1]: Stopped ignition-mount.service. 
Dec 13 14:25:34.997348 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:25:34.997455 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:25:35.018683 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:25:35.018823 systemd[1]: Stopped ignition-disks.service. Dec 13 14:25:35.033742 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:25:35.033816 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:25:35.048768 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 14:25:35.048842 systemd[1]: Stopped ignition-fetch.service. Dec 13 14:25:35.064752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:25:35.064823 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:25:35.079743 systemd[1]: Stopped target paths.target. Dec 13 14:25:35.079827 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:25:35.084634 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:25:35.099705 systemd[1]: Stopped target slices.target. Dec 13 14:25:35.106851 systemd[1]: Stopped target sockets.target. Dec 13 14:25:35.123928 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:25:35.123985 systemd[1]: Closed iscsid.socket. Dec 13 14:25:35.137868 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:25:35.137926 systemd[1]: Stopped ignition-setup.service. Dec 13 14:25:35.149932 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:25:35.149984 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:25:35.171972 systemd[1]: Stopping iscsiuio.service... Dec 13 14:25:35.186126 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:25:35.186231 systemd[1]: Stopped iscsiuio.service. Dec 13 14:25:35.194238 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:25:35.194336 systemd[1]: Finished initrd-cleanup.service. 
Dec 13 14:25:35.215823 systemd[1]: Stopped target network.target. Dec 13 14:25:35.230738 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:25:35.230810 systemd[1]: Closed iscsiuio.socket. Dec 13 14:25:35.243954 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:25:35.247649 systemd-networkd[689]: eth0: DHCPv6 lease lost Dec 13 14:25:35.263991 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:25:35.288354 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:25:35.288517 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:25:35.305360 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:25:35.305501 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:25:35.320357 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:25:35.320419 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:25:35.336673 systemd[1]: Stopping network-cleanup.service... Dec 13 14:25:35.349677 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:25:35.349770 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:25:35.363791 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:25:35.363866 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:25:35.378932 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:25:35.378993 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:25:35.393957 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:25:35.411254 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:25:35.411919 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:25:35.412070 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:25:35.426096 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:25:35.426180 systemd[1]: Closed systemd-udevd-control.socket. 
Dec 13 14:25:35.439730 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:25:35.439787 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:25:35.455685 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:25:35.455755 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:25:35.455893 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:25:35.455942 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:25:35.477780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:25:35.477862 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:25:35.496833 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:25:35.520686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:25:35.520785 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:25:35.535257 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:25:35.535402 systemd[1]: Stopped network-cleanup.service. Dec 13 14:25:35.549980 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:25:35.550089 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:25:35.564902 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:25:35.583702 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:25:35.598245 systemd[1]: Switching root. Dec 13 14:25:35.639959 systemd-journald[189]: Journal stopped Dec 13 14:25:40.286233 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:25:40.286341 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:25:40.286368 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:25:40.286397 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:25:40.286425 kernel: SELinux: policy capability open_perms=1
Dec 13 14:25:40.286453 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:25:40.286476 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:25:40.286504 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:25:40.286527 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:25:40.286549 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:25:40.290643 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:25:40.290683 systemd[1]: Successfully loaded SELinux policy in 106.663ms.
Dec 13 14:25:40.290732 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.918ms.
Dec 13 14:25:40.290764 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:25:40.290789 systemd[1]: Detected virtualization kvm.
Dec 13 14:25:40.290819 systemd[1]: Detected architecture x86-64.
Dec 13 14:25:40.290842 systemd[1]: Detected first boot.
Dec 13 14:25:40.290871 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:25:40.290896 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:25:40.290920 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:25:40.290949 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:25:40.290987 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:25:40.291014 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:25:40.291042 kernel: kauditd_printk_skb: 50 callbacks suppressed
Dec 13 14:25:40.291065 kernel: audit: type=1334 audit(1734099939.375:88): prog-id=12 op=LOAD
Dec 13 14:25:40.291088 kernel: audit: type=1334 audit(1734099939.375:89): prog-id=3 op=UNLOAD
Dec 13 14:25:40.291111 kernel: audit: type=1334 audit(1734099939.389:90): prog-id=13 op=LOAD
Dec 13 14:25:40.291132 kernel: audit: type=1334 audit(1734099939.405:91): prog-id=14 op=LOAD
Dec 13 14:25:40.291159 kernel: audit: type=1334 audit(1734099939.405:92): prog-id=4 op=UNLOAD
Dec 13 14:25:40.291181 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:25:40.291204 kernel: audit: type=1334 audit(1734099939.405:93): prog-id=5 op=UNLOAD
Dec 13 14:25:40.291226 kernel: audit: type=1131 audit(1734099939.413:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.291250 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:25:40.291274 kernel: audit: type=1334 audit(1734099939.461:95): prog-id=12 op=UNLOAD
Dec 13 14:25:40.291296 kernel: audit: type=1130 audit(1734099939.477:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.291319 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:25:40.291347 kernel: audit: type=1131 audit(1734099939.477:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.291370 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:25:40.291393 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:25:40.291417 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:25:40.291442 systemd[1]: Created slice system-getty.slice.
Dec 13 14:25:40.291465 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:25:40.291489 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:25:40.291516 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:25:40.291540 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:25:40.291587 systemd[1]: Created slice user.slice.
Dec 13 14:25:40.291611 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:25:40.291634 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:25:40.291658 systemd[1]: Set up automount boot.automount.
Dec 13 14:25:40.291681 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:25:40.291706 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:25:40.291729 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:25:40.291756 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:25:40.291780 systemd[1]: Reached target integritysetup.target.
Dec 13 14:25:40.291803 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:25:40.291826 systemd[1]: Reached target remote-fs.target.
Dec 13 14:25:40.291849 systemd[1]: Reached target slices.target.
Dec 13 14:25:40.291872 systemd[1]: Reached target swap.target.
Dec 13 14:25:40.291896 systemd[1]: Reached target torcx.target.
Dec 13 14:25:40.291920 systemd[1]: Reached target veritysetup.target.
Dec 13 14:25:40.291943 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:25:40.291979 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:25:40.292002 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:25:40.292029 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:25:40.292052 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:25:40.292075 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:25:40.292099 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:25:40.292122 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:25:40.292147 systemd[1]: Mounting media.mount...
Dec 13 14:25:40.292171 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:25:40.292193 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:25:40.292221 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:25:40.292245 systemd[1]: Mounting tmp.mount...
Dec 13 14:25:40.292268 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:25:40.292291 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:25:40.292314 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:25:40.292337 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:25:40.292361 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:25:40.292384 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:25:40.292406 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:25:40.292433 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:25:40.292457 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:25:40.292479 kernel: fuse: init (API version 7.34)
Dec 13 14:25:40.292502 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:25:40.292526 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:25:40.292550 kernel: loop: module loaded
Dec 13 14:25:40.292583 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:25:40.292607 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:25:40.292634 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:25:40.292657 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:25:40.292681 systemd[1]: Starting systemd-journald.service...
Dec 13 14:25:40.292704 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:25:40.292729 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:25:40.292758 systemd-journald[997]: Journal started
Dec 13 14:25:40.292845 systemd-journald[997]: Runtime Journal (/run/log/journal/2b8321be2b3c2d0546070de205056099) is 8.0M, max 148.8M, 140.8M free.
Dec 13 14:25:35.638000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:25:35.925000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:25:36.077000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:25:36.077000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:25:36.077000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:25:36.077000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:25:36.077000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:25:36.077000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:25:36.250000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:25:36.250000 audit[906]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001858e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:36.250000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:25:36.261000 audit[906]: AVC avc: denied { associate } for pid=906 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:25:36.261000 audit[906]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001859b9 a2=1ed a3=0 items=2 ppid=889 pid=906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:36.261000 audit: CWD cwd="/"
Dec 13 14:25:36.261000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:36.261000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:36.261000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:25:39.375000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:25:39.375000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:25:39.389000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:25:39.405000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:25:39.405000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:25:39.405000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:25:39.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:39.461000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:25:39.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:39.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.253000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:25:40.253000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:25:40.253000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:25:40.253000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:25:40.253000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:25:40.282000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:25:40.282000 audit[997]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffcbe6f6620 a2=4000 a3=7ffcbe6f66bc items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:40.282000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:25:39.374962 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:25:36.246112 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:25:39.414531 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:25:36.247176 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:25:36.247203 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:25:36.247243 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:25:36.247256 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:25:36.247305 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:25:36.247321 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:25:36.247556 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:25:36.247647 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:25:36.247665 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:25:36.249255 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:25:36.249331 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:25:36.249363 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:25:36.249382 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:25:36.249406 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:25:36.249427 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:36Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:25:38.756653 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:38Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:25:38.756956 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:38Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:25:38.757099 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:38Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:25:38.757325 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:38Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:25:38.757382 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:38Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:25:38.757451 /usr/lib/systemd/system-generators/torcx-generator[906]: time="2024-12-13T14:25:38Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:25:40.311619 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:25:40.325597 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:25:40.343763 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:25:40.343839 systemd[1]: Stopped verity-setup.service.
Dec 13 14:25:40.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.363733 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:25:40.372602 systemd[1]: Started systemd-journald.service.
Dec 13 14:25:40.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.381925 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:25:40.388869 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:25:40.395836 systemd[1]: Mounted media.mount.
Dec 13 14:25:40.402837 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:25:40.411822 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:25:40.419866 systemd[1]: Mounted tmp.mount.
Dec 13 14:25:40.426973 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:25:40.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.436031 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:25:40.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.446026 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:25:40.446227 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:25:40.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.455083 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:25:40.455290 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:25:40.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.465031 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:25:40.465233 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:25:40.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.475108 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:25:40.475324 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:25:40.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.484034 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:25:40.484232 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:25:40.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.493066 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:25:40.493278 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:25:40.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.502056 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:25:40.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.511071 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:25:40.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.519996 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:25:40.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.529022 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:25:40.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.538369 systemd[1]: Reached target network-pre.target.
Dec 13 14:25:40.548048 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:25:40.557998 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:25:40.564675 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:25:40.567860 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:25:40.576262 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:25:40.587972 systemd-journald[997]: Time spent on flushing to /var/log/journal/2b8321be2b3c2d0546070de205056099 is 68.767ms for 1153 entries.
Dec 13 14:25:40.587972 systemd-journald[997]: System Journal (/var/log/journal/2b8321be2b3c2d0546070de205056099) is 8.0M, max 584.8M, 576.8M free.
Dec 13 14:25:40.704541 systemd-journald[997]: Received client request to flush runtime journal.
Dec 13 14:25:40.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:40.584718 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:25:40.586259 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:25:40.601726 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:25:40.603275 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:25:40.707489 udevadm[1011]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:25:40.612276 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:25:40.621220 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:25:40.632872 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:25:40.641802 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:25:40.650033 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:25:40.659069 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:25:40.671394 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:25:40.680246 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:25:40.705810 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:25:40.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:41.257867 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:25:41.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:41.265000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:25:41.266000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:25:41.266000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:25:41.266000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:25:41.268508 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:25:41.290444 systemd-udevd[1014]: Using default interface naming scheme 'v252'.
Dec 13 14:25:41.338473 systemd[1]: Started systemd-udevd.service.
Dec 13 14:25:41.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:41.347000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:25:41.350342 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:25:41.362000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:25:41.362000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:25:41.362000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:25:41.364881 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:25:41.410062 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:25:41.449220 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:25:41.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:41.514594 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:25:41.575109 systemd-networkd[1028]: lo: Link UP
Dec 13 14:25:41.575123 systemd-networkd[1028]: lo: Gained carrier
Dec 13 14:25:41.575880 systemd-networkd[1028]: Enumeration completed
Dec 13 14:25:41.576203 systemd[1]: Started systemd-networkd.service.
Dec 13 14:25:41.576723 systemd-networkd[1028]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:25:41.578893 systemd-networkd[1028]: eth0: Link UP
Dec 13 14:25:41.578910 systemd-networkd[1028]: eth0: Gained carrier
Dec 13 14:25:41.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:41.589594 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:25:41.590741 systemd-networkd[1028]: eth0: DHCPv4 address 10.128.0.103/32, gateway 10.128.0.1 acquired from 169.254.169.254
Dec 13 14:25:41.644156 kernel: EDAC MC: Ver: 3.0.0
Dec 13 14:25:41.644282 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1026)
Dec 13 14:25:41.659900 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Dec 13 14:25:41.666585 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:25:41.601000 audit[1020]: AVC avc: denied { confidentiality } for pid=1020 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:25:41.601000 audit[1020]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559e23af2ce0 a1=337fc a2=7f251bea5bc5 a3=5 items=110 ppid=1014 pid=1020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:41.601000 audit: CWD cwd="/"
Dec 13 14:25:41.601000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:41.601000 audit: PATH item=1 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:41.601000 audit: PATH item=2 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:41.601000 audit: PATH item=3 name=(null) inode=14189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:41.601000 audit: PATH item=4 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:41.601000 audit: PATH item=5 name=(null) inode=14190 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:41.601000 audit: PATH item=6 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:41.601000 audit: PATH item=7 name=(null) inode=14191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:25:41.601000 audit: PATH item=8 name=(null) inode=14191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=9 name=(null) inode=14192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=10 name=(null) inode=14191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=11 name=(null) inode=14193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=12 name=(null) inode=14191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=13 name=(null) inode=14194 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=14 name=(null) inode=14191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=15 name=(null) inode=14195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=16 name=(null) inode=14191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=17 name=(null) inode=14196 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:25:41.601000 audit: PATH item=18 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=19 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=20 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=21 name=(null) inode=14198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=22 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=23 name=(null) inode=14199 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=24 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=25 name=(null) inode=14200 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=26 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=27 
name=(null) inode=14201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=28 name=(null) inode=14197 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=29 name=(null) inode=14202 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=30 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=31 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=32 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=33 name=(null) inode=14204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=34 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=35 name=(null) inode=14205 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=36 name=(null) inode=14203 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=37 name=(null) inode=14206 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=38 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=39 name=(null) inode=14207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=40 name=(null) inode=14203 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=41 name=(null) inode=14208 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=42 name=(null) inode=14188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=43 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=44 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=45 name=(null) inode=14210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=46 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=47 name=(null) inode=14211 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=48 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=49 name=(null) inode=14212 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=50 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=51 name=(null) inode=14213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=52 name=(null) inode=14209 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=53 name=(null) inode=14214 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=55 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=56 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=57 name=(null) inode=14216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=58 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=59 name=(null) inode=14217 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=60 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=61 name=(null) inode=14218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=62 name=(null) inode=14218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=63 name=(null) inode=14219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=64 name=(null) inode=14218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=65 name=(null) inode=14220 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=66 name=(null) inode=14218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=67 name=(null) inode=14221 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=68 name=(null) inode=14218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=69 name=(null) inode=14222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=70 name=(null) inode=14218 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=71 name=(null) inode=14223 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=72 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:25:41.601000 audit: PATH item=73 name=(null) inode=14224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=74 name=(null) inode=14224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=75 name=(null) inode=14225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=76 name=(null) inode=14224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=77 name=(null) inode=14226 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=78 name=(null) inode=14224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=79 name=(null) inode=14227 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=80 name=(null) inode=14224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=81 name=(null) inode=14228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=82 
name=(null) inode=14224 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=83 name=(null) inode=14229 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=84 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=85 name=(null) inode=14230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=86 name=(null) inode=14230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=87 name=(null) inode=14231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=88 name=(null) inode=14230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=89 name=(null) inode=14232 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=90 name=(null) inode=14230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=91 name=(null) inode=14233 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=92 name=(null) inode=14230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=93 name=(null) inode=14234 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=94 name=(null) inode=14230 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=95 name=(null) inode=14235 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=96 name=(null) inode=14215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=97 name=(null) inode=14236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=98 name=(null) inode=14236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=99 name=(null) inode=14237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=100 name=(null) inode=14236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=101 name=(null) inode=14238 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=102 name=(null) inode=14236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=103 name=(null) inode=14239 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=104 name=(null) inode=14236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=105 name=(null) inode=14240 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=106 name=(null) inode=14236 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=107 name=(null) inode=14241 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PATH item=109 name=(null) inode=14242 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:41.601000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:25:41.723604 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 14:25:41.734076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:25:41.746608 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 14:25:41.756641 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:25:41.774081 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:25:41.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:41.784243 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:25:41.814539 lvm[1051]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:25:41.846874 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:25:41.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:41.855882 systemd[1]: Reached target cryptsetup.target. Dec 13 14:25:41.866273 systemd[1]: Starting lvm2-activation.service... Dec 13 14:25:41.873079 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:25:41.901413 systemd[1]: Finished lvm2-activation.service. Dec 13 14:25:41.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:41.909893 systemd[1]: Reached target local-fs-pre.target. 
Dec 13 14:25:41.918826 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:25:41.918889 systemd[1]: Reached target local-fs.target. Dec 13 14:25:41.927759 systemd[1]: Reached target machines.target. Dec 13 14:25:41.937256 systemd[1]: Starting ldconfig.service... Dec 13 14:25:41.945878 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:41.946036 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:41.949114 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:25:41.958854 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:25:41.970414 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:25:41.980633 systemd[1]: Starting systemd-sysext.service... Dec 13 14:25:41.982654 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1054 (bootctl) Dec 13 14:25:41.985505 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:25:42.006248 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:25:42.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.009502 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:25:42.019109 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:25:42.019813 systemd[1]: Unmounted usr-share-oem.mount. 
Dec 13 14:25:42.046654 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:25:42.128936 systemd-fsck[1065]: fsck.fat 4.2 (2021-01-31) Dec 13 14:25:42.128936 systemd-fsck[1065]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 14:25:42.134083 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:25:42.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.145703 systemd[1]: Mounting boot.mount... Dec 13 14:25:42.181826 systemd[1]: Mounted boot.mount. Dec 13 14:25:42.206741 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:25:42.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.520270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:25:42.521176 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:25:42.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.548141 ldconfig[1053]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:25:42.552904 systemd[1]: Finished ldconfig.service. Dec 13 14:25:42.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:42.567608 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:25:42.592604 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:25:42.615169 (sd-sysext)[1070]: Using extensions 'kubernetes'. Dec 13 14:25:42.615799 (sd-sysext)[1070]: Merged extensions into '/usr'. Dec 13 14:25:42.637167 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:42.639143 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:25:42.646892 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:42.648693 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:42.657101 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:42.665077 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:42.671764 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:42.671997 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:42.672212 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:42.676489 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:25:42.684091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:42.684294 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:42.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:25:42.693423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:42.693666 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:42.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.702324 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:42.702529 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:42.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.712347 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:42.712585 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:25:42.714041 systemd[1]: Finished systemd-sysext.service. Dec 13 14:25:42.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:42.724156 systemd[1]: Starting ensure-sysext.service... 
Dec 13 14:25:42.731958 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:25:42.744139 systemd[1]: Reloading. Dec 13 14:25:42.761684 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:25:42.771363 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:25:42.784063 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:25:42.844958 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-12-13T14:25:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:25:42.844997 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-12-13T14:25:42Z" level=info msg="torcx already run" Dec 13 14:25:42.996186 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:25:42.996220 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:43.035553 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:25:43.114000 audit: BPF prog-id=24 op=LOAD Dec 13 14:25:43.114000 audit: BPF prog-id=25 op=LOAD Dec 13 14:25:43.114000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:25:43.114000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:25:43.116000 audit: BPF prog-id=26 op=LOAD Dec 13 14:25:43.116000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:25:43.118000 audit: BPF prog-id=27 op=LOAD Dec 13 14:25:43.118000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:25:43.118000 audit: BPF prog-id=28 op=LOAD Dec 13 14:25:43.118000 audit: BPF prog-id=29 op=LOAD Dec 13 14:25:43.118000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:25:43.118000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:25:43.119000 audit: BPF prog-id=30 op=LOAD Dec 13 14:25:43.119000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:25:43.119000 audit: BPF prog-id=31 op=LOAD Dec 13 14:25:43.119000 audit: BPF prog-id=32 op=LOAD Dec 13 14:25:43.119000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:25:43.119000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:25:43.128005 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:25:43.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:43.142492 systemd[1]: Starting audit-rules.service... Dec 13 14:25:43.151891 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:25:43.162713 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:25:43.172453 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:25:43.180000 audit: BPF prog-id=33 op=LOAD Dec 13 14:25:43.183257 systemd[1]: Starting systemd-resolved.service... Dec 13 14:25:43.190000 audit: BPF prog-id=34 op=LOAD Dec 13 14:25:43.193207 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:25:43.203492 systemd[1]: Starting systemd-update-utmp.service... 
Dec 13 14:25:43.210000 audit[1168]: SYSTEM_BOOT pid=1168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:25:43.212513 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:25:43.218000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:25:43.220702 augenrules[1171]: No rules Dec 13 14:25:43.218000 audit[1171]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffde2c469c0 a2=420 a3=0 items=0 ppid=1141 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:43.218000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:25:43.222372 systemd[1]: Finished audit-rules.service. Dec 13 14:25:43.231238 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:25:43.231456 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:25:43.241283 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:25:43.258480 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:25:43.269675 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:43.270180 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:43.272717 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:43.281602 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:43.290924 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:43.299766 systemd[1]: Starting oem-gce-enable-oslogin.service... 
Dec 13 14:25:43.307751 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:43.307983 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:43.308253 enable-oslogin[1179]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:25:43.310355 systemd[1]: Starting systemd-update-done.service... Dec 13 14:25:43.317689 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:25:43.317894 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:43.320289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:43.320531 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:43.329383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:43.329606 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:43.338342 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:43.338545 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:43.347370 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:25:43.347645 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:25:43.356249 systemd[1]: Finished systemd-update-done.service. Dec 13 14:25:43.366511 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:43.366730 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:25:43.371975 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 14:25:43.372496 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:43.376535 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:43.382208 systemd-timesyncd[1163]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 14:25:43.382643 systemd-timesyncd[1163]: Initial clock synchronization to Fri 2024-12-13 14:25:43.692123 UTC. Dec 13 14:25:43.384381 systemd-resolved[1159]: Positive Trust Anchors: Dec 13 14:25:43.384398 systemd-resolved[1159]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:25:43.384456 systemd-resolved[1159]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:25:43.385367 systemd[1]: Starting modprobe@drm.service... Dec 13 14:25:43.394216 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:43.403232 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:43.412317 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 14:25:43.418998 enable-oslogin[1186]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 14:25:43.420797 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:43.421029 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:43.422696 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 14:25:43.423899 systemd-resolved[1159]: Defaulting to hostname 'linux'. Dec 13 14:25:43.430757 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:25:43.430970 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:43.432875 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:25:43.442700 systemd[1]: Started systemd-resolved.service. Dec 13 14:25:43.451204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:43.451422 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:43.460168 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:25:43.460372 systemd[1]: Finished modprobe@drm.service. Dec 13 14:25:43.469160 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:43.469353 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:43.478243 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:43.478444 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:43.487299 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 14:25:43.487581 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 14:25:43.496664 systemd[1]: Reached target network.target. Dec 13 14:25:43.505744 systemd[1]: Reached target nss-lookup.target. Dec 13 14:25:43.513670 systemd[1]: Reached target time-set.target. Dec 13 14:25:43.521730 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:43.521791 systemd[1]: Reached target sysinit.target. Dec 13 14:25:43.529791 systemd[1]: Started motdgen.path. Dec 13 14:25:43.536739 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:25:43.546854 systemd[1]: Started logrotate.timer. 
Dec 13 14:25:43.553778 systemd[1]: Started mdadm.timer. Dec 13 14:25:43.560678 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:25:43.568708 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:25:43.568769 systemd[1]: Reached target paths.target. Dec 13 14:25:43.575690 systemd[1]: Reached target timers.target. Dec 13 14:25:43.583050 systemd[1]: Listening on dbus.socket. Dec 13 14:25:43.590943 systemd[1]: Starting docker.socket... Dec 13 14:25:43.596702 systemd-networkd[1028]: eth0: Gained IPv6LL Dec 13 14:25:43.602144 systemd[1]: Listening on sshd.socket. Dec 13 14:25:43.609839 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:43.609924 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:25:43.610878 systemd[1]: Finished ensure-sysext.service. Dec 13 14:25:43.620091 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:25:43.629868 systemd[1]: Listening on docker.socket. Dec 13 14:25:43.637772 systemd[1]: Reached target network-online.target. Dec 13 14:25:43.645662 systemd[1]: Reached target sockets.target. Dec 13 14:25:43.653655 systemd[1]: Reached target basic.target. Dec 13 14:25:43.660780 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:25:43.660824 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:25:43.662347 systemd[1]: Starting containerd.service... Dec 13 14:25:43.670917 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:25:43.681202 systemd[1]: Starting dbus.service... Dec 13 14:25:43.688228 systemd[1]: Starting enable-oem-cloudinit.service... 
Dec 13 14:25:43.697292 systemd[1]: Starting extend-filesystems.service... Dec 13 14:25:43.704713 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:25:43.706679 systemd[1]: Starting kubelet.service... Dec 13 14:25:43.711401 jq[1193]: false Dec 13 14:25:43.715258 systemd[1]: Starting motdgen.service... Dec 13 14:25:43.724185 systemd[1]: Starting oem-gce.service... Dec 13 14:25:43.732159 systemd[1]: Starting prepare-helm.service... Dec 13 14:25:43.740378 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:25:43.742283 extend-filesystems[1194]: Found loop1 Dec 13 14:25:43.749292 systemd[1]: Starting sshd-keygen.service... Dec 13 14:25:43.809731 extend-filesystems[1194]: Found sda Dec 13 14:25:43.809731 extend-filesystems[1194]: Found sda1 Dec 13 14:25:43.809731 extend-filesystems[1194]: Found sda2 Dec 13 14:25:43.809731 extend-filesystems[1194]: Found sda3 Dec 13 14:25:43.809731 extend-filesystems[1194]: Found usr Dec 13 14:25:43.809731 extend-filesystems[1194]: Found sda4 Dec 13 14:25:43.809731 extend-filesystems[1194]: Found sda6 Dec 13 14:25:43.809731 extend-filesystems[1194]: Found sda7 Dec 13 14:25:43.809731 extend-filesystems[1194]: Found sda9 Dec 13 14:25:43.809731 extend-filesystems[1194]: Checking size of /dev/sda9 Dec 13 14:25:43.760512 systemd[1]: Starting systemd-logind.service... Dec 13 14:25:43.899624 extend-filesystems[1194]: Resized partition /dev/sda9 Dec 13 14:25:43.767706 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Dec 13 14:25:43.916301 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:25:43.931302 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 14:25:43.767811 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 14:25:43.768508 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:25:43.932149 jq[1218]: true Dec 13 14:25:43.769677 systemd[1]: Starting update-engine.service... Dec 13 14:25:43.779046 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:25:43.790780 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:25:43.791075 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:25:43.796102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:25:43.796433 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 14:25:43.934504 mkfs.ext4[1227]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 14:25:43.934504 mkfs.ext4[1227]: Discarding device blocks: done Dec 13 14:25:43.934504 mkfs.ext4[1227]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 14:25:43.934504 mkfs.ext4[1227]: Filesystem UUID: 17f1ad03-d3c3-424a-b6ed-6aba4d5b0d3f Dec 13 14:25:43.934504 mkfs.ext4[1227]: Superblock backups stored on blocks: Dec 13 14:25:43.934504 mkfs.ext4[1227]: 32768, 98304, 163840, 229376 Dec 13 14:25:43.934504 mkfs.ext4[1227]: Allocating group tables: done Dec 13 14:25:43.934504 mkfs.ext4[1227]: Writing inode tables: done Dec 13 14:25:43.934504 mkfs.ext4[1227]: Creating journal (8192 blocks): done Dec 13 14:25:43.934504 mkfs.ext4[1227]: Writing superblocks and filesystem accounting information: done Dec 13 14:25:43.885717 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:25:43.935339 jq[1225]: true Dec 13 14:25:43.885988 systemd[1]: Finished motdgen.service. Dec 13 14:25:43.936594 umount[1232]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 14:25:43.947876 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 14:25:43.949802 tar[1223]: linux-amd64/helm Dec 13 14:25:43.973555 dbus-daemon[1192]: [system] SELinux support is enabled Dec 13 14:25:43.973858 systemd[1]: Started dbus.service. 
Dec 13 14:25:43.978668 dbus-daemon[1192]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1028 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:25:43.984439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:25:43.984485 systemd[1]: Reached target system-config.target. Dec 13 14:25:43.993775 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:25:43.993815 systemd[1]: Reached target user-config.target. Dec 13 14:25:44.009830 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 14:25:44.018530 dbus-daemon[1192]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:25:44.084798 systemd[1]: Starting systemd-hostnamed.service... Dec 13 14:25:44.109250 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:25:44.109337 update_engine[1216]: I1213 14:25:44.010056 1216 main.cc:92] Flatcar Update Engine starting Dec 13 14:25:44.109337 update_engine[1216]: I1213 14:25:44.032764 1216 update_check_scheduler.cc:74] Next update check in 7m25s Dec 13 14:25:44.108924 systemd[1]: Started update-engine.service. Dec 13 14:25:44.111780 extend-filesystems[1238]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 14:25:44.111780 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 14:25:44.111780 extend-filesystems[1238]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. 
Dec 13 14:25:44.193293 extend-filesystems[1194]: Resized filesystem in /dev/sda9 Dec 13 14:25:44.225721 bash[1257]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:25:44.225907 env[1226]: time="2024-12-13T14:25:44.170464044Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:25:44.135641 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:25:44.135917 systemd[1]: Finished extend-filesystems.service. Dec 13 14:25:44.149858 systemd-logind[1212]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:25:44.149900 systemd-logind[1212]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 14:25:44.149934 systemd-logind[1212]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:25:44.160228 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:25:44.160555 systemd-logind[1212]: New seat seat0. Dec 13 14:25:44.179321 systemd[1]: Started systemd-logind.service. Dec 13 14:25:44.218161 systemd[1]: Started locksmithd.service. Dec 13 14:25:44.291921 env[1226]: time="2024-12-13T14:25:44.291777647Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:25:44.292118 env[1226]: time="2024-12-13T14:25:44.292044840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:44.305005 env[1226]: time="2024-12-13T14:25:44.304937198Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:44.305005 env[1226]: time="2024-12-13T14:25:44.305000036Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:25:44.305414 env[1226]: time="2024-12-13T14:25:44.305373730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:44.305504 env[1226]: time="2024-12-13T14:25:44.305412907Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:44.305504 env[1226]: time="2024-12-13T14:25:44.305437340Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:25:44.305504 env[1226]: time="2024-12-13T14:25:44.305454473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:44.305777 env[1226]: time="2024-12-13T14:25:44.305581030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:44.305992 env[1226]: time="2024-12-13T14:25:44.305959011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:44.306311 env[1226]: time="2024-12-13T14:25:44.306256432Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:44.306311 env[1226]: time="2024-12-13T14:25:44.306292030Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 14:25:44.306460 env[1226]: time="2024-12-13T14:25:44.306388511Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:25:44.306460 env[1226]: time="2024-12-13T14:25:44.306410222Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321121133Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321187444Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321213557Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321278621Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321304157Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321390500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321415158Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321439405Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321463778Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321487639Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321510723Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321535866Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:25:44.321741 env[1226]: time="2024-12-13T14:25:44.321698827Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:25:44.322435 env[1226]: time="2024-12-13T14:25:44.321818972Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:25:44.322435 env[1226]: time="2024-12-13T14:25:44.322371674Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:25:44.322435 env[1226]: time="2024-12-13T14:25:44.322418407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:25:44.322619 env[1226]: time="2024-12-13T14:25:44.322443538Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:25:44.322619 env[1226]: time="2024-12-13T14:25:44.322527875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:25:44.322619 env[1226]: time="2024-12-13T14:25:44.322552956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:25:44.322760 env[1226]: time="2024-12-13T14:25:44.322666936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1
Dec 13 14:25:44.322760 env[1226]: time="2024-12-13T14:25:44.322692095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.322760 env[1226]: time="2024-12-13T14:25:44.322715322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.322760 env[1226]: time="2024-12-13T14:25:44.322737046Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.322951 env[1226]: time="2024-12-13T14:25:44.322758359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.322951 env[1226]: time="2024-12-13T14:25:44.322780160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.322951 env[1226]: time="2024-12-13T14:25:44.322805789Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:25:44.323112 env[1226]: time="2024-12-13T14:25:44.322986139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.323112 env[1226]: time="2024-12-13T14:25:44.323016258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.323112 env[1226]: time="2024-12-13T14:25:44.323042272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.323112 env[1226]: time="2024-12-13T14:25:44.323063481Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:25:44.323112 env[1226]: time="2024-12-13T14:25:44.323091695Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:25:44.323335 env[1226]: time="2024-12-13T14:25:44.323112502Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:25:44.323335 env[1226]: time="2024-12-13T14:25:44.323147337Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:25:44.323335 env[1226]: time="2024-12-13T14:25:44.323199428Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:25:44.323683 env[1226]: time="2024-12-13T14:25:44.323553282Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:25:44.327042 env[1226]: time="2024-12-13T14:25:44.323694810Z" level=info msg="Connect containerd service"
Dec 13 14:25:44.327042 env[1226]: time="2024-12-13T14:25:44.323746106Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:25:44.327042 env[1226]: time="2024-12-13T14:25:44.324645293Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:25:44.327042 env[1226]: time="2024-12-13T14:25:44.325721189Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:25:44.327042 env[1226]: time="2024-12-13T14:25:44.325794989Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 14:25:44.327042 env[1226]: time="2024-12-13T14:25:44.326911229Z" level=info msg="containerd successfully booted in 0.284495s"
Dec 13 14:25:44.325987 systemd[1]: Started containerd.service.
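The containerd log above reports `no network config found in /etc/cni/net.d` at this point in boot, which is why the CRI plugin's CNI is not yet initialized. For reference, a minimal bridge conflist that the configured `NetworkPluginConfDir:/etc/cni/net.d` loader would accept could look like the sketch below; the network name, bridge name, and subnet are illustrative assumptions, not values from this host.

```json
{
  "cniVersion": "0.3.1",
  "name": "example-bridge-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.85.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

On a kubeadm/CNI-managed node this file is normally dropped in by the network add-on, so its absence this early in boot is expected rather than an error to fix by hand.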
Dec 13 14:25:44.327768 env[1226]: time="2024-12-13T14:25:44.327563455Z" level=info msg="Start subscribing containerd event"
Dec 13 14:25:44.327768 env[1226]: time="2024-12-13T14:25:44.327685268Z" level=info msg="Start recovering state"
Dec 13 14:25:44.330816 env[1226]: time="2024-12-13T14:25:44.330783210Z" level=info msg="Start event monitor"
Dec 13 14:25:44.330904 env[1226]: time="2024-12-13T14:25:44.330838955Z" level=info msg="Start snapshots syncer"
Dec 13 14:25:44.330904 env[1226]: time="2024-12-13T14:25:44.330859536Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:25:44.330904 env[1226]: time="2024-12-13T14:25:44.330880489Z" level=info msg="Start streaming server"
Dec 13 14:25:44.395134 coreos-metadata[1191]: Dec 13 14:25:44.393 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Dec 13 14:25:44.397982 coreos-metadata[1191]: Dec 13 14:25:44.397 INFO Fetch failed with 404: resource not found
Dec 13 14:25:44.398088 coreos-metadata[1191]: Dec 13 14:25:44.397 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Dec 13 14:25:44.398964 coreos-metadata[1191]: Dec 13 14:25:44.398 INFO Fetch successful
Dec 13 14:25:44.399056 coreos-metadata[1191]: Dec 13 14:25:44.398 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Dec 13 14:25:44.399849 coreos-metadata[1191]: Dec 13 14:25:44.399 INFO Fetch failed with 404: resource not found
Dec 13 14:25:44.399946 coreos-metadata[1191]: Dec 13 14:25:44.399 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Dec 13 14:25:44.401838 coreos-metadata[1191]: Dec 13 14:25:44.400 INFO Fetch failed with 404: resource not found
Dec 13 14:25:44.401838 coreos-metadata[1191]: Dec 13 14:25:44.400 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Dec 13 14:25:44.401838 coreos-metadata[1191]: Dec 13 14:25:44.401 INFO Fetch successful
Dec 13 14:25:44.407751 unknown[1191]: wrote ssh authorized keys file for user: core
Dec 13 14:25:44.432239 update-ssh-keys[1268]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:25:44.432858 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:25:44.641028 dbus-daemon[1192]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:25:44.641243 systemd[1]: Started systemd-hostnamed.service.
Dec 13 14:25:44.642823 dbus-daemon[1192]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1260 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:25:44.654620 systemd[1]: Starting polkit.service...
Dec 13 14:25:44.753668 polkitd[1269]: Started polkitd version 121
Dec 13 14:25:44.796793 polkitd[1269]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:25:44.796915 polkitd[1269]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:25:44.806306 polkitd[1269]: Finished loading, compiling and executing 2 rules
Dec 13 14:25:44.807115 dbus-daemon[1192]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:25:44.807358 systemd[1]: Started polkit.service.
Dec 13 14:25:44.808894 polkitd[1269]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:25:44.871346 systemd-hostnamed[1260]: Hostname set to (transient)
Dec 13 14:25:44.874690 systemd-resolved[1159]: System hostname changed to 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal'.
Dec 13 14:25:46.038742 tar[1223]: linux-amd64/LICENSE
Dec 13 14:25:46.042645 tar[1223]: linux-amd64/README.md
Dec 13 14:25:46.055908 systemd[1]: Finished prepare-helm.service.
Dec 13 14:25:46.258784 systemd[1]: Started kubelet.service.
Dec 13 14:25:46.502673 sshd_keygen[1219]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:25:46.548297 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:25:46.557291 systemd[1]: Starting issuegen.service...
Dec 13 14:25:46.571932 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:25:46.572187 systemd[1]: Finished issuegen.service.
Dec 13 14:25:46.582411 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:25:46.597806 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:25:46.607943 systemd[1]: Started getty@tty1.service.
Dec 13 14:25:46.619891 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:25:46.627507 systemd[1]: Reached target getty.target.
Dec 13 14:25:46.741658 locksmithd[1264]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:25:47.543435 kubelet[1279]: E1213 14:25:47.543348 1279 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:25:47.546721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:25:47.546986 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:25:47.547424 systemd[1]: kubelet.service: Consumed 1.468s CPU time.
Dec 13 14:25:49.602232 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully.
Dec 13 14:25:51.688612 kernel: loop2: detected capacity change from 0 to 2097152
Dec 13 14:25:51.712713 systemd-nspawn[1304]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img.
Dec 13 14:25:51.712713 systemd-nspawn[1304]: Press ^] three times within 1s to kill container.
Dec 13 14:25:51.725611 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
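The kubelet crash logged above (`open /var/lib/kubelet/config.yaml: no such file or directory`) recurs throughout this boot: the unit starts before the node has been joined to a cluster, and on a kubeadm-style setup that file is only written by `kubeadm init`/`kubeadm join`, so the restart loop is expected until then. As a hedged sketch (field values here are illustrative, not taken from this host), the file the kubelet is looking for has roughly this shape:

```yaml
# /var/lib/kubelet/config.yaml -- minimal KubeletConfiguration sketch
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# systemd matches the SystemdCgroup:true runc option in the containerd
# CRI config earlier in this log
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
```

Later in the log the kubelet does start successfully once torcx has run and a config is in place, and it reads its client CA from `/etc/kubernetes/pki/ca.crt` as above.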
Dec 13 14:25:51.808767 systemd[1]: Started oem-gce.service.
Dec 13 14:25:51.809244 systemd[1]: Reached target multi-user.target.
Dec 13 14:25:51.811573 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:25:51.822336 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:25:51.822607 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:25:51.822857 systemd[1]: Startup finished in 1.047s (kernel) + 8.002s (initrd) + 16.018s (userspace) = 25.068s.
Dec 13 14:25:51.897310 systemd-nspawn[1304]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Dec 13 14:25:51.897505 systemd-nspawn[1304]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Dec 13 14:25:51.897614 systemd-nspawn[1304]: + /usr/bin/google_instance_setup
Dec 13 14:25:52.533591 instance-setup[1310]: INFO Running google_set_multiqueue.
Dec 13 14:25:52.548437 instance-setup[1310]: INFO Set channels for eth0 to 2.
Dec 13 14:25:52.552150 instance-setup[1310]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Dec 13 14:25:52.553523 instance-setup[1310]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Dec 13 14:25:52.553914 instance-setup[1310]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Dec 13 14:25:52.555280 instance-setup[1310]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Dec 13 14:25:52.555701 instance-setup[1310]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Dec 13 14:25:52.557046 instance-setup[1310]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Dec 13 14:25:52.557449 instance-setup[1310]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Dec 13 14:25:52.558802 instance-setup[1310]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Dec 13 14:25:52.569952 instance-setup[1310]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Dec 13 14:25:52.570118 instance-setup[1310]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Dec 13 14:25:52.607931 systemd-nspawn[1304]: + /usr/bin/google_metadata_script_runner --script-type startup
Dec 13 14:25:52.610703 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:25:52.612735 systemd[1]: Started sshd@0-10.128.0.103:22-139.178.68.195:40020.service.
Dec 13 14:25:52.944946 sshd[1343]: Accepted publickey for core from 139.178.68.195 port 40020 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:25:52.949062 sshd[1343]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:52.967564 systemd[1]: Created slice user-500.slice.
Dec 13 14:25:52.971143 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:25:52.972788 startup-script[1342]: INFO Starting startup scripts.
Dec 13 14:25:52.988821 systemd-logind[1212]: New session 1 of user core.
Dec 13 14:25:52.996001 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:25:52.991670 startup-script[1342]: INFO No startup scripts found in metadata.
Dec 13 14:25:52.991824 startup-script[1342]: INFO Finished running startup scripts.
Dec 13 14:25:52.999294 systemd[1]: Starting user@500.service...
Dec 13 14:25:53.015963 (systemd)[1348]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + daemon_pids=()
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + for d in accounts clock_skew network
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + daemon_pids+=($!)
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + for d in accounts clock_skew network
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + daemon_pids+=($!)
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + for d in accounts clock_skew network
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + /usr/bin/google_accounts_daemon
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + daemon_pids+=($!)
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + NOTIFY_SOCKET=/run/systemd/notify
Dec 13 14:25:53.043958 systemd-nspawn[1304]: + /usr/bin/systemd-notify --ready
Dec 13 14:25:53.045018 systemd-nspawn[1304]: + /usr/bin/google_network_daemon
Dec 13 14:25:53.045250 systemd-nspawn[1304]: + /usr/bin/google_clock_skew_daemon
Dec 13 14:25:53.095836 systemd-nspawn[1304]: + wait -n 36 37 38
Dec 13 14:25:53.206911 systemd[1348]: Queued start job for default target default.target.
Dec 13 14:25:53.207748 systemd[1348]: Reached target paths.target.
Dec 13 14:25:53.207783 systemd[1348]: Reached target sockets.target.
Dec 13 14:25:53.207806 systemd[1348]: Reached target timers.target.
Dec 13 14:25:53.207826 systemd[1348]: Reached target basic.target.
Dec 13 14:25:53.207906 systemd[1348]: Reached target default.target.
Dec 13 14:25:53.207973 systemd[1348]: Startup finished in 180ms.
Dec 13 14:25:53.207987 systemd[1]: Started user@500.service.
Dec 13 14:25:53.209705 systemd[1]: Started session-1.scope.
Dec 13 14:25:53.438935 systemd[1]: Started sshd@1-10.128.0.103:22-139.178.68.195:40030.service.
Dec 13 14:25:53.764914 sshd[1361]: Accepted publickey for core from 139.178.68.195 port 40030 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:25:53.766432 sshd[1361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:53.775028 systemd[1]: Started session-2.scope.
Dec 13 14:25:53.777649 systemd-logind[1212]: New session 2 of user core.
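The `+`-prefixed systemd-nspawn lines above are `set -x` trace output from the oem-gce container's launcher script: it starts one guest-agent daemon per service, collects their PIDs, and installs a SIGTERM trap that forwards the signal to all of them. A hedged reconstruction of that pattern is sketched below; `sleep` stands in for the real `/usr/bin/google_*_daemon` binaries, which (like the exact script body) are assumptions inferred from the trace, not the shipped script.

```shell
#!/bin/bash
# Sketch of the traced launcher pattern: start one background daemon per
# service name, remember each PID, and forward SIGTERM to all of them.
stopping=0
trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
daemon_pids=()
for d in accounts clock_skew network; do
  # The trace shows /usr/bin/google_${d}_daemon here; sleep is a stand-in.
  sleep 60 &
  daemon_pids+=($!)
done
echo "launched ${#daemon_pids[@]} daemons"   # prints "launched 3 daemons"
kill "${daemon_pids[@]}" 2>/dev/null
```

The real script then blocks in `wait -n` on the three PIDs (36, 37, 38 in this boot), so the unit stays up while any daemon runs and systemd sees readiness via `systemd-notify --ready`.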
Dec 13 14:25:53.805673 groupadd[1365]: group added to /etc/group: name=google-sudoers, GID=1000
Dec 13 14:25:53.809654 groupadd[1365]: group added to /etc/gshadow: name=google-sudoers
Dec 13 14:25:53.836046 groupadd[1365]: new group: name=google-sudoers, GID=1000
Dec 13 14:25:53.871473 google-accounts[1352]: INFO Starting Google Accounts daemon.
Dec 13 14:25:53.916404 google-clock-skew[1353]: INFO Starting Google Clock Skew daemon.
Dec 13 14:25:53.929029 google-accounts[1352]: WARNING OS Login not installed.
Dec 13 14:25:53.929898 google-clock-skew[1353]: INFO Clock drift token has changed: 0.
Dec 13 14:25:53.932342 google-accounts[1352]: INFO Creating a new user account for 0.
Dec 13 14:25:53.939778 systemd-nspawn[1304]: hwclock: Cannot access the Hardware Clock via any known method.
Dec 13 14:25:53.939778 systemd-nspawn[1304]: hwclock: Use the --verbose option to see the details of our search for an access method.
Dec 13 14:25:53.940501 google-clock-skew[1353]: WARNING Failed to sync system time with hardware clock.
Dec 13 14:25:53.949286 systemd-nspawn[1304]: useradd: invalid user name '0': use --badname to ignore
Dec 13 14:25:53.950235 google-accounts[1352]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Dec 13 14:25:53.988194 google-networking[1354]: INFO Starting Google Networking daemon.
Dec 13 14:25:53.988820 sshd[1361]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:53.996142 systemd[1]: sshd@1-10.128.0.103:22-139.178.68.195:40030.service: Deactivated successfully.
Dec 13 14:25:53.997354 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:25:53.999808 systemd-logind[1212]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:25:54.001461 systemd-logind[1212]: Removed session 2.
Dec 13 14:25:54.034457 systemd[1]: Started sshd@2-10.128.0.103:22-139.178.68.195:40042.service.
Dec 13 14:25:54.327453 sshd[1386]: Accepted publickey for core from 139.178.68.195 port 40042 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:25:54.329341 sshd[1386]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:54.336824 systemd[1]: Started session-3.scope.
Dec 13 14:25:54.337435 systemd-logind[1212]: New session 3 of user core.
Dec 13 14:25:54.539310 sshd[1386]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:54.543315 systemd[1]: sshd@2-10.128.0.103:22-139.178.68.195:40042.service: Deactivated successfully.
Dec 13 14:25:54.544362 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:25:54.545531 systemd-logind[1212]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:25:54.546904 systemd-logind[1212]: Removed session 3.
Dec 13 14:25:54.585939 systemd[1]: Started sshd@3-10.128.0.103:22-139.178.68.195:40044.service.
Dec 13 14:25:54.876975 sshd[1392]: Accepted publickey for core from 139.178.68.195 port 40044 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:25:54.878789 sshd[1392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:54.885821 systemd[1]: Started session-4.scope.
Dec 13 14:25:54.886624 systemd-logind[1212]: New session 4 of user core.
Dec 13 14:25:55.094490 sshd[1392]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:55.098456 systemd[1]: sshd@3-10.128.0.103:22-139.178.68.195:40044.service: Deactivated successfully.
Dec 13 14:25:55.099543 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:25:55.100393 systemd-logind[1212]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:25:55.101646 systemd-logind[1212]: Removed session 4.
Dec 13 14:25:55.140036 systemd[1]: Started sshd@4-10.128.0.103:22-139.178.68.195:40046.service.
Dec 13 14:25:55.425466 sshd[1398]: Accepted publickey for core from 139.178.68.195 port 40046 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:25:55.427315 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:25:55.433693 systemd-logind[1212]: New session 5 of user core.
Dec 13 14:25:55.434024 systemd[1]: Started session-5.scope.
Dec 13 14:25:55.621699 sudo[1401]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:25:55.622129 sudo[1401]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:25:55.653389 systemd[1]: Starting docker.service...
Dec 13 14:25:55.703141 env[1411]: time="2024-12-13T14:25:55.702998763Z" level=info msg="Starting up"
Dec 13 14:25:55.705019 env[1411]: time="2024-12-13T14:25:55.704993396Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:25:55.705152 env[1411]: time="2024-12-13T14:25:55.705129335Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:25:55.705288 env[1411]: time="2024-12-13T14:25:55.705269422Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:25:55.705359 env[1411]: time="2024-12-13T14:25:55.705345791Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:25:55.707885 env[1411]: time="2024-12-13T14:25:55.707691423Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:25:55.707885 env[1411]: time="2024-12-13T14:25:55.707715615Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:25:55.707885 env[1411]: time="2024-12-13T14:25:55.707739317Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:25:55.707885 env[1411]: time="2024-12-13T14:25:55.707756008Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:25:55.719312 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3074981916-merged.mount: Deactivated successfully.
Dec 13 14:25:55.748744 env[1411]: time="2024-12-13T14:25:55.748708570Z" level=info msg="Loading containers: start."
Dec 13 14:25:55.919605 kernel: Initializing XFRM netlink socket
Dec 13 14:25:55.962154 env[1411]: time="2024-12-13T14:25:55.962014005Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 14:25:56.042953 systemd-networkd[1028]: docker0: Link UP
Dec 13 14:25:56.060323 env[1411]: time="2024-12-13T14:25:56.060272653Z" level=info msg="Loading containers: done."
Dec 13 14:25:56.077282 env[1411]: time="2024-12-13T14:25:56.077218059Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 14:25:56.077633 env[1411]: time="2024-12-13T14:25:56.077514163Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 14:25:56.077744 env[1411]: time="2024-12-13T14:25:56.077704019Z" level=info msg="Daemon has completed initialization"
Dec 13 14:25:56.097811 systemd[1]: Started docker.service.
Dec 13 14:25:56.109907 env[1411]: time="2024-12-13T14:25:56.109853748Z" level=info msg="API listen on /run/docker.sock"
Dec 13 14:25:57.396428 env[1226]: time="2024-12-13T14:25:57.396346808Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Dec 13 14:25:57.798314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:25:57.798632 systemd[1]: Stopped kubelet.service.
Dec 13 14:25:57.798713 systemd[1]: kubelet.service: Consumed 1.468s CPU time.
Dec 13 14:25:57.800972 systemd[1]: Starting kubelet.service...
Dec 13 14:25:57.978311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3724479310.mount: Deactivated successfully.
Dec 13 14:25:58.038798 systemd[1]: Started kubelet.service.
Dec 13 14:25:58.123844 kubelet[1543]: E1213 14:25:58.123704 1543 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:25:58.130035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:25:58.130273 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:26:00.158748 env[1226]: time="2024-12-13T14:26:00.158655795Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:00.161718 env[1226]: time="2024-12-13T14:26:00.161639111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:00.164003 env[1226]: time="2024-12-13T14:26:00.163956328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:00.166365 env[1226]: time="2024-12-13T14:26:00.166324821Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:00.167500 env[1226]: time="2024-12-13T14:26:00.167441333Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Dec 13 14:26:00.183898 env[1226]: time="2024-12-13T14:26:00.183855470Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Dec 13 14:26:02.151265 env[1226]: time="2024-12-13T14:26:02.151193094Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:02.154340 env[1226]: time="2024-12-13T14:26:02.154299511Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:02.157332 env[1226]: time="2024-12-13T14:26:02.157287759Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:02.159989 env[1226]: time="2024-12-13T14:26:02.159948229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:02.161450 env[1226]: time="2024-12-13T14:26:02.161292324Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Dec 13 14:26:02.178160 env[1226]: time="2024-12-13T14:26:02.178112325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Dec 13 14:26:03.535956 env[1226]: time="2024-12-13T14:26:03.535876756Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:03.538980 env[1226]: time="2024-12-13T14:26:03.538932403Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:03.548769 env[1226]: time="2024-12-13T14:26:03.548701588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:03.549873 env[1226]: time="2024-12-13T14:26:03.549833442Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:03.550924 env[1226]: time="2024-12-13T14:26:03.550873661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Dec 13 14:26:03.566364 env[1226]: time="2024-12-13T14:26:03.566315318Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 14:26:04.677776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2195143270.mount: Deactivated successfully.
Dec 13 14:26:05.353886 env[1226]: time="2024-12-13T14:26:05.353814924Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:05.356265 env[1226]: time="2024-12-13T14:26:05.356212243Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:05.358272 env[1226]: time="2024-12-13T14:26:05.358234268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:05.359937 env[1226]: time="2024-12-13T14:26:05.359897916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:05.360553 env[1226]: time="2024-12-13T14:26:05.360512657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 14:26:05.374198 env[1226]: time="2024-12-13T14:26:05.374157748Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:26:05.754263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3832485739.mount: Deactivated successfully.
Dec 13 14:26:06.883982 env[1226]: time="2024-12-13T14:26:06.883900598Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:06.889503 env[1226]: time="2024-12-13T14:26:06.889437427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:06.895739 env[1226]: time="2024-12-13T14:26:06.895692024Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:06.898348 env[1226]: time="2024-12-13T14:26:06.898313357Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:06.899383 env[1226]: time="2024-12-13T14:26:06.899332065Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 14:26:06.913048 env[1226]: time="2024-12-13T14:26:06.912980365Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 14:26:07.292519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3534330868.mount: Deactivated successfully.
Dec 13 14:26:07.298985 env[1226]: time="2024-12-13T14:26:07.298901384Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:07.302002 env[1226]: time="2024-12-13T14:26:07.301950505Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:07.304429 env[1226]: time="2024-12-13T14:26:07.304380081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:07.306824 env[1226]: time="2024-12-13T14:26:07.306788433Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:07.308355 env[1226]: time="2024-12-13T14:26:07.307771204Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 14:26:07.322323 env[1226]: time="2024-12-13T14:26:07.322266798Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Dec 13 14:26:07.738266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838821050.mount: Deactivated successfully.
Dec 13 14:26:08.381345 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:26:08.381689 systemd[1]: Stopped kubelet.service.
Dec 13 14:26:08.383817 systemd[1]: Starting kubelet.service...
Dec 13 14:26:09.454188 systemd[1]: Started kubelet.service.
Dec 13 14:26:09.548695 kubelet[1586]: E1213 14:26:09.548637 1586 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:26:09.551826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:26:09.552044 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:26:11.087383 env[1226]: time="2024-12-13T14:26:11.087304224Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:11.090770 env[1226]: time="2024-12-13T14:26:11.090708029Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:11.093700 env[1226]: time="2024-12-13T14:26:11.093661897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:11.096582 env[1226]: time="2024-12-13T14:26:11.096533059Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:26:11.098030 env[1226]: time="2024-12-13T14:26:11.097980881Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Dec 13 14:26:14.903859 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 14:26:14.971653 systemd[1]: Stopped kubelet.service. Dec 13 14:26:14.974824 systemd[1]: Starting kubelet.service... Dec 13 14:26:15.000551 systemd[1]: Reloading. Dec 13 14:26:15.131735 /usr/lib/systemd/system-generators/torcx-generator[1677]: time="2024-12-13T14:26:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:26:15.131785 /usr/lib/systemd/system-generators/torcx-generator[1677]: time="2024-12-13T14:26:15Z" level=info msg="torcx already run" Dec 13 14:26:15.257198 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:26:15.257227 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:26:15.282267 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:26:15.432597 systemd[1]: Started kubelet.service. Dec 13 14:26:15.436184 systemd[1]: Stopping kubelet.service... Dec 13 14:26:15.436707 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:26:15.436971 systemd[1]: Stopped kubelet.service. Dec 13 14:26:15.439113 systemd[1]: Starting kubelet.service... Dec 13 14:26:15.633955 systemd[1]: Started kubelet.service. Dec 13 14:26:15.705272 kubelet[1728]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:26:15.705272 kubelet[1728]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:26:15.705272 kubelet[1728]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:26:15.705906 kubelet[1728]: I1213 14:26:15.705368 1728 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:26:16.254964 kubelet[1728]: I1213 14:26:16.254912 1728 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:26:16.254964 kubelet[1728]: I1213 14:26:16.254951 1728 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:26:16.255335 kubelet[1728]: I1213 14:26:16.255293 1728 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:26:16.296635 kubelet[1728]: E1213 14:26:16.296601 1728 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.300182 kubelet[1728]: I1213 14:26:16.300156 1728 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:26:16.315913 kubelet[1728]: I1213 14:26:16.315884 1728 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:26:16.316281 kubelet[1728]: I1213 14:26:16.316244 1728 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:26:16.316545 kubelet[1728]: I1213 14:26:16.316505 1728 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:26:16.316784 kubelet[1728]: I1213 14:26:16.316545 1728 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:26:16.316784 kubelet[1728]: I1213 14:26:16.316594 1728 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:26:16.316784 kubelet[1728]: I1213 
14:26:16.316741 1728 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:26:16.316959 kubelet[1728]: I1213 14:26:16.316884 1728 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:26:16.316959 kubelet[1728]: I1213 14:26:16.316907 1728 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:26:16.316959 kubelet[1728]: I1213 14:26:16.316949 1728 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:26:16.317099 kubelet[1728]: I1213 14:26:16.316990 1728 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:26:16.323667 kubelet[1728]: I1213 14:26:16.323638 1728 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:26:16.335697 kubelet[1728]: I1213 14:26:16.335650 1728 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:26:16.339348 kubelet[1728]: W1213 14:26:16.339317 1728 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:26:16.347831 kubelet[1728]: W1213 14:26:16.347755 1728 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.347831 kubelet[1728]: E1213 14:26:16.347827 1728 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.347999 kubelet[1728]: W1213 14:26:16.347925 1728 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.347999 kubelet[1728]: E1213 14:26:16.347978 1728 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.349283 kubelet[1728]: I1213 14:26:16.348441 1728 server.go:1256] "Started kubelet" Dec 13 14:26:16.354035 kubelet[1728]: I1213 14:26:16.353975 1728 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:26:16.355600 kubelet[1728]: I1213 14:26:16.355125 1728 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:26:16.362468 kubelet[1728]: I1213 14:26:16.362415 1728 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:26:16.362764 kubelet[1728]: I1213 14:26:16.362726 1728 server.go:233] 
"Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:26:16.367599 kubelet[1728]: E1213 14:26:16.365273 1728 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal.1810c2bda7429c78 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,UID:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 14:26:16.348408952 +0000 UTC m=+0.706285095,LastTimestamp:2024-12-13 14:26:16.348408952 +0000 UTC m=+0.706285095,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,}" Dec 13 14:26:16.373929 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 14:26:16.375089 kubelet[1728]: I1213 14:26:16.374111 1728 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:26:16.377769 kubelet[1728]: E1213 14:26:16.377749 1728 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:26:16.382914 kubelet[1728]: I1213 14:26:16.382776 1728 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:26:16.383385 kubelet[1728]: E1213 14:26:16.383360 1728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.103:6443: connect: connection refused" interval="200ms" Dec 13 14:26:16.384368 kubelet[1728]: I1213 14:26:16.384344 1728 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:26:16.384655 kubelet[1728]: I1213 14:26:16.384629 1728 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:26:16.386916 kubelet[1728]: I1213 14:26:16.386805 1728 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:26:16.386916 kubelet[1728]: I1213 14:26:16.386890 1728 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:26:16.387409 kubelet[1728]: I1213 14:26:16.387389 1728 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:26:16.400847 kubelet[1728]: I1213 14:26:16.400819 1728 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:26:16.402994 kubelet[1728]: I1213 14:26:16.402500 1728 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:26:16.402994 kubelet[1728]: I1213 14:26:16.402553 1728 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:26:16.402994 kubelet[1728]: I1213 14:26:16.402607 1728 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:26:16.402994 kubelet[1728]: E1213 14:26:16.402693 1728 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:26:16.413658 kubelet[1728]: W1213 14:26:16.413554 1728 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.413658 kubelet[1728]: E1213 14:26:16.413658 1728 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.413945 kubelet[1728]: W1213 14:26:16.413881 1728 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.413945 kubelet[1728]: E1213 14:26:16.413943 1728 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:16.450297 kubelet[1728]: I1213 14:26:16.450260 1728 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:26:16.450297 kubelet[1728]: I1213 
14:26:16.450293 1728 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:26:16.450535 kubelet[1728]: I1213 14:26:16.450317 1728 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:26:16.452779 kubelet[1728]: I1213 14:26:16.452737 1728 policy_none.go:49] "None policy: Start" Dec 13 14:26:16.453595 kubelet[1728]: I1213 14:26:16.453551 1728 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:26:16.453784 kubelet[1728]: I1213 14:26:16.453760 1728 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:26:16.461177 systemd[1]: Created slice kubepods.slice. Dec 13 14:26:16.468219 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:26:16.472611 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:26:16.479525 kubelet[1728]: I1213 14:26:16.479501 1728 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:26:16.479828 kubelet[1728]: I1213 14:26:16.479806 1728 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:26:16.482845 kubelet[1728]: E1213 14:26:16.482812 1728 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" not found" Dec 13 14:26:16.492895 kubelet[1728]: I1213 14:26:16.492559 1728 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.493034 kubelet[1728]: E1213 14:26:16.493018 1728 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.103:6443/api/v1/nodes\": dial tcp 10.128.0.103:6443: connect: connection refused" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.503460 kubelet[1728]: I1213 14:26:16.503432 1728 topology_manager.go:215] "Topology Admit Handler" podUID="e364a33cb3ecd7dc5fae69faa87ba5c4" 
podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.509098 kubelet[1728]: I1213 14:26:16.508858 1728 topology_manager.go:215] "Topology Admit Handler" podUID="8bd1ce3d9902f2f5b6bc2130eab88c58" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.516065 kubelet[1728]: I1213 14:26:16.516040 1728 topology_manager.go:215] "Topology Admit Handler" podUID="fefba3638401eb942b3020267158f85b" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.521894 systemd[1]: Created slice kubepods-burstable-pode364a33cb3ecd7dc5fae69faa87ba5c4.slice. Dec 13 14:26:16.545472 systemd[1]: Created slice kubepods-burstable-pod8bd1ce3d9902f2f5b6bc2130eab88c58.slice. Dec 13 14:26:16.552615 systemd[1]: Created slice kubepods-burstable-podfefba3638401eb942b3020267158f85b.slice. Dec 13 14:26:16.584497 kubelet[1728]: E1213 14:26:16.584442 1728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.103:6443: connect: connection refused" interval="400ms" Dec 13 14:26:16.688169 kubelet[1728]: I1213 14:26:16.688129 1728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fefba3638401eb942b3020267158f85b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"fefba3638401eb942b3020267158f85b\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.688379 kubelet[1728]: I1213 14:26:16.688191 1728 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.688379 kubelet[1728]: I1213 14:26:16.688226 1728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fefba3638401eb942b3020267158f85b-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"fefba3638401eb942b3020267158f85b\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.688379 kubelet[1728]: I1213 14:26:16.688257 1728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.688379 kubelet[1728]: I1213 14:26:16.688301 1728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.688648 kubelet[1728]: I1213 14:26:16.688341 1728 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.688648 kubelet[1728]: I1213 14:26:16.688377 1728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.688648 kubelet[1728]: I1213 14:26:16.688413 1728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8bd1ce3d9902f2f5b6bc2130eab88c58-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"8bd1ce3d9902f2f5b6bc2130eab88c58\") " pod="kube-system/kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.688648 kubelet[1728]: I1213 14:26:16.688452 1728 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fefba3638401eb942b3020267158f85b-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"fefba3638401eb942b3020267158f85b\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.698526 kubelet[1728]: I1213 14:26:16.698489 1728 kubelet_node_status.go:73] "Attempting 
to register node" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.698926 kubelet[1728]: E1213 14:26:16.698903 1728 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.103:6443/api/v1/nodes\": dial tcp 10.128.0.103:6443: connect: connection refused" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:16.842867 env[1226]: time="2024-12-13T14:26:16.842810458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,Uid:e364a33cb3ecd7dc5fae69faa87ba5c4,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:16.850502 env[1226]: time="2024-12-13T14:26:16.850449235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,Uid:8bd1ce3d9902f2f5b6bc2130eab88c58,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:16.856192 env[1226]: time="2024-12-13T14:26:16.856150511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,Uid:fefba3638401eb942b3020267158f85b,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:16.985636 kubelet[1728]: E1213 14:26:16.985584 1728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.103:6443: connect: connection refused" interval="800ms" Dec 13 14:26:17.107160 kubelet[1728]: I1213 14:26:17.106629 1728 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:17.107637 kubelet[1728]: E1213 14:26:17.107610 1728 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.103:6443/api/v1/nodes\": dial tcp 
10.128.0.103:6443: connect: connection refused" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:17.317405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472919937.mount: Deactivated successfully. Dec 13 14:26:17.324851 env[1226]: time="2024-12-13T14:26:17.324792479Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.327031 env[1226]: time="2024-12-13T14:26:17.326912133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.330029 env[1226]: time="2024-12-13T14:26:17.329977674Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.332808 env[1226]: time="2024-12-13T14:26:17.332773145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.335383 env[1226]: time="2024-12-13T14:26:17.335344446Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.337190 env[1226]: time="2024-12-13T14:26:17.337154398Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.338670 env[1226]: time="2024-12-13T14:26:17.338636987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.342058 env[1226]: time="2024-12-13T14:26:17.342007251Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.344803 env[1226]: time="2024-12-13T14:26:17.344772828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.347682 env[1226]: time="2024-12-13T14:26:17.347640733Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.352324 env[1226]: time="2024-12-13T14:26:17.352271301Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.355690 env[1226]: time="2024-12-13T14:26:17.355646014Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:17.383969 env[1226]: time="2024-12-13T14:26:17.383124894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:17.390573 env[1226]: time="2024-12-13T14:26:17.390479575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:17.390794 env[1226]: time="2024-12-13T14:26:17.390508144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:17.392910 env[1226]: time="2024-12-13T14:26:17.392369943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22c9d5a91d5b1ab3078faa9ce30ba0f9ac7c5c3d1ec51e28f29049583549ac24 pid=1764 runtime=io.containerd.runc.v2 Dec 13 14:26:17.429299 systemd[1]: Started cri-containerd-22c9d5a91d5b1ab3078faa9ce30ba0f9ac7c5c3d1ec51e28f29049583549ac24.scope. Dec 13 14:26:17.443474 kubelet[1728]: W1213 14:26:17.442086 1728 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.128.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:17.443474 kubelet[1728]: E1213 14:26:17.442180 1728 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:17.447604 env[1226]: time="2024-12-13T14:26:17.444891039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:17.447604 env[1226]: time="2024-12-13T14:26:17.444942805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:17.447604 env[1226]: time="2024-12-13T14:26:17.444972182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:17.447604 env[1226]: time="2024-12-13T14:26:17.447488433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:17.447604 env[1226]: time="2024-12-13T14:26:17.447585241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:17.447930 env[1226]: time="2024-12-13T14:26:17.447631508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:17.447994 env[1226]: time="2024-12-13T14:26:17.447895789Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/728bea01f8af53cfb69dbd8f52c6f22e80b6806997171e7ea7095738c4b334a3 pid=1800 runtime=io.containerd.runc.v2 Dec 13 14:26:17.449071 env[1226]: time="2024-12-13T14:26:17.449009879Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7e643bd0cddac4e8ec1f3ab14dea2b3c3c9c4ff87ac98e0842fdac90397bf6f pid=1792 runtime=io.containerd.runc.v2 Dec 13 14:26:17.474198 systemd[1]: Started cri-containerd-c7e643bd0cddac4e8ec1f3ab14dea2b3c3c9c4ff87ac98e0842fdac90397bf6f.scope. Dec 13 14:26:17.484896 systemd[1]: Started cri-containerd-728bea01f8af53cfb69dbd8f52c6f22e80b6806997171e7ea7095738c4b334a3.scope. 
Dec 13 14:26:17.560193 env[1226]: time="2024-12-13T14:26:17.560122958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,Uid:e364a33cb3ecd7dc5fae69faa87ba5c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"22c9d5a91d5b1ab3078faa9ce30ba0f9ac7c5c3d1ec51e28f29049583549ac24\"" Dec 13 14:26:17.563500 kubelet[1728]: E1213 14:26:17.563141 1728 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flat" Dec 13 14:26:17.570150 env[1226]: time="2024-12-13T14:26:17.570107087Z" level=info msg="CreateContainer within sandbox \"22c9d5a91d5b1ab3078faa9ce30ba0f9ac7c5c3d1ec51e28f29049583549ac24\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:26:17.597038 env[1226]: time="2024-12-13T14:26:17.596985632Z" level=info msg="CreateContainer within sandbox \"22c9d5a91d5b1ab3078faa9ce30ba0f9ac7c5c3d1ec51e28f29049583549ac24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9c1cbbfad0150cd289c18e1a8926fe1e9606bc214f3a645229a02d0cf8a7334f\"" Dec 13 14:26:17.597361 env[1226]: time="2024-12-13T14:26:17.596985081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,Uid:fefba3638401eb942b3020267158f85b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7e643bd0cddac4e8ec1f3ab14dea2b3c3c9c4ff87ac98e0842fdac90397bf6f\"" Dec 13 14:26:17.598412 env[1226]: time="2024-12-13T14:26:17.598370978Z" level=info msg="StartContainer for \"9c1cbbfad0150cd289c18e1a8926fe1e9606bc214f3a645229a02d0cf8a7334f\"" Dec 13 14:26:17.601513 kubelet[1728]: E1213 14:26:17.601483 1728 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" 
podName="kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-21291" Dec 13 14:26:17.603845 env[1226]: time="2024-12-13T14:26:17.603787133Z" level=info msg="CreateContainer within sandbox \"c7e643bd0cddac4e8ec1f3ab14dea2b3c3c9c4ff87ac98e0842fdac90397bf6f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:26:17.627957 env[1226]: time="2024-12-13T14:26:17.627885883Z" level=info msg="CreateContainer within sandbox \"c7e643bd0cddac4e8ec1f3ab14dea2b3c3c9c4ff87ac98e0842fdac90397bf6f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a87005a45abf9263a8bb17bc2e252ce6becf9b76b336ebe2ecc6d239a624f051\"" Dec 13 14:26:17.628854 env[1226]: time="2024-12-13T14:26:17.628804477Z" level=info msg="StartContainer for \"a87005a45abf9263a8bb17bc2e252ce6becf9b76b336ebe2ecc6d239a624f051\"" Dec 13 14:26:17.638486 systemd[1]: Started cri-containerd-9c1cbbfad0150cd289c18e1a8926fe1e9606bc214f3a645229a02d0cf8a7334f.scope. 
Dec 13 14:26:17.653369 env[1226]: time="2024-12-13T14:26:17.653318724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal,Uid:8bd1ce3d9902f2f5b6bc2130eab88c58,Namespace:kube-system,Attempt:0,} returns sandbox id \"728bea01f8af53cfb69dbd8f52c6f22e80b6806997171e7ea7095738c4b334a3\"" Dec 13 14:26:17.656024 kubelet[1728]: E1213 14:26:17.655625 1728 kubelet_pods.go:417] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-21291" Dec 13 14:26:17.657733 env[1226]: time="2024-12-13T14:26:17.657689757Z" level=info msg="CreateContainer within sandbox \"728bea01f8af53cfb69dbd8f52c6f22e80b6806997171e7ea7095738c4b334a3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:26:17.672792 kubelet[1728]: W1213 14:26:17.672657 1728 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.128.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:17.672792 kubelet[1728]: E1213 14:26:17.672736 1728 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:17.682942 env[1226]: time="2024-12-13T14:26:17.682887087Z" level=info msg="CreateContainer within sandbox \"728bea01f8af53cfb69dbd8f52c6f22e80b6806997171e7ea7095738c4b334a3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9fd35660132f0ee021b1ad99c059f14a769123649aed4a3d12b8c40f04f3dc2c\"" Dec 13 14:26:17.683518 env[1226]: 
time="2024-12-13T14:26:17.683481179Z" level=info msg="StartContainer for \"9fd35660132f0ee021b1ad99c059f14a769123649aed4a3d12b8c40f04f3dc2c\"" Dec 13 14:26:17.691428 systemd[1]: Started cri-containerd-a87005a45abf9263a8bb17bc2e252ce6becf9b76b336ebe2ecc6d239a624f051.scope. Dec 13 14:26:17.731330 kubelet[1728]: W1213 14:26:17.731205 1728 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.128.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:17.731330 kubelet[1728]: E1213 14:26:17.731290 1728 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:17.735506 systemd[1]: Started cri-containerd-9fd35660132f0ee021b1ad99c059f14a769123649aed4a3d12b8c40f04f3dc2c.scope. 
Dec 13 14:26:17.786938 kubelet[1728]: E1213 14:26:17.786750 1728 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.103:6443: connect: connection refused" interval="1.6s" Dec 13 14:26:17.790056 env[1226]: time="2024-12-13T14:26:17.790012568Z" level=info msg="StartContainer for \"a87005a45abf9263a8bb17bc2e252ce6becf9b76b336ebe2ecc6d239a624f051\" returns successfully" Dec 13 14:26:17.814846 env[1226]: time="2024-12-13T14:26:17.814778412Z" level=info msg="StartContainer for \"9c1cbbfad0150cd289c18e1a8926fe1e9606bc214f3a645229a02d0cf8a7334f\" returns successfully" Dec 13 14:26:17.850265 env[1226]: time="2024-12-13T14:26:17.850215320Z" level=info msg="StartContainer for \"9fd35660132f0ee021b1ad99c059f14a769123649aed4a3d12b8c40f04f3dc2c\" returns successfully" Dec 13 14:26:17.914905 kubelet[1728]: I1213 14:26:17.914296 1728 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:17.914905 kubelet[1728]: E1213 14:26:17.914758 1728 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.103:6443/api/v1/nodes\": dial tcp 10.128.0.103:6443: connect: connection refused" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:17.932140 kubelet[1728]: W1213 14:26:17.932002 1728 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.128.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:17.932140 kubelet[1728]: E1213 14:26:17.932108 1728 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.Node: failed to list *v1.Node: Get "https://10.128.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.103:6443: connect: connection refused Dec 13 14:26:19.525786 kubelet[1728]: I1213 14:26:19.525749 1728 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:21.464301 kubelet[1728]: E1213 14:26:21.464243 1728 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" not found" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:21.471923 kubelet[1728]: I1213 14:26:21.471880 1728 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:22.330774 kubelet[1728]: I1213 14:26:22.330730 1728 apiserver.go:52] "Watching apiserver" Dec 13 14:26:22.387905 kubelet[1728]: I1213 14:26:22.387866 1728 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:26:22.631697 kubelet[1728]: W1213 14:26:22.631539 1728 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:26:24.306646 systemd[1]: Reloading. 
Dec 13 14:26:24.438585 kubelet[1728]: W1213 14:26:24.438326 1728 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:26:24.465020 /usr/lib/systemd/system-generators/torcx-generator[2016]: time="2024-12-13T14:26:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:26:24.465083 /usr/lib/systemd/system-generators/torcx-generator[2016]: time="2024-12-13T14:26:24Z" level=info msg="torcx already run" Dec 13 14:26:24.566915 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:26:24.566941 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:26:24.591834 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:26:24.788418 systemd[1]: Stopping kubelet.service... Dec 13 14:26:24.788966 kubelet[1728]: I1213 14:26:24.788908 1728 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:26:24.807541 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:26:24.807834 systemd[1]: Stopped kubelet.service. Dec 13 14:26:24.807914 systemd[1]: kubelet.service: Consumed 1.193s CPU time. Dec 13 14:26:24.810352 systemd[1]: Starting kubelet.service... Dec 13 14:26:25.064208 systemd[1]: Started kubelet.service. 
Dec 13 14:26:25.164331 kubelet[2064]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:26:25.164331 kubelet[2064]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:26:25.164331 kubelet[2064]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:26:25.164982 kubelet[2064]: I1213 14:26:25.164396 2064 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:26:25.173248 kubelet[2064]: I1213 14:26:25.173212 2064 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:26:25.173248 kubelet[2064]: I1213 14:26:25.173244 2064 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:26:25.174113 kubelet[2064]: I1213 14:26:25.174047 2064 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:26:25.177017 kubelet[2064]: I1213 14:26:25.176745 2064 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:26:25.180513 kubelet[2064]: I1213 14:26:25.180081 2064 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:26:25.210878 kubelet[2064]: I1213 14:26:25.202329 2064 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:26:25.210878 kubelet[2064]: I1213 14:26:25.202760 2064 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:26:25.210878 kubelet[2064]: I1213 14:26:25.203131 2064 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:26:25.210878 kubelet[2064]: I1213 14:26:25.203169 2064 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:26:25.210878 kubelet[2064]: I1213 14:26:25.203187 2064 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:26:25.210878 kubelet[2064]: I1213 
14:26:25.203292 2064 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:26:25.211414 kubelet[2064]: I1213 14:26:25.203510 2064 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:26:25.211414 kubelet[2064]: I1213 14:26:25.203535 2064 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:26:25.211414 kubelet[2064]: I1213 14:26:25.203648 2064 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:26:25.211414 kubelet[2064]: I1213 14:26:25.203703 2064 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:26:25.220098 kubelet[2064]: I1213 14:26:25.212227 2064 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:26:25.220831 kubelet[2064]: I1213 14:26:25.220811 2064 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:26:25.227489 kubelet[2064]: I1213 14:26:25.227467 2064 server.go:1256] "Started kubelet" Dec 13 14:26:25.239198 sudo[2077]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:26:25.239693 sudo[2077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:26:25.241594 kubelet[2064]: I1213 14:26:25.241540 2064 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:26:25.252890 kubelet[2064]: I1213 14:26:25.252848 2064 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:26:25.255814 kubelet[2064]: I1213 14:26:25.255786 2064 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:26:25.257641 kubelet[2064]: I1213 14:26:25.257616 2064 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:26:25.258045 kubelet[2064]: I1213 14:26:25.258025 2064 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 
13 14:26:25.270641 kubelet[2064]: I1213 14:26:25.268530 2064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:26:25.270922 kubelet[2064]: I1213 14:26:25.270900 2064 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:26:25.271473 kubelet[2064]: I1213 14:26:25.271449 2064 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:26:25.271839 kubelet[2064]: I1213 14:26:25.271820 2064 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:26:25.273062 kubelet[2064]: I1213 14:26:25.272940 2064 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:26:25.273341 kubelet[2064]: I1213 14:26:25.273311 2064 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:26:25.285624 kubelet[2064]: I1213 14:26:25.275471 2064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:26:25.285624 kubelet[2064]: I1213 14:26:25.275537 2064 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:26:25.285624 kubelet[2064]: I1213 14:26:25.275615 2064 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:26:25.285624 kubelet[2064]: E1213 14:26:25.275724 2064 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:26:25.302896 kubelet[2064]: E1213 14:26:25.302871 2064 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:26:25.310916 kubelet[2064]: I1213 14:26:25.310858 2064 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:26:25.376219 kubelet[2064]: E1213 14:26:25.376116 2064 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:26:25.397766 kubelet[2064]: I1213 14:26:25.397738 2064 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.415003 kubelet[2064]: I1213 14:26:25.414974 2064 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.415326 kubelet[2064]: I1213 14:26:25.415310 2064 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.433474 kubelet[2064]: I1213 14:26:25.433444 2064 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:26:25.435689 kubelet[2064]: I1213 14:26:25.435645 2064 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:26:25.435877 kubelet[2064]: I1213 14:26:25.435861 2064 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:26:25.436165 kubelet[2064]: I1213 14:26:25.436147 2064 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:26:25.436305 kubelet[2064]: I1213 14:26:25.436291 2064 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:26:25.436414 kubelet[2064]: I1213 14:26:25.436402 2064 policy_none.go:49] "None policy: Start" Dec 13 14:26:25.437772 kubelet[2064]: I1213 14:26:25.437753 2064 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:26:25.437929 kubelet[2064]: I1213 14:26:25.437916 2064 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:26:25.438207 kubelet[2064]: I1213 
14:26:25.438190 2064 state_mem.go:75] "Updated machine memory state" Dec 13 14:26:25.449920 kubelet[2064]: I1213 14:26:25.449890 2064 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:26:25.451290 kubelet[2064]: I1213 14:26:25.451268 2064 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:26:25.597007 kubelet[2064]: I1213 14:26:25.596963 2064 topology_manager.go:215] "Topology Admit Handler" podUID="fefba3638401eb942b3020267158f85b" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.597209 kubelet[2064]: I1213 14:26:25.597131 2064 topology_manager.go:215] "Topology Admit Handler" podUID="e364a33cb3ecd7dc5fae69faa87ba5c4" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.597209 kubelet[2064]: I1213 14:26:25.597185 2064 topology_manager.go:215] "Topology Admit Handler" podUID="8bd1ce3d9902f2f5b6bc2130eab88c58" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.610025 kubelet[2064]: W1213 14:26:25.608824 2064 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:26:25.610025 kubelet[2064]: E1213 14:26:25.608976 2064 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.610025 kubelet[2064]: W1213 14:26:25.609432 2064 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 
characters must not contain dots] Dec 13 14:26:25.610312 kubelet[2064]: W1213 14:26:25.610149 2064 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 14:26:25.610312 kubelet[2064]: E1213 14:26:25.610242 2064 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.674983 kubelet[2064]: I1213 14:26:25.674875 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.675255 kubelet[2064]: I1213 14:26:25.675236 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.675413 kubelet[2064]: I1213 14:26:25.675395 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: 
\"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.675630 kubelet[2064]: I1213 14:26:25.675599 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.675867 kubelet[2064]: I1213 14:26:25.675850 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8bd1ce3d9902f2f5b6bc2130eab88c58-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"8bd1ce3d9902f2f5b6bc2130eab88c58\") " pod="kube-system/kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.676032 kubelet[2064]: I1213 14:26:25.676015 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fefba3638401eb942b3020267158f85b-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"fefba3638401eb942b3020267158f85b\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.676225 kubelet[2064]: I1213 14:26:25.676195 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fefba3638401eb942b3020267158f85b-usr-share-ca-certificates\") pod 
\"kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"fefba3638401eb942b3020267158f85b\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.676432 kubelet[2064]: I1213 14:26:25.676415 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fefba3638401eb942b3020267158f85b-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"fefba3638401eb942b3020267158f85b\") " pod="kube-system/kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:25.676645 kubelet[2064]: I1213 14:26:25.676597 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e364a33cb3ecd7dc5fae69faa87ba5c4-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" (UID: \"e364a33cb3ecd7dc5fae69faa87ba5c4\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" Dec 13 14:26:26.111045 sudo[2077]: pam_unix(sudo:session): session closed for user root Dec 13 14:26:26.217420 kubelet[2064]: I1213 14:26:26.217368 2064 apiserver.go:52] "Watching apiserver" Dec 13 14:26:26.272459 kubelet[2064]: I1213 14:26:26.272414 2064 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:26:26.424642 kubelet[2064]: I1213 14:26:26.424481 2064 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" podStartSLOduration=4.424395854 podStartE2EDuration="4.424395854s" podCreationTimestamp="2024-12-13 14:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-12-13 14:26:26.422611365 +0000 UTC m=+1.350724453" watchObservedRunningTime="2024-12-13 14:26:26.424395854 +0000 UTC m=+1.352508936" Dec 13 14:26:26.424900 kubelet[2064]: I1213 14:26:26.424695 2064 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" podStartSLOduration=1.424645022 podStartE2EDuration="1.424645022s" podCreationTimestamp="2024-12-13 14:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:26.411282613 +0000 UTC m=+1.339395694" watchObservedRunningTime="2024-12-13 14:26:26.424645022 +0000 UTC m=+1.352758107" Dec 13 14:26:26.440766 kubelet[2064]: I1213 14:26:26.440726 2064 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" podStartSLOduration=2.44063164 podStartE2EDuration="2.44063164s" podCreationTimestamp="2024-12-13 14:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:26.438702122 +0000 UTC m=+1.366815215" watchObservedRunningTime="2024-12-13 14:26:26.44063164 +0000 UTC m=+1.368744721" Dec 13 14:26:28.445139 sudo[1401]: pam_unix(sudo:session): session closed for user root Dec 13 14:26:28.488479 sshd[1398]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:28.492747 systemd-logind[1212]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:26:28.493042 systemd[1]: sshd@4-10.128.0.103:22-139.178.68.195:40046.service: Deactivated successfully. Dec 13 14:26:28.494158 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:26:28.494332 systemd[1]: session-5.scope: Consumed 6.979s CPU time. Dec 13 14:26:28.496185 systemd-logind[1212]: Removed session 5. 
Dec 13 14:26:29.756430 update_engine[1216]: I1213 14:26:29.755637 1216 update_attempter.cc:509] Updating boot flags... Dec 13 14:26:37.857341 kubelet[2064]: I1213 14:26:37.857284 2064 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:26:37.858196 env[1226]: time="2024-12-13T14:26:37.858128128Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:26:37.858830 kubelet[2064]: I1213 14:26:37.858803 2064 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:26:38.411548 kubelet[2064]: I1213 14:26:38.411514 2064 topology_manager.go:215] "Topology Admit Handler" podUID="09a0caf8-6732-479a-bb85-b8cc8de8d8ea" podNamespace="kube-system" podName="kube-proxy-8l9lw" Dec 13 14:26:38.420243 systemd[1]: Created slice kubepods-besteffort-pod09a0caf8_6732_479a_bb85_b8cc8de8d8ea.slice. Dec 13 14:26:38.423063 kubelet[2064]: I1213 14:26:38.423025 2064 topology_manager.go:215] "Topology Admit Handler" podUID="5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" podNamespace="kube-system" podName="cilium-r8cq7" Dec 13 14:26:38.431997 systemd[1]: Created slice kubepods-burstable-pod5a7d6e17_7e2d_407c_a9e3_2540c0dfb879.slice. 
Dec 13 14:26:38.446949 kubelet[2064]: W1213 14:26:38.446922 2064 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:26:38.447345 kubelet[2064]: E1213 14:26:38.447304 2064 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:26:38.447345 kubelet[2064]: W1213 14:26:38.447182 2064 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:26:38.447526 kubelet[2064]: E1213 14:26:38.447368 2064 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:26:38.447526 kubelet[2064]: W1213 14:26:38.447273 2064 
reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:26:38.447526 kubelet[2064]: E1213 14:26:38.447387 2064 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:26:38.461058 kubelet[2064]: I1213 14:26:38.461030 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-config-path\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.461287 kubelet[2064]: I1213 14:26:38.461258 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjlvf\" (UniqueName: \"kubernetes.io/projected/09a0caf8-6732-479a-bb85-b8cc8de8d8ea-kube-api-access-vjlvf\") pod \"kube-proxy-8l9lw\" (UID: \"09a0caf8-6732-479a-bb85-b8cc8de8d8ea\") " pod="kube-system/kube-proxy-8l9lw" Dec 13 14:26:38.461514 kubelet[2064]: I1213 14:26:38.461496 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-run\") pod \"cilium-r8cq7\" (UID: 
\"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.461756 kubelet[2064]: I1213 14:26:38.461738 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09a0caf8-6732-479a-bb85-b8cc8de8d8ea-xtables-lock\") pod \"kube-proxy-8l9lw\" (UID: \"09a0caf8-6732-479a-bb85-b8cc8de8d8ea\") " pod="kube-system/kube-proxy-8l9lw" Dec 13 14:26:38.461987 kubelet[2064]: I1213 14:26:38.461948 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-etc-cni-netd\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.462212 kubelet[2064]: I1213 14:26:38.462196 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cni-path\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.462427 kubelet[2064]: I1213 14:26:38.462410 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-lib-modules\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.462642 kubelet[2064]: I1213 14:26:38.462624 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-xtables-lock\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.462857 kubelet[2064]: I1213 14:26:38.462841 2064 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-host-proc-sys-net\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.463155 kubelet[2064]: I1213 14:26:38.463138 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hubble-tls\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.463371 kubelet[2064]: I1213 14:26:38.463332 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-clustermesh-secrets\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.463609 kubelet[2064]: I1213 14:26:38.463592 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-cgroup\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.463797 kubelet[2064]: I1213 14:26:38.463782 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hostproc\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.463972 kubelet[2064]: I1213 14:26:38.463958 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/09a0caf8-6732-479a-bb85-b8cc8de8d8ea-kube-proxy\") pod \"kube-proxy-8l9lw\" (UID: \"09a0caf8-6732-479a-bb85-b8cc8de8d8ea\") " pod="kube-system/kube-proxy-8l9lw" Dec 13 14:26:38.464162 kubelet[2064]: I1213 14:26:38.464148 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09a0caf8-6732-479a-bb85-b8cc8de8d8ea-lib-modules\") pod \"kube-proxy-8l9lw\" (UID: \"09a0caf8-6732-479a-bb85-b8cc8de8d8ea\") " pod="kube-system/kube-proxy-8l9lw" Dec 13 14:26:38.464311 kubelet[2064]: I1213 14:26:38.464299 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-host-proc-sys-kernel\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.464455 kubelet[2064]: I1213 14:26:38.464443 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45h7\" (UniqueName: \"kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-kube-api-access-h45h7\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.464654 kubelet[2064]: I1213 14:26:38.464637 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-bpf-maps\") pod \"cilium-r8cq7\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") " pod="kube-system/cilium-r8cq7" Dec 13 14:26:38.747850 env[1226]: time="2024-12-13T14:26:38.747704081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8l9lw,Uid:09a0caf8-6732-479a-bb85-b8cc8de8d8ea,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:38.778309 env[1226]: time="2024-12-13T14:26:38.778177423Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:38.778309 env[1226]: time="2024-12-13T14:26:38.778235285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:38.778309 env[1226]: time="2024-12-13T14:26:38.778254485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:38.779055 env[1226]: time="2024-12-13T14:26:38.778994562Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96fb4ba442867655cb6f6ac4259cfd4ce32249c3e17fbc01618d910e75fbc6cf pid=2168 runtime=io.containerd.runc.v2 Dec 13 14:26:38.811617 systemd[1]: Started cri-containerd-96fb4ba442867655cb6f6ac4259cfd4ce32249c3e17fbc01618d910e75fbc6cf.scope. Dec 13 14:26:38.867407 env[1226]: time="2024-12-13T14:26:38.867324308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8l9lw,Uid:09a0caf8-6732-479a-bb85-b8cc8de8d8ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"96fb4ba442867655cb6f6ac4259cfd4ce32249c3e17fbc01618d910e75fbc6cf\"" Dec 13 14:26:38.869370 kubelet[2064]: I1213 14:26:38.869334 2064 topology_manager.go:215] "Topology Admit Handler" podUID="61afd605-291c-4b3a-9769-dc35bff3785f" podNamespace="kube-system" podName="cilium-operator-5cc964979-59lqw" Dec 13 14:26:38.879096 systemd[1]: Created slice kubepods-besteffort-pod61afd605_291c_4b3a_9769_dc35bff3785f.slice. 
Dec 13 14:26:38.883601 env[1226]: time="2024-12-13T14:26:38.882819877Z" level=info msg="CreateContainer within sandbox \"96fb4ba442867655cb6f6ac4259cfd4ce32249c3e17fbc01618d910e75fbc6cf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:26:38.913883 env[1226]: time="2024-12-13T14:26:38.913820537Z" level=info msg="CreateContainer within sandbox \"96fb4ba442867655cb6f6ac4259cfd4ce32249c3e17fbc01618d910e75fbc6cf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"146ad17f2d993153a1131d828975efacb26c82cf9a495a5688a70c10e5765dff\"" Dec 13 14:26:38.914693 env[1226]: time="2024-12-13T14:26:38.914655055Z" level=info msg="StartContainer for \"146ad17f2d993153a1131d828975efacb26c82cf9a495a5688a70c10e5765dff\"" Dec 13 14:26:38.946379 systemd[1]: Started cri-containerd-146ad17f2d993153a1131d828975efacb26c82cf9a495a5688a70c10e5765dff.scope. Dec 13 14:26:38.969513 kubelet[2064]: I1213 14:26:38.969352 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmm7l\" (UniqueName: \"kubernetes.io/projected/61afd605-291c-4b3a-9769-dc35bff3785f-kube-api-access-jmm7l\") pod \"cilium-operator-5cc964979-59lqw\" (UID: \"61afd605-291c-4b3a-9769-dc35bff3785f\") " pod="kube-system/cilium-operator-5cc964979-59lqw" Dec 13 14:26:38.969513 kubelet[2064]: I1213 14:26:38.969439 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61afd605-291c-4b3a-9769-dc35bff3785f-cilium-config-path\") pod \"cilium-operator-5cc964979-59lqw\" (UID: \"61afd605-291c-4b3a-9769-dc35bff3785f\") " pod="kube-system/cilium-operator-5cc964979-59lqw" Dec 13 14:26:39.062807 env[1226]: time="2024-12-13T14:26:39.062755321Z" level=info msg="StartContainer for \"146ad17f2d993153a1131d828975efacb26c82cf9a495a5688a70c10e5765dff\" returns successfully" Dec 13 14:26:39.398157 kubelet[2064]: I1213 14:26:39.398023 2064 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8l9lw" podStartSLOduration=1.397970262 podStartE2EDuration="1.397970262s" podCreationTimestamp="2024-12-13 14:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:39.396249425 +0000 UTC m=+14.324362525" watchObservedRunningTime="2024-12-13 14:26:39.397970262 +0000 UTC m=+14.326083353" Dec 13 14:26:39.566440 kubelet[2064]: E1213 14:26:39.566383 2064 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 13 14:26:39.566685 kubelet[2064]: E1213 14:26:39.566517 2064 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-clustermesh-secrets podName:5a7d6e17-7e2d-407c-a9e3-2540c0dfb879 nodeName:}" failed. No retries permitted until 2024-12-13 14:26:40.06648733 +0000 UTC m=+14.994600417 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-clustermesh-secrets") pod "cilium-r8cq7" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879") : failed to sync secret cache: timed out waiting for the condition Dec 13 14:26:39.566909 kubelet[2064]: E1213 14:26:39.566865 2064 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:26:39.566999 kubelet[2064]: E1213 14:26:39.566940 2064 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-config-path podName:5a7d6e17-7e2d-407c-a9e3-2540c0dfb879 nodeName:}" failed. No retries permitted until 2024-12-13 14:26:40.066921201 +0000 UTC m=+14.995034320 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-config-path") pod "cilium-r8cq7" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879") : failed to sync configmap cache: timed out waiting for the condition Dec 13 14:26:39.568008 kubelet[2064]: E1213 14:26:39.567976 2064 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Dec 13 14:26:39.568171 kubelet[2064]: E1213 14:26:39.568153 2064 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-r8cq7: failed to sync secret cache: timed out waiting for the condition Dec 13 14:26:39.568372 kubelet[2064]: E1213 14:26:39.568355 2064 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hubble-tls podName:5a7d6e17-7e2d-407c-a9e3-2540c0dfb879 nodeName:}" failed. No retries permitted until 2024-12-13 14:26:40.068332004 +0000 UTC m=+14.996445094 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hubble-tls") pod "cilium-r8cq7" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879") : failed to sync secret cache: timed out waiting for the condition Dec 13 14:26:39.583388 systemd[1]: run-containerd-runc-k8s.io-96fb4ba442867655cb6f6ac4259cfd4ce32249c3e17fbc01618d910e75fbc6cf-runc.gedxAz.mount: Deactivated successfully. Dec 13 14:26:40.092598 env[1226]: time="2024-12-13T14:26:40.092447788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-59lqw,Uid:61afd605-291c-4b3a-9769-dc35bff3785f,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:40.128307 env[1226]: time="2024-12-13T14:26:40.128221284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:40.128526 env[1226]: time="2024-12-13T14:26:40.128272773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:40.128526 env[1226]: time="2024-12-13T14:26:40.128291358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:40.128526 env[1226]: time="2024-12-13T14:26:40.128469803Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09 pid=2365 runtime=io.containerd.runc.v2 Dec 13 14:26:40.158472 systemd[1]: Started cri-containerd-a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09.scope. Dec 13 14:26:40.226596 env[1226]: time="2024-12-13T14:26:40.226459146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-59lqw,Uid:61afd605-291c-4b3a-9769-dc35bff3785f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09\"" Dec 13 14:26:40.230769 env[1226]: time="2024-12-13T14:26:40.230134246Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:26:40.248495 env[1226]: time="2024-12-13T14:26:40.248431736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8cq7,Uid:5a7d6e17-7e2d-407c-a9e3-2540c0dfb879,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:40.271642 env[1226]: time="2024-12-13T14:26:40.270620045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:40.271642 env[1226]: time="2024-12-13T14:26:40.270668877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:40.271642 env[1226]: time="2024-12-13T14:26:40.270688110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:40.271642 env[1226]: time="2024-12-13T14:26:40.270986587Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01 pid=2409 runtime=io.containerd.runc.v2 Dec 13 14:26:40.289353 systemd[1]: Started cri-containerd-ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01.scope. Dec 13 14:26:40.334182 env[1226]: time="2024-12-13T14:26:40.334120041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r8cq7,Uid:5a7d6e17-7e2d-407c-a9e3-2540c0dfb879,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\"" Dec 13 14:26:41.444969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2018170419.mount: Deactivated successfully. 
Dec 13 14:26:42.513679 env[1226]: time="2024-12-13T14:26:42.513611035Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:42.516369 env[1226]: time="2024-12-13T14:26:42.516326403Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:42.518515 env[1226]: time="2024-12-13T14:26:42.518475361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:42.519378 env[1226]: time="2024-12-13T14:26:42.519337375Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:26:42.522854 env[1226]: time="2024-12-13T14:26:42.522815914Z" level=info msg="CreateContainer within sandbox \"a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:26:42.523012 env[1226]: time="2024-12-13T14:26:42.522974772Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:26:42.547380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1066312176.mount: Deactivated successfully. 
Dec 13 14:26:42.551698 env[1226]: time="2024-12-13T14:26:42.551651765Z" level=info msg="CreateContainer within sandbox \"a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\"" Dec 13 14:26:42.553527 env[1226]: time="2024-12-13T14:26:42.553461685Z" level=info msg="StartContainer for \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\"" Dec 13 14:26:42.592778 systemd[1]: Started cri-containerd-262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6.scope. Dec 13 14:26:42.631357 env[1226]: time="2024-12-13T14:26:42.631310495Z" level=info msg="StartContainer for \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\" returns successfully" Dec 13 14:26:48.638032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626047249.mount: Deactivated successfully. Dec 13 14:26:52.567832 env[1226]: time="2024-12-13T14:26:52.567740059Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:52.570599 env[1226]: time="2024-12-13T14:26:52.570534348Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:52.573299 env[1226]: time="2024-12-13T14:26:52.573258990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:52.574268 env[1226]: time="2024-12-13T14:26:52.574226868Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:26:52.578184 env[1226]: time="2024-12-13T14:26:52.578142636Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:26:52.598123 env[1226]: time="2024-12-13T14:26:52.595867026Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\"" Dec 13 14:26:52.600253 env[1226]: time="2024-12-13T14:26:52.600214152Z" level=info msg="StartContainer for \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\"" Dec 13 14:26:52.640084 systemd[1]: run-containerd-runc-k8s.io-803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062-runc.1Mul9T.mount: Deactivated successfully. Dec 13 14:26:52.646477 systemd[1]: Started cri-containerd-803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062.scope. Dec 13 14:26:52.681606 env[1226]: time="2024-12-13T14:26:52.681377149Z" level=info msg="StartContainer for \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\" returns successfully" Dec 13 14:26:52.692586 systemd[1]: cri-containerd-803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062.scope: Deactivated successfully. 
Dec 13 14:26:53.438961 kubelet[2064]: I1213 14:26:53.438926 2064 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-59lqw" podStartSLOduration=13.147370533 podStartE2EDuration="15.438847082s" podCreationTimestamp="2024-12-13 14:26:38 +0000 UTC" firstStartedPulling="2024-12-13 14:26:40.228401431 +0000 UTC m=+15.156514506" lastFinishedPulling="2024-12-13 14:26:42.519877963 +0000 UTC m=+17.447991055" observedRunningTime="2024-12-13 14:26:43.461573643 +0000 UTC m=+18.389686729" watchObservedRunningTime="2024-12-13 14:26:53.438847082 +0000 UTC m=+28.366960175" Dec 13 14:26:53.590945 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062-rootfs.mount: Deactivated successfully. Dec 13 14:26:54.758995 env[1226]: time="2024-12-13T14:26:54.758931101Z" level=info msg="shim disconnected" id=803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062 Dec 13 14:26:54.759672 env[1226]: time="2024-12-13T14:26:54.759639692Z" level=warning msg="cleaning up after shim disconnected" id=803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062 namespace=k8s.io Dec 13 14:26:54.760053 env[1226]: time="2024-12-13T14:26:54.760011985Z" level=info msg="cleaning up dead shim" Dec 13 14:26:54.771788 env[1226]: time="2024-12-13T14:26:54.771736971Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2532 runtime=io.containerd.runc.v2\n" Dec 13 14:26:55.431096 env[1226]: time="2024-12-13T14:26:55.429419552Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:26:55.448856 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1459602200.mount: Deactivated successfully. 
Dec 13 14:26:55.465220 env[1226]: time="2024-12-13T14:26:55.465164836Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\"" Dec 13 14:26:55.466118 env[1226]: time="2024-12-13T14:26:55.466084943Z" level=info msg="StartContainer for \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\"" Dec 13 14:26:55.468752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669234033.mount: Deactivated successfully. Dec 13 14:26:55.499497 systemd[1]: Started cri-containerd-942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4.scope. Dec 13 14:26:55.569624 env[1226]: time="2024-12-13T14:26:55.568977996Z" level=info msg="StartContainer for \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\" returns successfully" Dec 13 14:26:55.582375 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:26:55.583126 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:26:55.583690 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:26:55.586396 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:26:55.595089 systemd[1]: cri-containerd-942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4.scope: Deactivated successfully. Dec 13 14:26:55.605497 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 14:26:55.629971 env[1226]: time="2024-12-13T14:26:55.629909906Z" level=info msg="shim disconnected" id=942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4 Dec 13 14:26:55.629971 env[1226]: time="2024-12-13T14:26:55.629960826Z" level=warning msg="cleaning up after shim disconnected" id=942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4 namespace=k8s.io Dec 13 14:26:55.629971 env[1226]: time="2024-12-13T14:26:55.629975225Z" level=info msg="cleaning up dead shim" Dec 13 14:26:55.640328 env[1226]: time="2024-12-13T14:26:55.640281319Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2596 runtime=io.containerd.runc.v2\n" Dec 13 14:26:56.433078 env[1226]: time="2024-12-13T14:26:56.432824345Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:26:56.443374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4-rootfs.mount: Deactivated successfully. Dec 13 14:26:56.464105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24481574.mount: Deactivated successfully. Dec 13 14:26:56.480695 env[1226]: time="2024-12-13T14:26:56.480631155Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\"" Dec 13 14:26:56.487463 env[1226]: time="2024-12-13T14:26:56.481643258Z" level=info msg="StartContainer for \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\"" Dec 13 14:26:56.516251 systemd[1]: Started cri-containerd-24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59.scope. 
Dec 13 14:26:56.560508 env[1226]: time="2024-12-13T14:26:56.560445239Z" level=info msg="StartContainer for \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\" returns successfully" Dec 13 14:26:56.566272 systemd[1]: cri-containerd-24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59.scope: Deactivated successfully. Dec 13 14:26:56.603786 env[1226]: time="2024-12-13T14:26:56.603719047Z" level=info msg="shim disconnected" id=24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59 Dec 13 14:26:56.603786 env[1226]: time="2024-12-13T14:26:56.603785471Z" level=warning msg="cleaning up after shim disconnected" id=24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59 namespace=k8s.io Dec 13 14:26:56.604137 env[1226]: time="2024-12-13T14:26:56.603799089Z" level=info msg="cleaning up dead shim" Dec 13 14:26:56.616145 env[1226]: time="2024-12-13T14:26:56.616096443Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2655 runtime=io.containerd.runc.v2\n" Dec 13 14:26:57.438604 env[1226]: time="2024-12-13T14:26:57.438385569Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:26:57.443110 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59-rootfs.mount: Deactivated successfully. Dec 13 14:26:57.463631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2621283599.mount: Deactivated successfully. 
Dec 13 14:26:57.467610 env[1226]: time="2024-12-13T14:26:57.467535219Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\"" Dec 13 14:26:57.468734 env[1226]: time="2024-12-13T14:26:57.468693974Z" level=info msg="StartContainer for \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\"" Dec 13 14:26:57.504738 systemd[1]: Started cri-containerd-6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f.scope. Dec 13 14:26:57.541684 systemd[1]: cri-containerd-6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f.scope: Deactivated successfully. Dec 13 14:26:57.545091 env[1226]: time="2024-12-13T14:26:57.545041392Z" level=info msg="StartContainer for \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\" returns successfully" Dec 13 14:26:57.573358 env[1226]: time="2024-12-13T14:26:57.573293850Z" level=info msg="shim disconnected" id=6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f Dec 13 14:26:57.573687 env[1226]: time="2024-12-13T14:26:57.573361553Z" level=warning msg="cleaning up after shim disconnected" id=6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f namespace=k8s.io Dec 13 14:26:57.573687 env[1226]: time="2024-12-13T14:26:57.573376754Z" level=info msg="cleaning up dead shim" Dec 13 14:26:57.584907 env[1226]: time="2024-12-13T14:26:57.584854124Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2709 runtime=io.containerd.runc.v2\n" Dec 13 14:26:58.444603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f-rootfs.mount: Deactivated successfully. 
Dec 13 14:26:58.448780 env[1226]: time="2024-12-13T14:26:58.448727808Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:26:58.474477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183317224.mount: Deactivated successfully. Dec 13 14:26:58.478353 env[1226]: time="2024-12-13T14:26:58.478251161Z" level=info msg="CreateContainer within sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\"" Dec 13 14:26:58.484124 env[1226]: time="2024-12-13T14:26:58.484059777Z" level=info msg="StartContainer for \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\"" Dec 13 14:26:58.516085 systemd[1]: Started cri-containerd-5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d.scope. Dec 13 14:26:58.568685 env[1226]: time="2024-12-13T14:26:58.568556688Z" level=info msg="StartContainer for \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\" returns successfully" Dec 13 14:26:58.742183 kubelet[2064]: I1213 14:26:58.742075 2064 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:26:58.788721 kubelet[2064]: I1213 14:26:58.788679 2064 topology_manager.go:215] "Topology Admit Handler" podUID="c3fe6440-942b-4bd3-8a28-e84e0cdef085" podNamespace="kube-system" podName="coredns-76f75df574-gp9gl" Dec 13 14:26:58.793781 kubelet[2064]: I1213 14:26:58.793754 2064 topology_manager.go:215] "Topology Admit Handler" podUID="791348f5-6dca-4aa8-9829-c0fd3d8d0b82" podNamespace="kube-system" podName="coredns-76f75df574-2mvk2" Dec 13 14:26:58.800695 systemd[1]: Created slice kubepods-burstable-podc3fe6440_942b_4bd3_8a28_e84e0cdef085.slice. 
Dec 13 14:26:58.808408 systemd[1]: Created slice kubepods-burstable-pod791348f5_6dca_4aa8_9829_c0fd3d8d0b82.slice. Dec 13 14:26:58.912995 kubelet[2064]: I1213 14:26:58.912942 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3fe6440-942b-4bd3-8a28-e84e0cdef085-config-volume\") pod \"coredns-76f75df574-gp9gl\" (UID: \"c3fe6440-942b-4bd3-8a28-e84e0cdef085\") " pod="kube-system/coredns-76f75df574-gp9gl" Dec 13 14:26:58.913217 kubelet[2064]: I1213 14:26:58.913030 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76v8n\" (UniqueName: \"kubernetes.io/projected/791348f5-6dca-4aa8-9829-c0fd3d8d0b82-kube-api-access-76v8n\") pod \"coredns-76f75df574-2mvk2\" (UID: \"791348f5-6dca-4aa8-9829-c0fd3d8d0b82\") " pod="kube-system/coredns-76f75df574-2mvk2" Dec 13 14:26:58.913217 kubelet[2064]: I1213 14:26:58.913072 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5k4b\" (UniqueName: \"kubernetes.io/projected/c3fe6440-942b-4bd3-8a28-e84e0cdef085-kube-api-access-b5k4b\") pod \"coredns-76f75df574-gp9gl\" (UID: \"c3fe6440-942b-4bd3-8a28-e84e0cdef085\") " pod="kube-system/coredns-76f75df574-gp9gl" Dec 13 14:26:58.913217 kubelet[2064]: I1213 14:26:58.913105 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/791348f5-6dca-4aa8-9829-c0fd3d8d0b82-config-volume\") pod \"coredns-76f75df574-2mvk2\" (UID: \"791348f5-6dca-4aa8-9829-c0fd3d8d0b82\") " pod="kube-system/coredns-76f75df574-2mvk2" Dec 13 14:26:59.118605 env[1226]: time="2024-12-13T14:26:59.118528137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gp9gl,Uid:c3fe6440-942b-4bd3-8a28-e84e0cdef085,Namespace:kube-system,Attempt:0,}"
Dec 13 14:26:59.121456 env[1226]: time="2024-12-13T14:26:59.121409879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2mvk2,Uid:791348f5-6dca-4aa8-9829-c0fd3d8d0b82,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:59.475929 kubelet[2064]: I1213 14:26:59.475805 2064 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-r8cq7" podStartSLOduration=9.237080667 podStartE2EDuration="21.475745105s" podCreationTimestamp="2024-12-13 14:26:38 +0000 UTC" firstStartedPulling="2024-12-13 14:26:40.336009209 +0000 UTC m=+15.264122286" lastFinishedPulling="2024-12-13 14:26:52.574673651 +0000 UTC m=+27.502786724" observedRunningTime="2024-12-13 14:26:59.47486715 +0000 UTC m=+34.402980308" watchObservedRunningTime="2024-12-13 14:26:59.475745105 +0000 UTC m=+34.403858198" Dec 13 14:27:00.875102 systemd-networkd[1028]: cilium_host: Link UP Dec 13 14:27:00.883132 systemd-networkd[1028]: cilium_net: Link UP Dec 13 14:27:00.883433 systemd-networkd[1028]: cilium_net: Gained carrier Dec 13 14:27:00.890219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:27:00.890410 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:27:00.890679 systemd-networkd[1028]: cilium_host: Gained carrier Dec 13 14:27:00.895796 systemd-networkd[1028]: cilium_net: Gained IPv6LL Dec 13 14:27:01.034554 systemd-networkd[1028]: cilium_vxlan: Link UP Dec 13 14:27:01.034583 systemd-networkd[1028]: cilium_vxlan: Gained carrier Dec 13 14:27:01.337618 kernel: NET: Registered PF_ALG protocol family Dec 13 14:27:01.932793 systemd-networkd[1028]: cilium_host: Gained IPv6LL Dec 13 14:27:02.237860 systemd-networkd[1028]: lxc_health: Link UP Dec 13 14:27:02.252743 systemd-networkd[1028]: cilium_vxlan: Gained IPv6LL Dec 13 14:27:02.266703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:27:02.266358 systemd-networkd[1028]: lxc_health: Gained carrier
Dec 13 14:27:02.691655 systemd-networkd[1028]: lxc6e42e79906e5: Link UP Dec 13 14:27:02.704647 kernel: eth0: renamed from tmp6807c Dec 13 14:27:02.715399 systemd-networkd[1028]: lxcc1a8e940c3fb: Link UP Dec 13 14:27:02.732595 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6e42e79906e5: link becomes ready Dec 13 14:27:02.733775 systemd-networkd[1028]: lxc6e42e79906e5: Gained carrier Dec 13 14:27:02.741608 kernel: eth0: renamed from tmp8a9f9 Dec 13 14:27:02.759557 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc1a8e940c3fb: link becomes ready Dec 13 14:27:02.760022 systemd-networkd[1028]: lxcc1a8e940c3fb: Gained carrier Dec 13 14:27:03.788731 systemd-networkd[1028]: lxc_health: Gained IPv6LL Dec 13 14:27:03.980795 systemd-networkd[1028]: lxcc1a8e940c3fb: Gained IPv6LL Dec 13 14:27:04.428734 systemd-networkd[1028]: lxc6e42e79906e5: Gained IPv6LL Dec 13 14:27:07.715053 env[1226]: time="2024-12-13T14:27:07.714943852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:07.715682 env[1226]: time="2024-12-13T14:27:07.715027392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:07.715942 env[1226]: time="2024-12-13T14:27:07.715670374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:07.716364 env[1226]: time="2024-12-13T14:27:07.716307971Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6807cb49e112f75e3db2ed2b5c5c2be0bbaa3b2d9a98f50505e8df56af8ecda2 pid=3251 runtime=io.containerd.runc.v2
Dec 13 14:27:07.732147 env[1226]: time="2024-12-13T14:27:07.723829255Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:07.732147 env[1226]: time="2024-12-13T14:27:07.723919422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:07.732147 env[1226]: time="2024-12-13T14:27:07.723957330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:07.732147 env[1226]: time="2024-12-13T14:27:07.724188899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a9f9c74cfde1f99a33021d8bce66351c3a5b5533ca6145943765abbb1ea4a50 pid=3266 runtime=io.containerd.runc.v2 Dec 13 14:27:07.774813 systemd[1]: Started cri-containerd-8a9f9c74cfde1f99a33021d8bce66351c3a5b5533ca6145943765abbb1ea4a50.scope. Dec 13 14:27:07.810829 systemd[1]: run-containerd-runc-k8s.io-6807cb49e112f75e3db2ed2b5c5c2be0bbaa3b2d9a98f50505e8df56af8ecda2-runc.RzqWAJ.mount: Deactivated successfully. Dec 13 14:27:07.818599 systemd[1]: Started cri-containerd-6807cb49e112f75e3db2ed2b5c5c2be0bbaa3b2d9a98f50505e8df56af8ecda2.scope.
Dec 13 14:27:07.913945 env[1226]: time="2024-12-13T14:27:07.913886162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-2mvk2,Uid:791348f5-6dca-4aa8-9829-c0fd3d8d0b82,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a9f9c74cfde1f99a33021d8bce66351c3a5b5533ca6145943765abbb1ea4a50\"" Dec 13 14:27:07.920764 env[1226]: time="2024-12-13T14:27:07.920723194Z" level=info msg="CreateContainer within sandbox \"8a9f9c74cfde1f99a33021d8bce66351c3a5b5533ca6145943765abbb1ea4a50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:27:07.937885 env[1226]: time="2024-12-13T14:27:07.937831883Z" level=info msg="CreateContainer within sandbox \"8a9f9c74cfde1f99a33021d8bce66351c3a5b5533ca6145943765abbb1ea4a50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60a7caa13437398a331dae057e4415c2d1487073e7ed6bebba39dc44145eb1ce\"" Dec 13 14:27:07.938575 env[1226]: time="2024-12-13T14:27:07.938522843Z" level=info msg="StartContainer for \"60a7caa13437398a331dae057e4415c2d1487073e7ed6bebba39dc44145eb1ce\"" Dec 13 14:27:07.980531 systemd[1]: Started cri-containerd-60a7caa13437398a331dae057e4415c2d1487073e7ed6bebba39dc44145eb1ce.scope. 
Dec 13 14:27:08.002743 env[1226]: time="2024-12-13T14:27:08.002693156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gp9gl,Uid:c3fe6440-942b-4bd3-8a28-e84e0cdef085,Namespace:kube-system,Attempt:0,} returns sandbox id \"6807cb49e112f75e3db2ed2b5c5c2be0bbaa3b2d9a98f50505e8df56af8ecda2\"" Dec 13 14:27:08.007173 env[1226]: time="2024-12-13T14:27:08.007134404Z" level=info msg="CreateContainer within sandbox \"6807cb49e112f75e3db2ed2b5c5c2be0bbaa3b2d9a98f50505e8df56af8ecda2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:27:08.031292 env[1226]: time="2024-12-13T14:27:08.031242361Z" level=info msg="CreateContainer within sandbox \"6807cb49e112f75e3db2ed2b5c5c2be0bbaa3b2d9a98f50505e8df56af8ecda2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5be903cd67929e9feb704d293a49aa8a74dc1b0e14a3bb7dc55ab785e69d4c56\"" Dec 13 14:27:08.033620 env[1226]: time="2024-12-13T14:27:08.033586206Z" level=info msg="StartContainer for \"5be903cd67929e9feb704d293a49aa8a74dc1b0e14a3bb7dc55ab785e69d4c56\"" Dec 13 14:27:08.061484 env[1226]: time="2024-12-13T14:27:08.061436642Z" level=info msg="StartContainer for \"60a7caa13437398a331dae057e4415c2d1487073e7ed6bebba39dc44145eb1ce\" returns successfully" Dec 13 14:27:08.090927 systemd[1]: Started cri-containerd-5be903cd67929e9feb704d293a49aa8a74dc1b0e14a3bb7dc55ab785e69d4c56.scope. 
Dec 13 14:27:08.150592 env[1226]: time="2024-12-13T14:27:08.150527624Z" level=info msg="StartContainer for \"5be903cd67929e9feb704d293a49aa8a74dc1b0e14a3bb7dc55ab785e69d4c56\" returns successfully" Dec 13 14:27:08.499364 kubelet[2064]: I1213 14:27:08.499301 2064 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gp9gl" podStartSLOduration=30.499250914 podStartE2EDuration="30.499250914s" podCreationTimestamp="2024-12-13 14:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:08.497637845 +0000 UTC m=+43.425750937" watchObservedRunningTime="2024-12-13 14:27:08.499250914 +0000 UTC m=+43.427364002" Dec 13 14:27:08.542692 kubelet[2064]: I1213 14:27:08.542652 2064 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-2mvk2" podStartSLOduration=30.542596366 podStartE2EDuration="30.542596366s" podCreationTimestamp="2024-12-13 14:26:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:08.541183872 +0000 UTC m=+43.469296964" watchObservedRunningTime="2024-12-13 14:27:08.542596366 +0000 UTC m=+43.470709457" Dec 13 14:27:24.227123 systemd[1]: Started sshd@5-10.128.0.103:22-139.178.68.195:34264.service. Dec 13 14:27:24.526369 sshd[3410]: Accepted publickey for core from 139.178.68.195 port 34264 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:24.528961 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:24.537040 systemd[1]: Started session-6.scope. Dec 13 14:27:24.537857 systemd-logind[1212]: New session 6 of user core. Dec 13 14:27:24.832006 sshd[3410]: pam_unix(sshd:session): session closed for user core
Dec 13 14:27:24.836282 systemd-logind[1212]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:27:24.836823 systemd[1]: sshd@5-10.128.0.103:22-139.178.68.195:34264.service: Deactivated successfully. Dec 13 14:27:24.838071 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:27:24.839415 systemd-logind[1212]: Removed session 6. Dec 13 14:27:29.879145 systemd[1]: Started sshd@6-10.128.0.103:22-139.178.68.195:49102.service. Dec 13 14:27:30.170277 sshd[3425]: Accepted publickey for core from 139.178.68.195 port 49102 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:30.172722 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:30.179989 systemd[1]: Started session-7.scope. Dec 13 14:27:30.180640 systemd-logind[1212]: New session 7 of user core. Dec 13 14:27:30.468938 sshd[3425]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:30.473983 systemd[1]: sshd@6-10.128.0.103:22-139.178.68.195:49102.service: Deactivated successfully. Dec 13 14:27:30.475116 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:27:30.475641 systemd-logind[1212]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:27:30.476883 systemd-logind[1212]: Removed session 7. Dec 13 14:27:35.515388 systemd[1]: Started sshd@7-10.128.0.103:22-139.178.68.195:49104.service. Dec 13 14:27:35.800737 sshd[3438]: Accepted publickey for core from 139.178.68.195 port 49104 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:35.802736 sshd[3438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:35.809820 systemd[1]: Started session-8.scope. Dec 13 14:27:35.810817 systemd-logind[1212]: New session 8 of user core. Dec 13 14:27:36.086769 sshd[3438]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:36.091587 systemd-logind[1212]: Session 8 logged out. Waiting for processes to exit.
Dec 13 14:27:36.092110 systemd[1]: sshd@7-10.128.0.103:22-139.178.68.195:49104.service: Deactivated successfully. Dec 13 14:27:36.093334 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:27:36.094834 systemd-logind[1212]: Removed session 8. Dec 13 14:27:41.133897 systemd[1]: Started sshd@8-10.128.0.103:22-139.178.68.195:35240.service. Dec 13 14:27:41.419696 sshd[3453]: Accepted publickey for core from 139.178.68.195 port 35240 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:41.421877 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:41.428824 systemd[1]: Started session-9.scope. Dec 13 14:27:41.429664 systemd-logind[1212]: New session 9 of user core. Dec 13 14:27:41.706931 sshd[3453]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:41.711335 systemd[1]: sshd@8-10.128.0.103:22-139.178.68.195:35240.service: Deactivated successfully. Dec 13 14:27:41.712498 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:27:41.713742 systemd-logind[1212]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:27:41.715104 systemd-logind[1212]: Removed session 9. Dec 13 14:27:46.756346 systemd[1]: Started sshd@9-10.128.0.103:22-139.178.68.195:48058.service. Dec 13 14:27:47.047085 sshd[3465]: Accepted publickey for core from 139.178.68.195 port 48058 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:47.049343 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:47.056708 systemd[1]: Started session-10.scope. Dec 13 14:27:47.057790 systemd-logind[1212]: New session 10 of user core. Dec 13 14:27:47.343349 sshd[3465]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:47.348015 systemd[1]: sshd@9-10.128.0.103:22-139.178.68.195:48058.service: Deactivated successfully. Dec 13 14:27:47.349221 systemd[1]: session-10.scope: Deactivated successfully. 
Dec 13 14:27:47.350194 systemd-logind[1212]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:27:47.351380 systemd-logind[1212]: Removed session 10. Dec 13 14:27:52.391369 systemd[1]: Started sshd@10-10.128.0.103:22-139.178.68.195:48072.service. Dec 13 14:27:52.682027 sshd[3477]: Accepted publickey for core from 139.178.68.195 port 48072 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:52.684155 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:52.691155 systemd[1]: Started session-11.scope. Dec 13 14:27:52.692198 systemd-logind[1212]: New session 11 of user core. Dec 13 14:27:52.978099 sshd[3477]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:52.983103 systemd[1]: sshd@10-10.128.0.103:22-139.178.68.195:48072.service: Deactivated successfully. Dec 13 14:27:52.984318 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:27:52.985315 systemd-logind[1212]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:27:52.986634 systemd-logind[1212]: Removed session 11. Dec 13 14:27:53.023535 systemd[1]: Started sshd@11-10.128.0.103:22-139.178.68.195:48084.service. Dec 13 14:27:53.309065 sshd[3489]: Accepted publickey for core from 139.178.68.195 port 48084 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:53.311141 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:53.318126 systemd[1]: Started session-12.scope. Dec 13 14:27:53.318746 systemd-logind[1212]: New session 12 of user core. Dec 13 14:27:53.652180 sshd[3489]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:53.657052 systemd[1]: sshd@11-10.128.0.103:22-139.178.68.195:48084.service: Deactivated successfully. Dec 13 14:27:53.658195 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:27:53.659302 systemd-logind[1212]: Session 12 logged out. Waiting for processes to exit. 
Dec 13 14:27:53.660966 systemd-logind[1212]: Removed session 12. Dec 13 14:27:53.701235 systemd[1]: Started sshd@12-10.128.0.103:22-139.178.68.195:48086.service. Dec 13 14:27:53.994180 sshd[3499]: Accepted publickey for core from 139.178.68.195 port 48086 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:53.996466 sshd[3499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:54.003383 systemd[1]: Started session-13.scope. Dec 13 14:27:54.004345 systemd-logind[1212]: New session 13 of user core. Dec 13 14:27:54.298989 sshd[3499]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:54.303621 systemd-logind[1212]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:27:54.303903 systemd[1]: sshd@12-10.128.0.103:22-139.178.68.195:48086.service: Deactivated successfully. Dec 13 14:27:54.305106 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:27:54.306596 systemd-logind[1212]: Removed session 13. Dec 13 14:27:59.346009 systemd[1]: Started sshd@13-10.128.0.103:22-139.178.68.195:33836.service. Dec 13 14:27:59.631875 sshd[3512]: Accepted publickey for core from 139.178.68.195 port 33836 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:27:59.634022 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:59.641024 systemd[1]: Started session-14.scope. Dec 13 14:27:59.642134 systemd-logind[1212]: New session 14 of user core. Dec 13 14:27:59.919993 sshd[3512]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:59.924767 systemd-logind[1212]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:27:59.925275 systemd[1]: sshd@13-10.128.0.103:22-139.178.68.195:33836.service: Deactivated successfully. Dec 13 14:27:59.926473 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:27:59.927926 systemd-logind[1212]: Removed session 14. 
Dec 13 14:28:04.966218 systemd[1]: Started sshd@14-10.128.0.103:22-139.178.68.195:33848.service. Dec 13 14:28:05.255292 sshd[3524]: Accepted publickey for core from 139.178.68.195 port 33848 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:05.257632 sshd[3524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:05.264541 systemd[1]: Started session-15.scope. Dec 13 14:28:05.265182 systemd-logind[1212]: New session 15 of user core. Dec 13 14:28:05.546466 sshd[3524]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:05.551084 systemd[1]: sshd@14-10.128.0.103:22-139.178.68.195:33848.service: Deactivated successfully. Dec 13 14:28:05.552245 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:28:05.553398 systemd-logind[1212]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:28:05.554637 systemd-logind[1212]: Removed session 15. Dec 13 14:28:05.596044 systemd[1]: Started sshd@15-10.128.0.103:22-139.178.68.195:33858.service. Dec 13 14:28:05.890921 sshd[3536]: Accepted publickey for core from 139.178.68.195 port 33858 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:05.893277 sshd[3536]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:05.900397 systemd[1]: Started session-16.scope. Dec 13 14:28:05.901023 systemd-logind[1212]: New session 16 of user core. Dec 13 14:28:06.258523 sshd[3536]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:06.263246 systemd[1]: sshd@15-10.128.0.103:22-139.178.68.195:33858.service: Deactivated successfully. Dec 13 14:28:06.264427 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:28:06.265464 systemd-logind[1212]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:28:06.266805 systemd-logind[1212]: Removed session 16. Dec 13 14:28:06.307292 systemd[1]: Started sshd@16-10.128.0.103:22-139.178.68.195:53926.service. 
Dec 13 14:28:06.601048 sshd[3545]: Accepted publickey for core from 139.178.68.195 port 53926 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:06.603071 sshd[3545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:06.610268 systemd[1]: Started session-17.scope. Dec 13 14:28:06.611329 systemd-logind[1212]: New session 17 of user core. Dec 13 14:28:08.407435 sshd[3545]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:08.414708 systemd-logind[1212]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:28:08.415176 systemd[1]: sshd@16-10.128.0.103:22-139.178.68.195:53926.service: Deactivated successfully. Dec 13 14:28:08.416362 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:28:08.417799 systemd-logind[1212]: Removed session 17. Dec 13 14:28:08.452134 systemd[1]: Started sshd@17-10.128.0.103:22-139.178.68.195:53936.service. Dec 13 14:28:08.737720 sshd[3565]: Accepted publickey for core from 139.178.68.195 port 53936 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:08.739506 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:08.747386 systemd[1]: Started session-18.scope. Dec 13 14:28:08.748086 systemd-logind[1212]: New session 18 of user core. Dec 13 14:28:09.155865 sshd[3565]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:09.160078 systemd[1]: sshd@17-10.128.0.103:22-139.178.68.195:53936.service: Deactivated successfully. Dec 13 14:28:09.161302 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:28:09.162464 systemd-logind[1212]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:28:09.164417 systemd-logind[1212]: Removed session 18. Dec 13 14:28:09.202486 systemd[1]: Started sshd@18-10.128.0.103:22-139.178.68.195:53952.service. 
Dec 13 14:28:09.490775 sshd[3576]: Accepted publickey for core from 139.178.68.195 port 53952 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:09.493154 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:09.500170 systemd[1]: Started session-19.scope. Dec 13 14:28:09.501232 systemd-logind[1212]: New session 19 of user core. Dec 13 14:28:09.772298 sshd[3576]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:09.777153 systemd[1]: sshd@18-10.128.0.103:22-139.178.68.195:53952.service: Deactivated successfully. Dec 13 14:28:09.778323 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:28:09.779341 systemd-logind[1212]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:28:09.780667 systemd-logind[1212]: Removed session 19. Dec 13 14:28:14.819985 systemd[1]: Started sshd@19-10.128.0.103:22-139.178.68.195:53968.service. Dec 13 14:28:15.106438 sshd[3591]: Accepted publickey for core from 139.178.68.195 port 53968 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:15.109055 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:15.117531 systemd[1]: Started session-20.scope. Dec 13 14:28:15.118397 systemd-logind[1212]: New session 20 of user core. Dec 13 14:28:15.400665 sshd[3591]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:15.405342 systemd[1]: sshd@19-10.128.0.103:22-139.178.68.195:53968.service: Deactivated successfully. Dec 13 14:28:15.406522 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:28:15.407658 systemd-logind[1212]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:28:15.409211 systemd-logind[1212]: Removed session 20. Dec 13 14:28:20.446827 systemd[1]: Started sshd@20-10.128.0.103:22-139.178.68.195:48426.service. 
Dec 13 14:28:20.732736 sshd[3604]: Accepted publickey for core from 139.178.68.195 port 48426 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:20.734849 sshd[3604]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:20.741637 systemd-logind[1212]: New session 21 of user core. Dec 13 14:28:20.742307 systemd[1]: Started session-21.scope. Dec 13 14:28:21.014986 sshd[3604]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:21.019905 systemd[1]: sshd@20-10.128.0.103:22-139.178.68.195:48426.service: Deactivated successfully. Dec 13 14:28:21.021263 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:28:21.022309 systemd-logind[1212]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:28:21.023637 systemd-logind[1212]: Removed session 21. Dec 13 14:28:26.061341 systemd[1]: Started sshd@21-10.128.0.103:22-139.178.68.195:35826.service. Dec 13 14:28:26.350385 sshd[3618]: Accepted publickey for core from 139.178.68.195 port 35826 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:26.352670 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:26.360226 systemd[1]: Started session-22.scope. Dec 13 14:28:26.360953 systemd-logind[1212]: New session 22 of user core. Dec 13 14:28:26.636194 sshd[3618]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:26.640917 systemd[1]: sshd@21-10.128.0.103:22-139.178.68.195:35826.service: Deactivated successfully. Dec 13 14:28:26.642223 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:28:26.643434 systemd-logind[1212]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:28:26.644765 systemd-logind[1212]: Removed session 22. Dec 13 14:28:26.683418 systemd[1]: Started sshd@22-10.128.0.103:22-139.178.68.195:35828.service. 
Dec 13 14:28:26.969438 sshd[3630]: Accepted publickey for core from 139.178.68.195 port 35828 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:26.971740 sshd[3630]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:26.977904 systemd-logind[1212]: New session 23 of user core. Dec 13 14:28:26.978628 systemd[1]: Started session-23.scope. Dec 13 14:28:28.476305 env[1226]: time="2024-12-13T14:28:28.476238759Z" level=info msg="StopContainer for \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\" with timeout 30 (s)" Dec 13 14:28:28.479885 env[1226]: time="2024-12-13T14:28:28.476830520Z" level=info msg="Stop container \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\" with signal terminated" Dec 13 14:28:28.492137 systemd[1]: run-containerd-runc-k8s.io-5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d-runc.KCHDtm.mount: Deactivated successfully. Dec 13 14:28:28.505930 systemd[1]: cri-containerd-262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6.scope: Deactivated successfully. 
Dec 13 14:28:28.537601 env[1226]: time="2024-12-13T14:28:28.535795789Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:28:28.546316 env[1226]: time="2024-12-13T14:28:28.546275215Z" level=info msg="StopContainer for \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\" with timeout 2 (s)" Dec 13 14:28:28.555808 env[1226]: time="2024-12-13T14:28:28.547744899Z" level=info msg="Stop container \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\" with signal terminated" Dec 13 14:28:28.554147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6-rootfs.mount: Deactivated successfully. Dec 13 14:28:28.561761 systemd-networkd[1028]: lxc_health: Link DOWN Dec 13 14:28:28.561771 systemd-networkd[1028]: lxc_health: Lost carrier Dec 13 14:28:28.582016 env[1226]: time="2024-12-13T14:28:28.581961868Z" level=info msg="shim disconnected" id=262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6 Dec 13 14:28:28.582408 env[1226]: time="2024-12-13T14:28:28.582354555Z" level=warning msg="cleaning up after shim disconnected" id=262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6 namespace=k8s.io Dec 13 14:28:28.582408 env[1226]: time="2024-12-13T14:28:28.582395536Z" level=info msg="cleaning up dead shim" Dec 13 14:28:28.587446 systemd[1]: cri-containerd-5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d.scope: Deactivated successfully. Dec 13 14:28:28.587882 systemd[1]: cri-containerd-5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d.scope: Consumed 9.302s CPU time. 
Dec 13 14:28:28.598783 env[1226]: time="2024-12-13T14:28:28.598722926Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3685 runtime=io.containerd.runc.v2\n" Dec 13 14:28:28.601692 env[1226]: time="2024-12-13T14:28:28.601634731Z" level=info msg="StopContainer for \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\" returns successfully" Dec 13 14:28:28.602629 env[1226]: time="2024-12-13T14:28:28.602534161Z" level=info msg="StopPodSandbox for \"a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09\"" Dec 13 14:28:28.603042 env[1226]: time="2024-12-13T14:28:28.602993056Z" level=info msg="Container to stop \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:28:28.608689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09-shm.mount: Deactivated successfully. Dec 13 14:28:28.625020 systemd[1]: cri-containerd-a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09.scope: Deactivated successfully. Dec 13 14:28:28.636940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d-rootfs.mount: Deactivated successfully. 
Dec 13 14:28:28.649754 env[1226]: time="2024-12-13T14:28:28.649694757Z" level=info msg="shim disconnected" id=5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d
Dec 13 14:28:28.650188 env[1226]: time="2024-12-13T14:28:28.650152242Z" level=warning msg="cleaning up after shim disconnected" id=5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d namespace=k8s.io
Dec 13 14:28:28.650381 env[1226]: time="2024-12-13T14:28:28.650353480Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:28.667038 env[1226]: time="2024-12-13T14:28:28.666986866Z" level=info msg="shim disconnected" id=a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09
Dec 13 14:28:28.667473 env[1226]: time="2024-12-13T14:28:28.667439674Z" level=warning msg="cleaning up after shim disconnected" id=a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09 namespace=k8s.io
Dec 13 14:28:28.668698 env[1226]: time="2024-12-13T14:28:28.668662312Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:28.674909 env[1226]: time="2024-12-13T14:28:28.674870027Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3724 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:28.677310 env[1226]: time="2024-12-13T14:28:28.677265746Z" level=info msg="StopContainer for \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\" returns successfully"
Dec 13 14:28:28.677768 env[1226]: time="2024-12-13T14:28:28.677725976Z" level=info msg="StopPodSandbox for \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\""
Dec 13 14:28:28.677876 env[1226]: time="2024-12-13T14:28:28.677805846Z" level=info msg="Container to stop \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:28.677876 env[1226]: time="2024-12-13T14:28:28.677829506Z" level=info msg="Container to stop \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:28.677876 env[1226]: time="2024-12-13T14:28:28.677849638Z" level=info msg="Container to stop \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:28.678079 env[1226]: time="2024-12-13T14:28:28.677869864Z" level=info msg="Container to stop \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:28.678079 env[1226]: time="2024-12-13T14:28:28.677889882Z" level=info msg="Container to stop \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:28.684815 env[1226]: time="2024-12-13T14:28:28.684781497Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3738 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:28.685396 env[1226]: time="2024-12-13T14:28:28.685359342Z" level=info msg="TearDown network for sandbox \"a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09\" successfully"
Dec 13 14:28:28.685594 env[1226]: time="2024-12-13T14:28:28.685542975Z" level=info msg="StopPodSandbox for \"a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09\" returns successfully"
Dec 13 14:28:28.689985 systemd[1]: cri-containerd-ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01.scope: Deactivated successfully.
Dec 13 14:28:28.729142 env[1226]: time="2024-12-13T14:28:28.727259180Z" level=info msg="shim disconnected" id=ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01
Dec 13 14:28:28.730215 env[1226]: time="2024-12-13T14:28:28.730156462Z" level=warning msg="cleaning up after shim disconnected" id=ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01 namespace=k8s.io
Dec 13 14:28:28.730215 env[1226]: time="2024-12-13T14:28:28.730192774Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:28.741253 env[1226]: time="2024-12-13T14:28:28.741211052Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3775 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:28.741670 env[1226]: time="2024-12-13T14:28:28.741633423Z" level=info msg="TearDown network for sandbox \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" successfully"
Dec 13 14:28:28.741783 env[1226]: time="2024-12-13T14:28:28.741670100Z" level=info msg="StopPodSandbox for \"ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01\" returns successfully"
Dec 13 14:28:28.794264 kubelet[2064]: I1213 14:28:28.794200 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmm7l\" (UniqueName: \"kubernetes.io/projected/61afd605-291c-4b3a-9769-dc35bff3785f-kube-api-access-jmm7l\") pod \"61afd605-291c-4b3a-9769-dc35bff3785f\" (UID: \"61afd605-291c-4b3a-9769-dc35bff3785f\") "
Dec 13 14:28:28.794264 kubelet[2064]: I1213 14:28:28.794269 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61afd605-291c-4b3a-9769-dc35bff3785f-cilium-config-path\") pod \"61afd605-291c-4b3a-9769-dc35bff3785f\" (UID: \"61afd605-291c-4b3a-9769-dc35bff3785f\") "
Dec 13 14:28:28.797478 kubelet[2064]: I1213 14:28:28.797435 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61afd605-291c-4b3a-9769-dc35bff3785f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "61afd605-291c-4b3a-9769-dc35bff3785f" (UID: "61afd605-291c-4b3a-9769-dc35bff3785f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:28:28.799816 kubelet[2064]: I1213 14:28:28.799777 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61afd605-291c-4b3a-9769-dc35bff3785f-kube-api-access-jmm7l" (OuterVolumeSpecName: "kube-api-access-jmm7l") pod "61afd605-291c-4b3a-9769-dc35bff3785f" (UID: "61afd605-291c-4b3a-9769-dc35bff3785f"). InnerVolumeSpecName "kube-api-access-jmm7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:28:28.895198 kubelet[2064]: I1213 14:28:28.895152 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h45h7\" (UniqueName: \"kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-kube-api-access-h45h7\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.895528 kubelet[2064]: I1213 14:28:28.895505 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-clustermesh-secrets\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.895793 kubelet[2064]: I1213 14:28:28.895772 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hostproc\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.895981 kubelet[2064]: I1213 14:28:28.895964 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-run\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.896130 kubelet[2064]: I1213 14:28:28.896113 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-cgroup\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.896272 kubelet[2064]: I1213 14:28:28.896254 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-bpf-maps\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.896407 kubelet[2064]: I1213 14:28:28.896390 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-lib-modules\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.896536 kubelet[2064]: I1213 14:28:28.896520 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-xtables-lock\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.896697 kubelet[2064]: I1213 14:28:28.896680 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-host-proc-sys-net\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.897068 kubelet[2064]: I1213 14:28:28.897048 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hubble-tls\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.897969 kubelet[2064]: I1213 14:28:28.896850 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.898108 kubelet[2064]: I1213 14:28:28.896879 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hostproc" (OuterVolumeSpecName: "hostproc") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.898332 kubelet[2064]: I1213 14:28:28.896899 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.898439 kubelet[2064]: I1213 14:28:28.896928 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.898537 kubelet[2064]: I1213 14:28:28.896949 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.900901 kubelet[2064]: I1213 14:28:28.896969 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.901050 kubelet[2064]: I1213 14:28:28.896988 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.902652 kubelet[2064]: I1213 14:28:28.898242 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-host-proc-sys-kernel\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.902946 kubelet[2064]: I1213 14:28:28.898269 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.903071 kubelet[2064]: I1213 14:28:28.902876 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:28:28.903233 kubelet[2064]: I1213 14:28:28.903213 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-config-path\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.904549 kubelet[2064]: I1213 14:28:28.904528 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-etc-cni-netd\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.904723 kubelet[2064]: I1213 14:28:28.904705 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cni-path\") pod \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\" (UID: \"5a7d6e17-7e2d-407c-a9e3-2540c0dfb879\") "
Dec 13 14:28:28.904885 kubelet[2064]: I1213 14:28:28.904870 2064 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61afd605-291c-4b3a-9769-dc35bff3785f-cilium-config-path\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.905020 kubelet[2064]: I1213 14:28:28.905005 2064 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hostproc\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.905133 kubelet[2064]: I1213 14:28:28.905119 2064 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-run\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.905242 kubelet[2064]: I1213 14:28:28.905228 2064 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-cgroup\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.905343 kubelet[2064]: I1213 14:28:28.905329 2064 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-bpf-maps\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.905498 kubelet[2064]: I1213 14:28:28.905482 2064 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-lib-modules\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.906767 kubelet[2064]: I1213 14:28:28.906724 2064 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-xtables-lock\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.906872 kubelet[2064]: I1213 14:28:28.906778 2064 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-host-proc-sys-net\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.906872 kubelet[2064]: I1213 14:28:28.906800 2064 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-hubble-tls\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.906872 kubelet[2064]: I1213 14:28:28.906841 2064 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jmm7l\" (UniqueName: \"kubernetes.io/projected/61afd605-291c-4b3a-9769-dc35bff3785f-kube-api-access-jmm7l\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:28.906872 kubelet[2064]: I1213 14:28:28.906675 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:28:28.906872 kubelet[2064]: I1213 14:28:28.904475 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-kube-api-access-h45h7" (OuterVolumeSpecName: "kube-api-access-h45h7") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "kube-api-access-h45h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:28:28.907132 kubelet[2064]: I1213 14:28:28.906914 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.907132 kubelet[2064]: I1213 14:28:28.906949 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cni-path" (OuterVolumeSpecName: "cni-path") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:28.907853 kubelet[2064]: I1213 14:28:28.907799 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" (UID: "5a7d6e17-7e2d-407c-a9e3-2540c0dfb879"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:28:29.007339 kubelet[2064]: I1213 14:28:29.007153 2064 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-host-proc-sys-kernel\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:29.007339 kubelet[2064]: I1213 14:28:29.007200 2064 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cilium-config-path\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:29.007339 kubelet[2064]: I1213 14:28:29.007222 2064 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-etc-cni-netd\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:29.007339 kubelet[2064]: I1213 14:28:29.007239 2064 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-cni-path\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:29.007339 kubelet[2064]: I1213 14:28:29.007263 2064 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h45h7\" (UniqueName: \"kubernetes.io/projected/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-kube-api-access-h45h7\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:29.007339 kubelet[2064]: I1213 14:28:29.007282 2064 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879-clustermesh-secrets\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\""
Dec 13 14:28:29.284740 systemd[1]: Removed slice kubepods-burstable-pod5a7d6e17_7e2d_407c_a9e3_2540c0dfb879.slice.
Dec 13 14:28:29.284918 systemd[1]: kubepods-burstable-pod5a7d6e17_7e2d_407c_a9e3_2540c0dfb879.slice: Consumed 9.444s CPU time.
Dec 13 14:28:29.288008 systemd[1]: Removed slice kubepods-besteffort-pod61afd605_291c_4b3a_9769_dc35bff3785f.slice.
Dec 13 14:28:29.484755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01-rootfs.mount: Deactivated successfully.
Dec 13 14:28:29.484908 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec88e9de196cdf9718efbef9f49c014d56d7da0f51c70101e04e4b8a087fad01-shm.mount: Deactivated successfully.
Dec 13 14:28:29.485023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2cca33d94cf9bb192fd5bba97f8c58dbffa293fa89aab68a9e8dec86eef1a09-rootfs.mount: Deactivated successfully.
Dec 13 14:28:29.485162 systemd[1]: var-lib-kubelet-pods-5a7d6e17\x2d7e2d\x2d407c\x2da9e3\x2d2540c0dfb879-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:28:29.485266 systemd[1]: var-lib-kubelet-pods-5a7d6e17\x2d7e2d\x2d407c\x2da9e3\x2d2540c0dfb879-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:28:29.485365 systemd[1]: var-lib-kubelet-pods-61afd605\x2d291c\x2d4b3a\x2d9769\x2ddc35bff3785f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djmm7l.mount: Deactivated successfully.
Dec 13 14:28:29.485474 systemd[1]: var-lib-kubelet-pods-5a7d6e17\x2d7e2d\x2d407c\x2da9e3\x2d2540c0dfb879-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh45h7.mount: Deactivated successfully.
Dec 13 14:28:29.675612 kubelet[2064]: I1213 14:28:29.675536 2064 scope.go:117] "RemoveContainer" containerID="262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6"
Dec 13 14:28:29.678264 env[1226]: time="2024-12-13T14:28:29.678220913Z" level=info msg="RemoveContainer for \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\""
Dec 13 14:28:29.687312 env[1226]: time="2024-12-13T14:28:29.687182391Z" level=info msg="RemoveContainer for \"262aa2cdddf42ce4ffa9ba8cd1c542aa2f4f042aa034bca80c228eec63c30cf6\" returns successfully"
Dec 13 14:28:29.689472 kubelet[2064]: I1213 14:28:29.687700 2064 scope.go:117] "RemoveContainer" containerID="5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d"
Dec 13 14:28:29.692644 env[1226]: time="2024-12-13T14:28:29.692550051Z" level=info msg="RemoveContainer for \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\""
Dec 13 14:28:29.700613 env[1226]: time="2024-12-13T14:28:29.699629332Z" level=info msg="RemoveContainer for \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\" returns successfully"
Dec 13 14:28:29.700861 kubelet[2064]: I1213 14:28:29.700706 2064 scope.go:117] "RemoveContainer" containerID="6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f"
Dec 13 14:28:29.702609 env[1226]: time="2024-12-13T14:28:29.702548407Z" level=info msg="RemoveContainer for \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\""
Dec 13 14:28:29.708205 env[1226]: time="2024-12-13T14:28:29.707888575Z" level=info msg="RemoveContainer for \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\" returns successfully"
Dec 13 14:28:29.708333 kubelet[2064]: I1213 14:28:29.708105 2064 scope.go:117] "RemoveContainer" containerID="24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59"
Dec 13 14:28:29.709930 env[1226]: time="2024-12-13T14:28:29.709893235Z" level=info msg="RemoveContainer for \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\""
Dec 13 14:28:29.715540 env[1226]: time="2024-12-13T14:28:29.715490563Z" level=info msg="RemoveContainer for \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\" returns successfully"
Dec 13 14:28:29.715894 kubelet[2064]: I1213 14:28:29.715871 2064 scope.go:117] "RemoveContainer" containerID="942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4"
Dec 13 14:28:29.718745 env[1226]: time="2024-12-13T14:28:29.718693517Z" level=info msg="RemoveContainer for \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\""
Dec 13 14:28:29.722601 env[1226]: time="2024-12-13T14:28:29.722537840Z" level=info msg="RemoveContainer for \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\" returns successfully"
Dec 13 14:28:29.722790 kubelet[2064]: I1213 14:28:29.722749 2064 scope.go:117] "RemoveContainer" containerID="803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062"
Dec 13 14:28:29.724219 env[1226]: time="2024-12-13T14:28:29.724181893Z" level=info msg="RemoveContainer for \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\""
Dec 13 14:28:29.728494 env[1226]: time="2024-12-13T14:28:29.728442227Z" level=info msg="RemoveContainer for \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\" returns successfully"
Dec 13 14:28:29.728741 kubelet[2064]: I1213 14:28:29.728719 2064 scope.go:117] "RemoveContainer" containerID="5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d"
Dec 13 14:28:29.729099 env[1226]: time="2024-12-13T14:28:29.728980925Z" level=error msg="ContainerStatus for \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\": not found"
Dec 13 14:28:29.729364 kubelet[2064]: E1213 14:28:29.729342 2064 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\": not found" containerID="5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d"
Dec 13 14:28:29.729481 kubelet[2064]: I1213 14:28:29.729468 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d"} err="failed to get container status \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\": rpc error: code = NotFound desc = an error occurred when try to find container \"5546c9331a1b1448a009111ec4bb2482f579b16f4fc45afe1b5e3aa07a0f6c0d\": not found"
Dec 13 14:28:29.729550 kubelet[2064]: I1213 14:28:29.729495 2064 scope.go:117] "RemoveContainer" containerID="6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f"
Dec 13 14:28:29.729816 env[1226]: time="2024-12-13T14:28:29.729737635Z" level=error msg="ContainerStatus for \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\": not found"
Dec 13 14:28:29.730041 kubelet[2064]: E1213 14:28:29.730019 2064 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\": not found" containerID="6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f"
Dec 13 14:28:29.730170 kubelet[2064]: I1213 14:28:29.730075 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f"} err="failed to get container status \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\": rpc error: code = NotFound desc = an error occurred when try to find container \"6b0ef21617ed5cc2a48ca1d33edec02521ad9e32af3b0b7b83cbbd8931af263f\": not found"
Dec 13 14:28:29.730170 kubelet[2064]: I1213 14:28:29.730098 2064 scope.go:117] "RemoveContainer" containerID="24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59"
Dec 13 14:28:29.730441 env[1226]: time="2024-12-13T14:28:29.730361171Z" level=error msg="ContainerStatus for \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\": not found"
Dec 13 14:28:29.730626 kubelet[2064]: E1213 14:28:29.730553 2064 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\": not found" containerID="24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59"
Dec 13 14:28:29.730626 kubelet[2064]: I1213 14:28:29.730619 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59"} err="failed to get container status \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\": rpc error: code = NotFound desc = an error occurred when try to find container \"24c431fc54ae168cac45f335a98d7d249dd1920f29d230a6100e36eab03c8f59\": not found"
Dec 13 14:28:29.730788 kubelet[2064]: I1213 14:28:29.730637 2064 scope.go:117] "RemoveContainer" containerID="942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4"
Dec 13 14:28:29.730905 env[1226]: time="2024-12-13T14:28:29.730836258Z" level=error msg="ContainerStatus for \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\": not found"
Dec 13 14:28:29.731184 kubelet[2064]: E1213 14:28:29.731125 2064 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\": not found" containerID="942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4"
Dec 13 14:28:29.731184 kubelet[2064]: I1213 14:28:29.731168 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4"} err="failed to get container status \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"942f1115f010ab81b39422fcbd7e7c22b5de48de77b5cdedd3a67c5fa96e94f4\": not found"
Dec 13 14:28:29.731184 kubelet[2064]: I1213 14:28:29.731185 2064 scope.go:117] "RemoveContainer" containerID="803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062"
Dec 13 14:28:29.731895 kubelet[2064]: E1213 14:28:29.731793 2064 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\": not found" containerID="803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062"
Dec 13 14:28:29.731895 kubelet[2064]: I1213 14:28:29.731843 2064 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062"} err="failed to get container status \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\": rpc error: code = NotFound desc = an error occurred when try to find container \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\": not found"
Dec 13 14:28:29.732272 env[1226]: time="2024-12-13T14:28:29.731428521Z" level=error msg="ContainerStatus for \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"803ff51ded69c4d612d745ca9c82034508b4de476c06f42cba4bf0683f388062\": not found"
Dec 13 14:28:30.455309 sshd[3630]: pam_unix(sshd:session): session closed for user core
Dec 13 14:28:30.459825 systemd[1]: sshd@22-10.128.0.103:22-139.178.68.195:35828.service: Deactivated successfully.
Dec 13 14:28:30.461014 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:28:30.462189 systemd-logind[1212]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:28:30.463448 systemd-logind[1212]: Removed session 23.
Dec 13 14:28:30.482682 kubelet[2064]: E1213 14:28:30.482655 2064 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:28:30.502203 systemd[1]: Started sshd@23-10.128.0.103:22-139.178.68.195:35844.service.
Dec 13 14:28:30.792467 sshd[3797]: Accepted publickey for core from 139.178.68.195 port 35844 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk
Dec 13 14:28:30.794391 sshd[3797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:28:30.801892 systemd[1]: Started session-24.scope.
Dec 13 14:28:30.803343 systemd-logind[1212]: New session 24 of user core.
Dec 13 14:28:31.280120 kubelet[2064]: I1213 14:28:31.280025 2064 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" path="/var/lib/kubelet/pods/5a7d6e17-7e2d-407c-a9e3-2540c0dfb879/volumes" Dec 13 14:28:31.281421 kubelet[2064]: I1213 14:28:31.281398 2064 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="61afd605-291c-4b3a-9769-dc35bff3785f" path="/var/lib/kubelet/pods/61afd605-291c-4b3a-9769-dc35bff3785f/volumes" Dec 13 14:28:31.895558 kubelet[2064]: I1213 14:28:31.895482 2064 topology_manager.go:215] "Topology Admit Handler" podUID="f4b247f8-ccf8-4089-86e6-ae35ff0db423" podNamespace="kube-system" podName="cilium-9hb2p" Dec 13 14:28:31.896589 kubelet[2064]: E1213 14:28:31.896528 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" containerName="apply-sysctl-overwrites" Dec 13 14:28:31.896767 kubelet[2064]: E1213 14:28:31.896748 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="61afd605-291c-4b3a-9769-dc35bff3785f" containerName="cilium-operator" Dec 13 14:28:31.896888 kubelet[2064]: E1213 14:28:31.896870 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" containerName="mount-cgroup" Dec 13 14:28:31.897019 kubelet[2064]: E1213 14:28:31.897002 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" containerName="mount-bpf-fs" Dec 13 14:28:31.897135 kubelet[2064]: E1213 14:28:31.897108 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" containerName="clean-cilium-state" Dec 13 14:28:31.897244 kubelet[2064]: E1213 14:28:31.897229 2064 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" containerName="cilium-agent" Dec 13 14:28:31.897403 kubelet[2064]: I1213 14:28:31.897384 2064 
memory_manager.go:354] "RemoveStaleState removing state" podUID="61afd605-291c-4b3a-9769-dc35bff3785f" containerName="cilium-operator" Dec 13 14:28:31.897525 kubelet[2064]: I1213 14:28:31.897509 2064 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a7d6e17-7e2d-407c-a9e3-2540c0dfb879" containerName="cilium-agent" Dec 13 14:28:31.909062 sshd[3797]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:31.911231 systemd[1]: Created slice kubepods-burstable-podf4b247f8_ccf8_4089_86e6_ae35ff0db423.slice. Dec 13 14:28:31.918166 systemd[1]: sshd@23-10.128.0.103:22-139.178.68.195:35844.service: Deactivated successfully. Dec 13 14:28:31.919312 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:28:31.923652 systemd-logind[1212]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:28:31.925020 systemd-logind[1212]: Removed session 24. Dec 13 14:28:31.929326 kubelet[2064]: W1213 14:28:31.929295 2064 reflector.go:539] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:28:31.931361 kubelet[2064]: E1213 14:28:31.931292 2064 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:28:31.931506 kubelet[2064]: W1213 14:28:31.930318 2064 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: 
secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:28:31.931732 kubelet[2064]: E1213 14:28:31.931713 2064 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:28:31.931848 kubelet[2064]: W1213 14:28:31.930370 2064 reflector.go:539] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:28:31.931957 kubelet[2064]: E1213 14:28:31.931942 2064 reflector.go:147] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:28:31.932052 kubelet[2064]: W1213 14:28:31.930415 2064 reflector.go:539] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User 
"system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:28:31.932168 kubelet[2064]: E1213 14:28:31.932151 2064 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal' and this object Dec 13 14:28:31.953805 systemd[1]: Started sshd@24-10.128.0.103:22-139.178.68.195:35850.service. Dec 13 14:28:32.031657 kubelet[2064]: I1213 14:28:32.031546 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-bpf-maps\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.031657 kubelet[2064]: I1213 14:28:32.031623 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-cgroup\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.031956 kubelet[2064]: I1213 14:28:32.031716 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-xtables-lock\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.031956 
kubelet[2064]: I1213 14:28:32.031827 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hostproc\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.031956 kubelet[2064]: I1213 14:28:32.031863 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-run\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.031956 kubelet[2064]: I1213 14:28:32.031895 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-lib-modules\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.031956 kubelet[2064]: I1213 14:28:32.031935 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b247f8-ccf8-4089-86e6-ae35ff0db423-clustermesh-secrets\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.032312 kubelet[2064]: I1213 14:28:32.031969 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-host-proc-sys-kernel\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.032312 kubelet[2064]: I1213 14:28:32.032001 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hubble-tls\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.032312 kubelet[2064]: I1213 14:28:32.032040 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqcx6\" (UniqueName: \"kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-kube-api-access-bqcx6\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.032312 kubelet[2064]: I1213 14:28:32.032073 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cni-path\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.032312 kubelet[2064]: I1213 14:28:32.032106 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-config-path\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.032312 kubelet[2064]: I1213 14:28:32.032138 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-etc-cni-netd\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.032633 kubelet[2064]: I1213 14:28:32.032183 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-ipsec-secrets\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.032633 kubelet[2064]: I1213 14:28:32.032221 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-host-proc-sys-net\") pod \"cilium-9hb2p\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " pod="kube-system/cilium-9hb2p" Dec 13 14:28:32.261740 sshd[3808]: Accepted publickey for core from 139.178.68.195 port 35850 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:32.263100 sshd[3808]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:32.269689 systemd-logind[1212]: New session 25 of user core. Dec 13 14:28:32.270131 systemd[1]: Started session-25.scope. Dec 13 14:28:32.539793 kubelet[2064]: E1213 14:28:32.539654 2064 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-9hb2p" podUID="f4b247f8-ccf8-4089-86e6-ae35ff0db423" Dec 13 14:28:32.564371 sshd[3808]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:32.569583 systemd[1]: sshd@24-10.128.0.103:22-139.178.68.195:35850.service: Deactivated successfully. Dec 13 14:28:32.570509 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:28:32.572001 systemd-logind[1212]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:28:32.573715 systemd-logind[1212]: Removed session 25. Dec 13 14:28:32.610976 systemd[1]: Started sshd@25-10.128.0.103:22-139.178.68.195:35860.service. 
Dec 13 14:28:32.838316 kubelet[2064]: I1213 14:28:32.838255 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-xtables-lock\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838316 kubelet[2064]: I1213 14:28:32.838313 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-run\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838636 kubelet[2064]: I1213 14:28:32.838340 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-bpf-maps\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838636 kubelet[2064]: I1213 14:28:32.838371 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-host-proc-sys-kernel\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838636 kubelet[2064]: I1213 14:28:32.838401 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-lib-modules\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838636 kubelet[2064]: I1213 14:28:32.838428 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-cgroup\") pod 
\"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838636 kubelet[2064]: I1213 14:28:32.838454 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hostproc\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838636 kubelet[2064]: I1213 14:28:32.838536 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-host-proc-sys-net\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838967 kubelet[2064]: I1213 14:28:32.838625 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cni-path\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838967 kubelet[2064]: I1213 14:28:32.838672 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqcx6\" (UniqueName: \"kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-kube-api-access-bqcx6\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838967 kubelet[2064]: I1213 14:28:32.838708 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-etc-cni-netd\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.838967 kubelet[2064]: I1213 14:28:32.838833 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.838967 kubelet[2064]: I1213 14:28:32.838872 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.839241 kubelet[2064]: I1213 14:28:32.838903 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.839241 kubelet[2064]: I1213 14:28:32.838929 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.839241 kubelet[2064]: I1213 14:28:32.838955 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.839241 kubelet[2064]: I1213 14:28:32.838982 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.839241 kubelet[2064]: I1213 14:28:32.839009 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.839528 kubelet[2064]: I1213 14:28:32.839034 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hostproc" (OuterVolumeSpecName: "hostproc") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.839528 kubelet[2064]: I1213 14:28:32.839059 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.839528 kubelet[2064]: I1213 14:28:32.839098 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cni-path" (OuterVolumeSpecName: "cni-path") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:32.845864 kubelet[2064]: I1213 14:28:32.845741 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-kube-api-access-bqcx6" (OuterVolumeSpecName: "kube-api-access-bqcx6") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "kube-api-access-bqcx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:32.846025 systemd[1]: var-lib-kubelet-pods-f4b247f8\x2dccf8\x2d4089\x2d86e6\x2dae35ff0db423-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbqcx6.mount: Deactivated successfully. Dec 13 14:28:32.905269 sshd[3820]: Accepted publickey for core from 139.178.68.195 port 35860 ssh2: RSA SHA256:46IhXbRhLpnxjtaVY1jZn9R5WA0GgkyNT5hX964MgBk Dec 13 14:28:32.907243 sshd[3820]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:32.912637 systemd-logind[1212]: New session 26 of user core. Dec 13 14:28:32.913952 systemd[1]: Started session-26.scope. 
Dec 13 14:28:32.940968 kubelet[2064]: I1213 14:28:32.940935 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b247f8-ccf8-4089-86e6-ae35ff0db423-clustermesh-secrets\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:32.941636 kubelet[2064]: I1213 14:28:32.941611 2064 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-xtables-lock\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.941749 kubelet[2064]: I1213 14:28:32.941646 2064 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-run\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.941749 kubelet[2064]: I1213 14:28:32.941668 2064 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-bpf-maps\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.941749 kubelet[2064]: I1213 14:28:32.941688 2064 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-host-proc-sys-kernel\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.941749 kubelet[2064]: I1213 14:28:32.941708 2064 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hostproc\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.941749 kubelet[2064]: I1213 14:28:32.941727 2064 
reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-lib-modules\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.941749 kubelet[2064]: I1213 14:28:32.941746 2064 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-cgroup\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.942147 kubelet[2064]: I1213 14:28:32.941770 2064 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-host-proc-sys-net\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.942147 kubelet[2064]: I1213 14:28:32.941790 2064 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cni-path\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.942147 kubelet[2064]: I1213 14:28:32.941811 2064 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-bqcx6\" (UniqueName: \"kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-kube-api-access-bqcx6\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.942147 kubelet[2064]: I1213 14:28:32.941831 2064 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4b247f8-ccf8-4089-86e6-ae35ff0db423-etc-cni-netd\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:32.953277 kubelet[2064]: I1213 14:28:32.949711 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f4b247f8-ccf8-4089-86e6-ae35ff0db423-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:32.952009 systemd[1]: var-lib-kubelet-pods-f4b247f8\x2dccf8\x2d4089\x2d86e6\x2dae35ff0db423-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:28:33.042108 kubelet[2064]: I1213 14:28:33.042043 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-ipsec-secrets\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:33.042340 kubelet[2064]: I1213 14:28:33.042176 2064 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4b247f8-ccf8-4089-86e6-ae35ff0db423-clustermesh-secrets\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:33.046141 kubelet[2064]: I1213 14:28:33.046094 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:33.134324 kubelet[2064]: E1213 14:28:33.134194 2064 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 14:28:33.134688 kubelet[2064]: E1213 14:28:33.134668 2064 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-config-path podName:f4b247f8-ccf8-4089-86e6-ae35ff0db423 nodeName:}" failed. No retries permitted until 2024-12-13 14:28:33.634639146 +0000 UTC m=+128.562752228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-config-path") pod "cilium-9hb2p" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423") : failed to sync configmap cache: timed out waiting for the condition Dec 13 14:28:33.135195 kubelet[2064]: E1213 14:28:33.134532 2064 projected.go:269] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Dec 13 14:28:33.135359 kubelet[2064]: E1213 14:28:33.135342 2064 projected.go:200] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-9hb2p: failed to sync secret cache: timed out waiting for the condition Dec 13 14:28:33.135513 kubelet[2064]: E1213 14:28:33.135500 2064 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hubble-tls podName:f4b247f8-ccf8-4089-86e6-ae35ff0db423 nodeName:}" failed. No retries permitted until 2024-12-13 14:28:33.635477093 +0000 UTC m=+128.563590175 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hubble-tls") pod "cilium-9hb2p" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423") : failed to sync secret cache: timed out waiting for the condition Dec 13 14:28:33.142675 kubelet[2064]: I1213 14:28:33.142635 2064 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-ipsec-secrets\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:33.148918 systemd[1]: var-lib-kubelet-pods-f4b247f8\x2dccf8\x2d4089\x2d86e6\x2dae35ff0db423-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:28:33.284731 systemd[1]: Removed slice kubepods-burstable-podf4b247f8_ccf8_4089_86e6_ae35ff0db423.slice. Dec 13 14:28:33.746540 kubelet[2064]: I1213 14:28:33.746491 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-config-path\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:33.746877 kubelet[2064]: I1213 14:28:33.746856 2064 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hubble-tls\") pod \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\" (UID: \"f4b247f8-ccf8-4089-86e6-ae35ff0db423\") " Dec 13 14:28:33.751354 kubelet[2064]: I1213 14:28:33.751304 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:28:33.760428 systemd[1]: var-lib-kubelet-pods-f4b247f8\x2dccf8\x2d4089\x2d86e6\x2dae35ff0db423-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:28:33.763262 kubelet[2064]: I1213 14:28:33.763229 2064 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f4b247f8-ccf8-4089-86e6-ae35ff0db423" (UID: "f4b247f8-ccf8-4089-86e6-ae35ff0db423"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:33.847284 kubelet[2064]: I1213 14:28:33.847238 2064 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4b247f8-ccf8-4089-86e6-ae35ff0db423-cilium-config-path\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:33.847284 kubelet[2064]: I1213 14:28:33.847288 2064 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4b247f8-ccf8-4089-86e6-ae35ff0db423-hubble-tls\") on node \"ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 14:28:34.060409 kubelet[2064]: I1213 14:28:34.060354 2064 topology_manager.go:215] "Topology Admit Handler" podUID="3d489927-164a-4061-85ed-a3f47213c27e" podNamespace="kube-system" podName="cilium-8pjqf" Dec 13 14:28:34.071659 systemd[1]: Created slice kubepods-burstable-pod3d489927_164a_4061_85ed_a3f47213c27e.slice. 
Dec 13 14:28:34.148858 kubelet[2064]: I1213 14:28:34.148800 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d489927-164a-4061-85ed-a3f47213c27e-clustermesh-secrets\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.148858 kubelet[2064]: I1213 14:28:34.148864 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d489927-164a-4061-85ed-a3f47213c27e-cilium-ipsec-secrets\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149158 kubelet[2064]: I1213 14:28:34.148906 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-host-proc-sys-kernel\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149158 kubelet[2064]: I1213 14:28:34.148937 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9httg\" (UniqueName: \"kubernetes.io/projected/3d489927-164a-4061-85ed-a3f47213c27e-kube-api-access-9httg\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149158 kubelet[2064]: I1213 14:28:34.148971 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-cilium-cgroup\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149158 kubelet[2064]: I1213 14:28:34.148998 2064 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-cni-path\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149158 kubelet[2064]: I1213 14:28:34.149030 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-host-proc-sys-net\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149158 kubelet[2064]: I1213 14:28:34.149060 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-bpf-maps\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149486 kubelet[2064]: I1213 14:28:34.149098 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-etc-cni-netd\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149486 kubelet[2064]: I1213 14:28:34.149129 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d489927-164a-4061-85ed-a3f47213c27e-hubble-tls\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149486 kubelet[2064]: I1213 14:28:34.149163 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/3d489927-164a-4061-85ed-a3f47213c27e-cilium-config-path\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149486 kubelet[2064]: I1213 14:28:34.149196 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-hostproc\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149486 kubelet[2064]: I1213 14:28:34.149230 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-lib-modules\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149486 kubelet[2064]: I1213 14:28:34.149266 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-xtables-lock\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.149817 kubelet[2064]: I1213 14:28:34.149300 2064 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d489927-164a-4061-85ed-a3f47213c27e-cilium-run\") pod \"cilium-8pjqf\" (UID: \"3d489927-164a-4061-85ed-a3f47213c27e\") " pod="kube-system/cilium-8pjqf" Dec 13 14:28:34.376964 env[1226]: time="2024-12-13T14:28:34.375927526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8pjqf,Uid:3d489927-164a-4061-85ed-a3f47213c27e,Namespace:kube-system,Attempt:0,}" Dec 13 14:28:34.403093 env[1226]: time="2024-12-13T14:28:34.402971353Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:28:34.403093 env[1226]: time="2024-12-13T14:28:34.403043164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:28:34.403093 env[1226]: time="2024-12-13T14:28:34.403062152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:28:34.403769 env[1226]: time="2024-12-13T14:28:34.403710934Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4 pid=3847 runtime=io.containerd.runc.v2 Dec 13 14:28:34.427521 systemd[1]: Started cri-containerd-c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4.scope. Dec 13 14:28:34.463739 env[1226]: time="2024-12-13T14:28:34.463687880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8pjqf,Uid:3d489927-164a-4061-85ed-a3f47213c27e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\"" Dec 13 14:28:34.469487 env[1226]: time="2024-12-13T14:28:34.469424000Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:28:34.484691 env[1226]: time="2024-12-13T14:28:34.484635121Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"46b6302fe7deffc39e78071caab3ba490f7cd3a7aa33cbc89143ef6d9d6f07f9\"" Dec 13 14:28:34.486755 env[1226]: time="2024-12-13T14:28:34.486708732Z" level=info msg="StartContainer for \"46b6302fe7deffc39e78071caab3ba490f7cd3a7aa33cbc89143ef6d9d6f07f9\"" Dec 13 
14:28:34.510131 systemd[1]: Started cri-containerd-46b6302fe7deffc39e78071caab3ba490f7cd3a7aa33cbc89143ef6d9d6f07f9.scope. Dec 13 14:28:34.551691 env[1226]: time="2024-12-13T14:28:34.551554053Z" level=info msg="StartContainer for \"46b6302fe7deffc39e78071caab3ba490f7cd3a7aa33cbc89143ef6d9d6f07f9\" returns successfully" Dec 13 14:28:34.565611 systemd[1]: cri-containerd-46b6302fe7deffc39e78071caab3ba490f7cd3a7aa33cbc89143ef6d9d6f07f9.scope: Deactivated successfully. Dec 13 14:28:34.605494 env[1226]: time="2024-12-13T14:28:34.605427293Z" level=info msg="shim disconnected" id=46b6302fe7deffc39e78071caab3ba490f7cd3a7aa33cbc89143ef6d9d6f07f9 Dec 13 14:28:34.605494 env[1226]: time="2024-12-13T14:28:34.605495266Z" level=warning msg="cleaning up after shim disconnected" id=46b6302fe7deffc39e78071caab3ba490f7cd3a7aa33cbc89143ef6d9d6f07f9 namespace=k8s.io Dec 13 14:28:34.605494 env[1226]: time="2024-12-13T14:28:34.605510489Z" level=info msg="cleaning up dead shim" Dec 13 14:28:34.616672 env[1226]: time="2024-12-13T14:28:34.616600215Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3929 runtime=io.containerd.runc.v2\n" Dec 13 14:28:34.724232 env[1226]: time="2024-12-13T14:28:34.723112008Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:28:34.739280 env[1226]: time="2024-12-13T14:28:34.739229089Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b6680e80e2ba917b1be677ec5104994a36ffd757e7f257a3a9b04eb0be7e94e2\"" Dec 13 14:28:34.740086 env[1226]: time="2024-12-13T14:28:34.740038920Z" level=info msg="StartContainer for \"b6680e80e2ba917b1be677ec5104994a36ffd757e7f257a3a9b04eb0be7e94e2\"" Dec 13 
14:28:34.765523 systemd[1]: Started cri-containerd-b6680e80e2ba917b1be677ec5104994a36ffd757e7f257a3a9b04eb0be7e94e2.scope. Dec 13 14:28:34.810052 env[1226]: time="2024-12-13T14:28:34.809998857Z" level=info msg="StartContainer for \"b6680e80e2ba917b1be677ec5104994a36ffd757e7f257a3a9b04eb0be7e94e2\" returns successfully" Dec 13 14:28:34.819893 systemd[1]: cri-containerd-b6680e80e2ba917b1be677ec5104994a36ffd757e7f257a3a9b04eb0be7e94e2.scope: Deactivated successfully. Dec 13 14:28:34.847105 env[1226]: time="2024-12-13T14:28:34.847046727Z" level=info msg="shim disconnected" id=b6680e80e2ba917b1be677ec5104994a36ffd757e7f257a3a9b04eb0be7e94e2 Dec 13 14:28:34.847504 env[1226]: time="2024-12-13T14:28:34.847466907Z" level=warning msg="cleaning up after shim disconnected" id=b6680e80e2ba917b1be677ec5104994a36ffd757e7f257a3a9b04eb0be7e94e2 namespace=k8s.io Dec 13 14:28:34.847504 env[1226]: time="2024-12-13T14:28:34.847499692Z" level=info msg="cleaning up dead shim" Dec 13 14:28:34.862147 env[1226]: time="2024-12-13T14:28:34.862090261Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3993 runtime=io.containerd.runc.v2\n" Dec 13 14:28:35.279827 kubelet[2064]: I1213 14:28:35.279784 2064 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f4b247f8-ccf8-4089-86e6-ae35ff0db423" path="/var/lib/kubelet/pods/f4b247f8-ccf8-4089-86e6-ae35ff0db423/volumes" Dec 13 14:28:35.484005 kubelet[2064]: E1213 14:28:35.483969 2064 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:28:35.724798 env[1226]: time="2024-12-13T14:28:35.724698463Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:28:35.779068 env[1226]: 
time="2024-12-13T14:28:35.778998907Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5\"" Dec 13 14:28:35.780388 env[1226]: time="2024-12-13T14:28:35.780348181Z" level=info msg="StartContainer for \"b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5\"" Dec 13 14:28:35.825339 systemd[1]: Started cri-containerd-b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5.scope. Dec 13 14:28:35.897334 env[1226]: time="2024-12-13T14:28:35.897002335Z" level=info msg="StartContainer for \"b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5\" returns successfully" Dec 13 14:28:35.899703 systemd[1]: cri-containerd-b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5.scope: Deactivated successfully. Dec 13 14:28:35.931844 env[1226]: time="2024-12-13T14:28:35.931784657Z" level=info msg="shim disconnected" id=b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5 Dec 13 14:28:35.931844 env[1226]: time="2024-12-13T14:28:35.931848593Z" level=warning msg="cleaning up after shim disconnected" id=b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5 namespace=k8s.io Dec 13 14:28:35.932213 env[1226]: time="2024-12-13T14:28:35.931864615Z" level=info msg="cleaning up dead shim" Dec 13 14:28:35.943621 env[1226]: time="2024-12-13T14:28:35.943557339Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4051 runtime=io.containerd.runc.v2\n" Dec 13 14:28:36.259094 systemd[1]: run-containerd-runc-k8s.io-b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5-runc.iYpoTd.mount: Deactivated successfully. 
Dec 13 14:28:36.259364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1da6828a99ddf7508f533632925884cc0a2a53305e0ed19fa881c083aa117d5-rootfs.mount: Deactivated successfully. Dec 13 14:28:36.276851 kubelet[2064]: E1213 14:28:36.276805 2064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2mvk2" podUID="791348f5-6dca-4aa8-9829-c0fd3d8d0b82" Dec 13 14:28:36.731614 env[1226]: time="2024-12-13T14:28:36.730273739Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:28:36.763748 env[1226]: time="2024-12-13T14:28:36.763692877Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b\"" Dec 13 14:28:36.764759 env[1226]: time="2024-12-13T14:28:36.764724106Z" level=info msg="StartContainer for \"2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b\"" Dec 13 14:28:36.813248 systemd[1]: Started cri-containerd-2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b.scope. Dec 13 14:28:36.853808 systemd[1]: cri-containerd-2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b.scope: Deactivated successfully. 
Dec 13 14:28:36.859702 env[1226]: time="2024-12-13T14:28:36.859636815Z" level=info msg="StartContainer for \"2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b\" returns successfully" Dec 13 14:28:36.894486 env[1226]: time="2024-12-13T14:28:36.894393021Z" level=info msg="shim disconnected" id=2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b Dec 13 14:28:36.894486 env[1226]: time="2024-12-13T14:28:36.894468231Z" level=warning msg="cleaning up after shim disconnected" id=2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b namespace=k8s.io Dec 13 14:28:36.894486 env[1226]: time="2024-12-13T14:28:36.894485080Z" level=info msg="cleaning up dead shim" Dec 13 14:28:36.907309 env[1226]: time="2024-12-13T14:28:36.907257848Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4106 runtime=io.containerd.runc.v2\n" Dec 13 14:28:37.259014 systemd[1]: run-containerd-runc-k8s.io-2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b-runc.cFd0y3.mount: Deactivated successfully. Dec 13 14:28:37.259179 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dceca5df104e66d64f792accfb29c7bc276ffce9632b70a1b6a52e13867cd8b-rootfs.mount: Deactivated successfully. 
Dec 13 14:28:37.735983 env[1226]: time="2024-12-13T14:28:37.735928970Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:28:37.767667 env[1226]: time="2024-12-13T14:28:37.767584664Z" level=info msg="CreateContainer within sandbox \"c4830c17a8ddbd70876ab2032b87e3f512c36d2700e1b5d18fa8cb3de8ccbff4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99\"" Dec 13 14:28:37.768841 env[1226]: time="2024-12-13T14:28:37.768802608Z" level=info msg="StartContainer for \"a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99\"" Dec 13 14:28:37.812643 systemd[1]: Started cri-containerd-a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99.scope. Dec 13 14:28:37.857938 env[1226]: time="2024-12-13T14:28:37.857875246Z" level=info msg="StartContainer for \"a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99\" returns successfully" Dec 13 14:28:38.198094 kubelet[2064]: I1213 14:28:38.197688 2064 setters.go:568] "Node became not ready" node="ci-3510-3-6-b37b390daf8f2086bc27.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:28:38Z","lastTransitionTime":"2024-12-13T14:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:28:38.276457 kubelet[2064]: E1213 14:28:38.276422 2064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2mvk2" podUID="791348f5-6dca-4aa8-9829-c0fd3d8d0b82" Dec 13 14:28:38.277815 
kubelet[2064]: E1213 14:28:38.277791 2064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-gp9gl" podUID="c3fe6440-942b-4bd3-8a28-e84e0cdef085" Dec 13 14:28:38.322617 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:28:38.762576 kubelet[2064]: I1213 14:28:38.762526 2064 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8pjqf" podStartSLOduration=4.762451901 podStartE2EDuration="4.762451901s" podCreationTimestamp="2024-12-13 14:28:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:38.758166945 +0000 UTC m=+133.686280036" watchObservedRunningTime="2024-12-13 14:28:38.762451901 +0000 UTC m=+133.690564992" Dec 13 14:28:39.350953 systemd[1]: run-containerd-runc-k8s.io-a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99-runc.2c9oxc.mount: Deactivated successfully. 
Dec 13 14:28:40.276879 kubelet[2064]: E1213 14:28:40.276806 2064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-2mvk2" podUID="791348f5-6dca-4aa8-9829-c0fd3d8d0b82" Dec 13 14:28:40.277483 kubelet[2064]: E1213 14:28:40.277442 2064 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-gp9gl" podUID="c3fe6440-942b-4bd3-8a28-e84e0cdef085" Dec 13 14:28:41.361512 systemd-networkd[1028]: lxc_health: Link UP Dec 13 14:28:41.384964 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:28:41.389710 systemd-networkd[1028]: lxc_health: Gained carrier Dec 13 14:28:41.596218 systemd[1]: run-containerd-runc-k8s.io-a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99-runc.jXrSAR.mount: Deactivated successfully. Dec 13 14:28:43.181284 systemd-networkd[1028]: lxc_health: Gained IPv6LL Dec 13 14:28:43.958358 systemd[1]: run-containerd-runc-k8s.io-a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99-runc.PwmJfr.mount: Deactivated successfully. Dec 13 14:28:46.197166 systemd[1]: run-containerd-runc-k8s.io-a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99-runc.z3R49S.mount: Deactivated successfully. Dec 13 14:28:48.455680 systemd[1]: run-containerd-runc-k8s.io-a77ae095d5a0c02fde6ace907f59eff89c8c14f9f515b2cb44164d91001f3c99-runc.82POvW.mount: Deactivated successfully. Dec 13 14:28:48.568001 sshd[3820]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:48.572782 systemd-logind[1212]: Session 26 logged out. Waiting for processes to exit. 
Dec 13 14:28:48.575601 systemd[1]: sshd@25-10.128.0.103:22-139.178.68.195:35860.service: Deactivated successfully. Dec 13 14:28:48.576857 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:28:48.579457 systemd-logind[1212]: Removed session 26.