Dec 13 02:13:11.104784 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:13:11.104828 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:13:11.104845 kernel: BIOS-provided physical RAM map: Dec 13 02:13:11.104858 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Dec 13 02:13:11.104871 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Dec 13 02:13:11.104885 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Dec 13 02:13:11.104903 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Dec 13 02:13:11.104917 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Dec 13 02:13:11.104930 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd276fff] usable Dec 13 02:13:11.104942 kernel: BIOS-e820: [mem 0x00000000bd277000-0x00000000bd280fff] ACPI data Dec 13 02:13:11.104954 kernel: BIOS-e820: [mem 0x00000000bd281000-0x00000000bf8ecfff] usable Dec 13 02:13:11.104967 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Dec 13 02:13:11.104981 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Dec 13 02:13:11.104996 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Dec 13 02:13:11.105019 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Dec 13 02:13:11.105034 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Dec 13 02:13:11.105048 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Dec 13 02:13:11.105060 kernel: NX (Execute Disable) protection: active Dec 13 02:13:11.105073 kernel: efi: EFI v2.70 by EDK II Dec 13 02:13:11.105087 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd277018 Dec 13 02:13:11.105100 kernel: random: crng init done Dec 13 02:13:11.105114 kernel: SMBIOS 2.4 present. 
Dec 13 02:13:11.105132 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024 Dec 13 02:13:11.105146 kernel: Hypervisor detected: KVM Dec 13 02:13:11.105161 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:13:11.105176 kernel: kvm-clock: cpu 0, msr 1e619b001, primary cpu clock Dec 13 02:13:11.105191 kernel: kvm-clock: using sched offset of 12620089926 cycles Dec 13 02:13:11.105207 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:13:11.105222 kernel: tsc: Detected 2299.998 MHz processor Dec 13 02:13:11.105235 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:13:11.105249 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:13:11.105271 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Dec 13 02:13:11.105289 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:13:11.105304 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Dec 13 02:13:11.105319 kernel: Using GB pages for direct mapping Dec 13 02:13:11.105332 kernel: Secure boot disabled Dec 13 02:13:11.105345 kernel: ACPI: Early table checksum verification disabled Dec 13 02:13:11.105359 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Dec 13 02:13:11.105373 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Dec 13 02:13:11.105388 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Dec 13 02:13:11.105414 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Dec 13 02:13:11.105430 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Dec 13 02:13:11.105447 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Dec 13 02:13:11.105463 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Dec 13 02:13:11.105480 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Dec 13 02:13:11.105496 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Dec 13 02:13:11.105517 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Dec 13 02:13:11.105533 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Dec 13 02:13:11.105549 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Dec 13 02:13:11.105565 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Dec 13 02:13:11.105579 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Dec 13 02:13:11.105594 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Dec 13 02:13:11.105610 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Dec 13 02:13:11.105641 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Dec 13 02:13:11.105659 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Dec 13 02:13:11.105680 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Dec 13 02:13:11.105696 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Dec 13 02:13:11.105712 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 02:13:11.105728 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 02:13:11.105744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 02:13:11.105759 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Dec 13 
02:13:11.105776 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Dec 13 02:13:11.105793 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Dec 13 02:13:11.105809 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Dec 13 02:13:11.105829 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Dec 13 02:13:11.105845 kernel: Zone ranges: Dec 13 02:13:11.105862 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:13:11.105877 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 02:13:11.105893 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Dec 13 02:13:11.105910 kernel: Movable zone start for each node Dec 13 02:13:11.105926 kernel: Early memory node ranges Dec 13 02:13:11.105942 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Dec 13 02:13:11.105958 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Dec 13 02:13:11.105978 kernel: node 0: [mem 0x0000000000100000-0x00000000bd276fff] Dec 13 02:13:11.105993 kernel: node 0: [mem 0x00000000bd281000-0x00000000bf8ecfff] Dec 13 02:13:11.106009 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Dec 13 02:13:11.106025 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Dec 13 02:13:11.106041 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Dec 13 02:13:11.106057 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:13:11.106073 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Dec 13 02:13:11.106089 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Dec 13 02:13:11.106105 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Dec 13 02:13:11.106125 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Dec 13 02:13:11.106142 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Dec 13 02:13:11.106159 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 02:13:11.106175 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:13:11.106191 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 02:13:11.106207 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:13:11.106223 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:13:11.106238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:13:11.106263 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:13:11.106283 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:13:11.106299 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 02:13:11.106315 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Dec 13 02:13:11.106330 kernel: Booting paravirtualized kernel on KVM Dec 13 02:13:11.106346 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:13:11.106363 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 02:13:11.106378 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 02:13:11.106395 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 02:13:11.106410 kernel: pcpu-alloc: [0] 0 1 Dec 13 02:13:11.106430 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:13:11.106446 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:13:11.106462 kernel: Built 1 zonelists, mobility 
grouping on. Total pages: 1932270 Dec 13 02:13:11.106477 kernel: Policy zone: Normal Dec 13 02:13:11.106495 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:13:11.106513 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:13:11.106528 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 02:13:11.106544 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 02:13:11.106561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:13:11.106581 kernel: Memory: 7515408K/7860544K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 344876K reserved, 0K cma-reserved) Dec 13 02:13:11.106597 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 02:13:11.106613 kernel: Kernel/User page tables isolation: enabled Dec 13 02:13:11.106644 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:13:11.106661 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:13:11.106677 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:13:11.106694 kernel: rcu: RCU event tracing is enabled. Dec 13 02:13:11.106711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 02:13:11.106733 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:13:11.106763 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:13:11.106780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 02:13:11.106802 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 02:13:11.106819 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 02:13:11.106836 kernel: Console: colour dummy device 80x25 Dec 13 02:13:11.106853 kernel: printk: console [ttyS0] enabled Dec 13 02:13:11.106870 kernel: ACPI: Core revision 20210730 Dec 13 02:13:11.106887 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:13:11.106905 kernel: x2apic enabled Dec 13 02:13:11.106926 kernel: Switched APIC routing to physical x2apic. Dec 13 02:13:11.106944 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Dec 13 02:13:11.106960 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 02:13:11.106978 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Dec 13 02:13:11.106995 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Dec 13 02:13:11.107012 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Dec 13 02:13:11.107029 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:13:11.107050 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Dec 13 02:13:11.107068 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Dec 13 02:13:11.107084 kernel: Spectre V2 : Mitigation: IBRS Dec 13 02:13:11.107101 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:13:11.107119 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:13:11.107136 kernel: RETBleed: Mitigation: IBRS Dec 13 02:13:11.107153 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 02:13:11.107170 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Dec 13 02:13:11.107187 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 02:13:11.107208 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 02:13:11.107225 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 02:13:11.107242 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:13:11.107269 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:13:11.107287 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:13:11.107304 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:13:11.107322 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 02:13:11.107338 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:13:11.107354 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:13:11.107374 kernel: LSM: Security Framework initializing Dec 13 02:13:11.107389 kernel: SELinux: Initializing. Dec 13 02:13:11.107406 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:13:11.107422 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:13:11.107439 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Dec 13 02:13:11.107455 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Dec 13 02:13:11.107472 kernel: signal: max sigframe size: 1776 Dec 13 02:13:11.107488 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:13:11.107504 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 02:13:11.107524 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:13:11.107541 kernel: x86: Booting SMP configuration: Dec 13 02:13:11.107558 kernel: .... node #0, CPUs: #1 Dec 13 02:13:11.107575 kernel: kvm-clock: cpu 1, msr 1e619b041, secondary cpu clock Dec 13 02:13:11.107593 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 02:13:11.107612 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 02:13:11.108673 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 02:13:11.108697 kernel: smpboot: Max logical packages: 1 Dec 13 02:13:11.108723 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Dec 13 02:13:11.108742 kernel: devtmpfs: initialized Dec 13 02:13:11.108760 kernel: x86/mm: Memory block size: 128MB Dec 13 02:13:11.108778 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Dec 13 02:13:11.108796 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:13:11.108814 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 02:13:11.108831 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:13:11.108849 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:13:11.108867 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:13:11.108889 kernel: audit: type=2000 audit(1734055989.609:1): state=initialized audit_enabled=0 res=1 Dec 13 02:13:11.108906 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:13:11.108924 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:13:11.108943 kernel: cpuidle: using governor menu Dec 13 02:13:11.108961 kernel: ACPI: bus type PCI registered Dec 13 02:13:11.108979 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:13:11.108997 kernel: dca service started, version 1.12.1 Dec 13 02:13:11.109014 kernel: PCI: Using configuration type 1 for base access Dec 13 02:13:11.109032 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 02:13:11.109055 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:13:11.109073 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:13:11.109090 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:13:11.109109 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:13:11.109126 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:13:11.109144 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:13:11.109162 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:13:11.109180 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:13:11.109198 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:13:11.109220 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 02:13:11.109237 kernel: ACPI: Interpreter enabled Dec 13 02:13:11.109262 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 02:13:11.109281 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:13:11.109298 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:13:11.109316 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 02:13:11.109334 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:13:11.109576 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:13:11.110929 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Dec 13 02:13:11.110962 kernel: PCI host bridge to bus 0000:00 Dec 13 02:13:11.111129 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:13:11.111290 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 02:13:11.111433 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 02:13:11.111574 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Dec 13 02:13:11.111743 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:13:11.111925 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 02:13:11.112096 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Dec 13 02:13:11.112269 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Dec 13 02:13:11.112437 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 02:13:11.112620 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Dec 13 02:13:11.117843 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Dec 13 02:13:11.118028 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Dec 13 02:13:11.118213 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 02:13:11.118393 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Dec 13 02:13:11.118562 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Dec 13 02:13:11.118780 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 02:13:11.118951 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Dec 13 02:13:11.119119 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Dec 13 02:13:11.119148 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 02:13:11.119167 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:13:11.119186 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:13:11.119205 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:13:11.119223 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 02:13:11.119240 kernel: iommu: Default domain type: Translated Dec 13 02:13:11.119266 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:13:11.119285 kernel: vgaarb: loaded Dec 13 02:13:11.119302 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:13:11.119323 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:13:11.119340 kernel: PTP clock support registered Dec 13 02:13:11.119359 kernel: Registered efivars operations Dec 13 02:13:11.119376 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:13:11.119394 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:13:11.119412 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Dec 13 02:13:11.119430 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Dec 13 02:13:11.119448 kernel: e820: reserve RAM buffer [mem 0xbd277000-0xbfffffff] Dec 13 02:13:11.119465 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Dec 13 02:13:11.119486 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Dec 13 02:13:11.119504 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:13:11.119521 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:13:11.119539 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:13:11.119558 kernel: pnp: PnP ACPI init Dec 13 02:13:11.119575 kernel: pnp: PnP ACPI: found 7 devices Dec 13 02:13:11.119593 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:13:11.119611 kernel: NET: Registered PF_INET protocol family Dec 13 02:13:11.119645 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 02:13:11.119668 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 02:13:11.119686 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:13:11.119704 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 02:13:11.119722 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Dec 13 02:13:11.119740 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 02:13:11.119758 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 02:13:11.119776 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 02:13:11.119794 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:13:11.119815 kernel: NET: Registered PF_XDP protocol family Dec 13 02:13:11.119978 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:13:11.120132 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:13:11.120288 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:13:11.120436 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Dec 13 02:13:11.120608 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 02:13:11.120664 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:13:11.120689 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 02:13:11.120707 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Dec 13 02:13:11.120725 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 02:13:11.120742 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Dec 13 02:13:11.120761 kernel: clocksource: Switched to clocksource tsc Dec 13 02:13:11.120778 kernel: Initialise system trusted keyrings Dec 13 02:13:11.120793 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 02:13:11.120810 kernel: Key type asymmetric registered Dec 13 02:13:11.120827 kernel: Asymmetric key parser 'x509' registered Dec 13 02:13:11.120847 kernel: Block layer SCSI 
generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:13:11.120865 kernel: io scheduler mq-deadline registered Dec 13 02:13:11.120883 kernel: io scheduler kyber registered Dec 13 02:13:11.120900 kernel: io scheduler bfq registered Dec 13 02:13:11.120918 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:13:11.120936 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Dec 13 02:13:11.121112 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Dec 13 02:13:11.121134 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Dec 13 02:13:11.121309 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Dec 13 02:13:11.121335 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Dec 13 02:13:11.121503 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Dec 13 02:13:11.121526 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:13:11.121544 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:13:11.121562 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 02:13:11.121579 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Dec 13 02:13:11.121597 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Dec 13 02:13:11.121834 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Dec 13 02:13:11.121865 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:13:11.121906 kernel: i8042: Warning: Keylock active Dec 13 02:13:11.121925 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:13:11.121943 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:13:11.122107 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 02:13:11.122267 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 02:13:11.122418 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:13:10 UTC (1734055990) Dec 13 02:13:11.122566 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 02:13:11.122592 kernel: intel_pstate: CPU model not supported Dec 13 02:13:11.122611 kernel: pstore: Registered efi as persistent store backend Dec 13 02:13:11.122651 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:13:11.122669 kernel: Segment Routing with IPv6 Dec 13 02:13:11.122687 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:13:11.122704 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:13:11.122722 kernel: Key type dns_resolver registered Dec 13 02:13:11.122739 kernel: IPI shorthand broadcast: enabled Dec 13 02:13:11.122757 kernel: sched_clock: Marking stable (730666194, 160968209)->(922345667, -30711264) Dec 13 02:13:11.122778 kernel: registered taskstats version 1 Dec 13 02:13:11.122795 kernel: Loading compiled-in X.509 certificates Dec 13 02:13:11.122813 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:13:11.122831 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:13:11.122848 kernel: Key type .fscrypt registered Dec 13 02:13:11.122866 kernel: Key type fscrypt-provisioning registered Dec 13 02:13:11.122883 kernel: pstore: Using crash dump compression: deflate Dec 13 02:13:11.122901 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:13:11.122919 kernel: ima: No architecture policies found Dec 13 02:13:11.122940 kernel: clk: Disabling unused clocks Dec 13 02:13:11.122956 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 
02:13:11.122972 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:13:11.122990 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:13:11.123007 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:13:11.123025 kernel: Run /init as init process Dec 13 02:13:11.123042 kernel: with arguments: Dec 13 02:13:11.123059 kernel: /init Dec 13 02:13:11.123076 kernel: with environment: Dec 13 02:13:11.123097 kernel: HOME=/ Dec 13 02:13:11.123114 kernel: TERM=linux Dec 13 02:13:11.123131 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:13:11.123153 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:13:11.123171 systemd[1]: Detected virtualization kvm. Dec 13 02:13:11.123190 systemd[1]: Detected architecture x86-64. Dec 13 02:13:11.123208 systemd[1]: Running in initrd. Dec 13 02:13:11.123230 systemd[1]: No hostname configured, using default hostname. Dec 13 02:13:11.123248 systemd[1]: Hostname set to <localhost>. Dec 13 02:13:11.123274 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:13:11.123293 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:13:11.123311 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:13:11.123330 systemd[1]: Reached target cryptsetup.target. Dec 13 02:13:11.123345 systemd[1]: Reached target paths.target. Dec 13 02:13:11.123362 systemd[1]: Reached target slices.target. Dec 13 02:13:11.123384 systemd[1]: Reached target swap.target. Dec 13 02:13:11.123402 systemd[1]: Reached target timers.target. Dec 13 02:13:11.123421 systemd[1]: Listening on iscsid.socket. Dec 13 02:13:11.123440 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:13:11.123459 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:13:11.123477 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:13:11.123495 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:13:11.123513 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:13:11.123535 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:13:11.123554 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:13:11.123592 systemd[1]: Reached target sockets.target. Dec 13 02:13:11.123614 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:13:11.123679 systemd[1]: Finished network-cleanup.service. Dec 13 02:13:11.123698 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:13:11.123716 systemd[1]: Starting systemd-journald.service... Dec 13 02:13:11.123739 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:13:11.123758 systemd[1]: Starting systemd-resolved.service... Dec 13 02:13:11.123777 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:13:11.123797 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:13:11.123816 kernel: audit: type=1130 audit(1734055991.111:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.123836 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 13 02:13:11.123862 systemd-journald[190]: Journal started Dec 13 02:13:11.123958 systemd-journald[190]: Runtime Journal (/run/log/journal/3c6d89fa40173f1e634cac3c81b657f5) is 8.0M, max 148.8M, 140.8M free. Dec 13 02:13:11.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.137648 kernel: audit: type=1130 audit(1734055991.132:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.137691 systemd[1]: Started systemd-journald.service. Dec 13 02:13:11.138480 systemd-modules-load[191]: Inserted module 'overlay' Dec 13 02:13:11.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.148361 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:13:11.161782 kernel: audit: type=1130 audit(1734055991.146:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.161824 kernel: audit: type=1130 audit(1734055991.153:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.156335 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:13:11.164103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:13:11.188601 systemd-resolved[192]: Positive Trust Anchors: Dec 13 02:13:11.191400 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:13:11.191721 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:13:11.196740 systemd-resolved[192]: Defaulting to hostname 'linux'. Dec 13 02:13:11.199477 systemd[1]: Started systemd-resolved.service. Dec 13 02:13:11.199760 systemd[1]: Reached target nss-lookup.target. Dec 13 02:13:11.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.204758 kernel: audit: type=1130 audit(1734055991.198:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:13:11.205363 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:13:11.213177 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:13:11.230244 kernel: audit: type=1130 audit(1734055991.211:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.230288 kernel: audit: type=1130 audit(1734055991.220:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.223344 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:13:11.234643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:13:11.243976 dracut-cmdline[205]: dracut-dracut-053 Dec 13 02:13:11.248023 kernel: Bridge firewalling registered Dec 13 02:13:11.248060 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:13:11.244492 systemd-modules-load[191]: Inserted module 'br_netfilter' Dec 13 02:13:11.276660 kernel: SCSI subsystem initialized Dec 13 02:13:11.297284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:13:11.297372 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:13:11.299818 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:13:11.305469 systemd-modules-load[191]: Inserted module 'dm_multipath' Dec 13 02:13:11.306810 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:13:11.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.321073 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:13:11.325654 kernel: audit: type=1130 audit(1734055991.318:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.336442 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:13:11.348816 kernel: audit: type=1130 audit(1734055991.339:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:13:11.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.355660 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:13:11.377675 kernel: iscsi: registered transport (tcp) Dec 13 02:13:11.404665 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:13:11.404748 kernel: QLogic iSCSI HBA Driver Dec 13 02:13:11.449590 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:13:11.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.451270 systemd[1]: Starting dracut-pre-udev.service... Dec 13 02:13:11.509689 kernel: raid6: avx2x4 gen() 18410 MB/s Dec 13 02:13:11.526668 kernel: raid6: avx2x4 xor() 7730 MB/s Dec 13 02:13:11.544668 kernel: raid6: avx2x2 gen() 18436 MB/s Dec 13 02:13:11.561666 kernel: raid6: avx2x2 xor() 18641 MB/s Dec 13 02:13:11.578668 kernel: raid6: avx2x1 gen() 14307 MB/s Dec 13 02:13:11.595667 kernel: raid6: avx2x1 xor() 16205 MB/s Dec 13 02:13:11.612668 kernel: raid6: sse2x4 gen() 11055 MB/s Dec 13 02:13:11.629668 kernel: raid6: sse2x4 xor() 6659 MB/s Dec 13 02:13:11.646666 kernel: raid6: sse2x2 gen() 12076 MB/s Dec 13 02:13:11.663669 kernel: raid6: sse2x2 xor() 7427 MB/s Dec 13 02:13:11.680664 kernel: raid6: sse2x1 gen() 10521 MB/s Dec 13 02:13:11.698853 kernel: raid6: sse2x1 xor() 5176 MB/s Dec 13 02:13:11.698891 kernel: raid6: using algorithm avx2x2 gen() 18436 MB/s Dec 13 02:13:11.698913 kernel: raid6: .... xor() 18641 MB/s, rmw enabled Dec 13 02:13:11.699686 kernel: raid6: using avx2x2 recovery algorithm Dec 13 02:13:11.714661 kernel: xor: automatically using best checksumming function avx Dec 13 02:13:11.820667 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:13:11.832594 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:13:11.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.831000 audit: BPF prog-id=7 op=LOAD Dec 13 02:13:11.832000 audit: BPF prog-id=8 op=LOAD Dec 13 02:13:11.834457 systemd[1]: Starting systemd-udevd.service... Dec 13 02:13:11.851688 systemd-udevd[388]: Using default interface naming scheme 'v252'. Dec 13 02:13:11.859148 systemd[1]: Started systemd-udevd.service. Dec 13 02:13:11.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.862927 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:13:11.883560 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation Dec 13 02:13:11.921590 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:13:11.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:11.923883 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:13:11.990026 systemd[1]: Finished systemd-udev-trigger.service. 
Dec 13 02:13:11.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:12.062785 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:13:12.077653 kernel: scsi host0: Virtio SCSI HBA Dec 13 02:13:12.092651 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Dec 13 02:13:12.148038 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:13:12.148126 kernel: AES CTR mode by8 optimization enabled Dec 13 02:13:12.203431 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Dec 13 02:13:12.219066 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Dec 13 02:13:12.219315 kernel: sd 0:0:1:0: [sda] Write Protect is off Dec 13 02:13:12.219547 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Dec 13 02:13:12.219776 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 02:13:12.219998 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:13:12.220023 kernel: GPT:17805311 != 25165823 Dec 13 02:13:12.220047 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:13:12.220069 kernel: GPT:17805311 != 25165823 Dec 13 02:13:12.220092 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:13:12.220113 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:13:12.220143 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Dec 13 02:13:12.260663 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (443) Dec 13 02:13:12.271119 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:13:12.274781 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:13:12.288428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:13:12.294695 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:13:12.308102 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:13:12.317052 systemd[1]: Starting disk-uuid.service... Dec 13 02:13:12.330847 disk-uuid[509]: Primary Header is updated. Dec 13 02:13:12.330847 disk-uuid[509]: Secondary Entries is updated. Dec 13 02:13:12.330847 disk-uuid[509]: Secondary Header is updated. Dec 13 02:13:12.342761 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:13:12.354655 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:13:12.379656 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:13:13.389656 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:13:13.390691 disk-uuid[510]: The operation has completed successfully. Dec 13 02:13:13.455785 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:13:13.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:13.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:13.455958 systemd[1]: Finished disk-uuid.service. Dec 13 02:13:13.474134 systemd[1]: Starting verity-setup.service... Dec 13 02:13:13.502653 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:13:13.576885 systemd[1]: Found device dev-mapper-usr.device. 
Dec 13 02:13:13.579435 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:13:13.591231 systemd[1]: Finished verity-setup.service. Dec 13 02:13:13.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:13.680680 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:13:13.681570 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:13:13.689015 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:13:13.731258 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:13:13.731300 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:13:13.731323 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:13:13.690025 systemd[1]: Starting ignition-setup.service... Dec 13 02:13:13.753798 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:13:13.698064 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:13:13.766242 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:13:13.782330 systemd[1]: Finished ignition-setup.service. Dec 13 02:13:13.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:13.783805 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:13:13.819365 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:13:13.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:13.827000 audit: BPF prog-id=9 op=LOAD Dec 13 02:13:13.829678 systemd[1]: Starting systemd-networkd.service... Dec 13 02:13:13.866717 systemd-networkd[684]: lo: Link UP Dec 13 02:13:13.866731 systemd-networkd[684]: lo: Gained carrier Dec 13 02:13:13.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:13.868013 systemd-networkd[684]: Enumeration completed Dec 13 02:13:13.868190 systemd[1]: Started systemd-networkd.service. Dec 13 02:13:13.868889 systemd-networkd[684]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:13:13.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:13.870996 systemd-networkd[684]: eth0: Link UP Dec 13 02:13:13.871005 systemd-networkd[684]: eth0: Gained carrier Dec 13 02:13:13.949778 iscsid[694]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:13:13.949778 iscsid[694]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 02:13:13.949778 iscsid[694]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. 
Dec 13 02:13:13.949778 iscsid[694]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:13:13.949778 iscsid[694]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:13:13.949778 iscsid[694]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:13:13.949778 iscsid[694]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:13:14.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:13.874122 systemd[1]: Reached target network.target. Dec 13 02:13:14.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.077835 ignition[654]: Ignition 2.14.0 Dec 13 02:13:13.880788 systemd-networkd[684]: eth0: DHCPv4 address 10.128.0.53/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 02:13:14.077849 ignition[654]: Stage: fetch-offline Dec 13 02:13:13.903415 systemd[1]: Starting iscsiuio.service... Dec 13 02:13:14.077933 ignition[654]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:13:13.915958 systemd[1]: Started iscsiuio.service. Dec 13 02:13:14.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.077976 ignition[654]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:13:13.925199 systemd[1]: Starting iscsid.service... Dec 13 02:13:14.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.099933 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:13:14.010172 systemd[1]: Started iscsid.service. Dec 13 02:13:14.100149 ignition[654]: parsed url from cmdline: "" Dec 13 02:13:14.032182 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:13:14.100155 ignition[654]: no config URL provided Dec 13 02:13:14.054199 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:13:14.100162 ignition[654]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:13:14.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.079061 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:13:14.100173 ignition[654]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:13:14.094942 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:13:14.100183 ignition[654]: failed to fetch config: resource requires networking Dec 13 02:13:14.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.100974 systemd[1]: Reached target remote-fs.target. 
Dec 13 02:13:14.100354 ignition[654]: Ignition finished successfully Dec 13 02:13:14.128988 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:13:14.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.195859 ignition[709]: Ignition 2.14.0 Dec 13 02:13:14.149234 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:13:14.195870 ignition[709]: Stage: fetch Dec 13 02:13:14.167158 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:13:14.196014 ignition[709]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:13:14.184118 systemd[1]: Starting ignition-fetch.service... Dec 13 02:13:14.196053 ignition[709]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:13:14.219223 unknown[709]: fetched base config from "system" Dec 13 02:13:14.203575 ignition[709]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:13:14.219234 unknown[709]: fetched base config from "system" Dec 13 02:13:14.203807 ignition[709]: parsed url from cmdline: "" Dec 13 02:13:14.219244 unknown[709]: fetched user config from "gcp" Dec 13 02:13:14.203816 ignition[709]: no config URL provided Dec 13 02:13:14.224354 systemd[1]: Finished ignition-fetch.service. Dec 13 02:13:14.203825 ignition[709]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:13:14.246218 systemd[1]: Starting ignition-kargs.service... Dec 13 02:13:14.203835 ignition[709]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:13:14.280145 systemd[1]: Finished ignition-kargs.service. Dec 13 02:13:14.203874 ignition[709]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Dec 13 02:13:14.297082 systemd[1]: Starting ignition-disks.service... Dec 13 02:13:14.211033 ignition[709]: GET result: OK Dec 13 02:13:14.321183 systemd[1]: Finished ignition-disks.service. Dec 13 02:13:14.211219 ignition[709]: parsing config with SHA512: 69f6c652c7cafe8ada89e54fd4dd96fbea95af55b1aa3a8225efeb45a1643a153203479adba115077a2783703246c0440733d4d1e90b940c2d972a62300b5b2f Dec 13 02:13:14.328181 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:13:14.220199 ignition[709]: fetch: fetch complete Dec 13 02:13:14.349820 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:13:14.220206 ignition[709]: fetch: fetch passed Dec 13 02:13:14.361805 systemd[1]: Reached target local-fs.target. Dec 13 02:13:14.220259 ignition[709]: Ignition finished successfully Dec 13 02:13:14.375840 systemd[1]: Reached target sysinit.target. Dec 13 02:13:14.259539 ignition[715]: Ignition 2.14.0 Dec 13 02:13:14.386875 systemd[1]: Reached target basic.target. Dec 13 02:13:14.259548 ignition[715]: Stage: kargs Dec 13 02:13:14.388225 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 02:13:14.259725 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:13:14.259759 ignition[715]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:13:14.267297 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:13:14.268707 ignition[715]: kargs: kargs passed Dec 13 02:13:14.268765 ignition[715]: Ignition finished successfully Dec 13 02:13:14.309127 ignition[721]: Ignition 2.14.0 Dec 13 02:13:14.309139 ignition[721]: Stage: disks Dec 13 02:13:14.309281 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:13:14.309312 ignition[721]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:13:14.317801 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:13:14.319247 ignition[721]: disks: disks passed Dec 13 02:13:14.319302 ignition[721]: Ignition finished successfully Dec 13 02:13:14.432984 systemd-fsck[729]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 02:13:14.626546 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:13:14.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.635966 systemd[1]: Mounting sysroot.mount... Dec 13 02:13:14.665813 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:13:14.662046 systemd[1]: Mounted sysroot.mount. Dec 13 02:13:14.673074 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:13:14.692330 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:13:14.703357 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:13:14.703416 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:13:14.703453 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:13:14.785808 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (735) Dec 13 02:13:14.785852 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:13:14.785876 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:13:14.785899 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:13:14.719224 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:13:14.811810 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:13:14.744740 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:13:14.821851 initrd-setup-root[742]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:13:14.775782 systemd[1]: Starting initrd-setup-root.service... Dec 13 02:13:14.839773 initrd-setup-root[764]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:13:14.810377 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:13:14.857823 initrd-setup-root[774]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:13:14.868781 initrd-setup-root[782]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:13:14.884440 systemd[1]: Finished initrd-setup-root.service. 
Dec 13 02:13:14.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.897154 systemd[1]: Starting ignition-mount.service... Dec 13 02:13:14.911934 systemd[1]: Starting sysroot-boot.service... Dec 13 02:13:14.927129 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:13:14.927281 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:13:14.950778 ignition[801]: INFO : Ignition 2.14.0 Dec 13 02:13:14.950778 ignition[801]: INFO : Stage: mount Dec 13 02:13:14.950778 ignition[801]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:13:14.950778 ignition[801]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:13:14.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:14.956356 systemd[1]: Finished sysroot-boot.service. Dec 13 02:13:15.020835 ignition[801]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:13:15.020835 ignition[801]: INFO : mount: mount passed Dec 13 02:13:15.020835 ignition[801]: INFO : Ignition finished successfully Dec 13 02:13:15.078782 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (810) Dec 13 02:13:15.078826 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:13:15.078850 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:13:15.078873 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:13:15.078895 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:13:14.965278 systemd[1]: Finished ignition-mount.service. Dec 13 02:13:14.983124 systemd[1]: Starting ignition-files.service... Dec 13 02:13:15.017999 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:13:15.088922 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Dec 13 02:13:15.125815 ignition[829]: INFO : Ignition 2.14.0 Dec 13 02:13:15.125815 ignition[829]: INFO : Stage: files Dec 13 02:13:15.125815 ignition[829]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:13:15.125815 ignition[829]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:13:15.125815 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:13:15.191799 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (829) Dec 13 02:13:15.139621 unknown[829]: wrote ssh authorized keys file for user: core Dec 13 02:13:15.200798 ignition[829]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:13:15.200798 ignition[829]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:13:15.200798 ignition[829]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:13:15.200798 ignition[829]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:13:15.200798 ignition[829]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:13:15.200798 ignition[829]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3927867563" Dec 13 02:13:15.200798 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3927867563": device or resource busy Dec 13 02:13:15.200798 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3927867563", trying btrfs: device or resource busy Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3927867563" Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3927867563" Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem3927867563" Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem3927867563" Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Dec 13 02:13:15.200798 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:13:15.452867 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:13:15.619221 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Dec 13 02:13:15.861866 
ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:13:15.878781 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:13:15.878781 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 02:13:15.880840 systemd-networkd[684]: eth0: Gained IPv6LL Dec 13 02:13:16.140091 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Dec 13 02:13:16.293106 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3275349871" Dec 13 02:13:16.308785 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3275349871": device or resource busy Dec 13 02:13:16.308785 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3275349871", trying btrfs: device or resource busy Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3275349871" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3275349871" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem3275349871" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem3275349871" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:13:16.308785 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(10): 
[started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2299066539" Dec 13 02:13:16.552780 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2299066539": device or resource busy Dec 13 02:13:16.552780 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2299066539", trying btrfs: device or resource busy Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2299066539" Dec 13 02:13:16.552780 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2299066539" Dec 13 02:13:16.311508 systemd[1]: mnt-oem3275349871.mount: Deactivated successfully. Dec 13 02:13:16.805804 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem2299066539" Dec 13 02:13:16.805804 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem2299066539" Dec 13 02:13:16.805804 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Dec 13 02:13:16.805804 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:13:16.805804 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 02:13:16.805804 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Dec 13 02:13:16.334841 systemd[1]: mnt-oem2299066539.mount: Deactivated successfully. 
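The repeated CRITICAL/ERROR pairs in the files stage are expected rather than fatal: each OEM write first attempts an ext4 mount of `/dev/disk/by-label/OEM`, gets "device or resource busy", then retries and succeeds with btrfs (the kernel lines earlier identify sda6, labeled OEM, as a btrfs filesystem), writes the file, and unmounts. The following is a rough illustrative sketch of that try-ext4-then-btrfs fallback, not Ignition's own Go code; it assumes root privileges and that the labeled device exists.

```python
# Sketch only: mirror the ext4-then-btrfs mount fallback visible in the
# Ignition files-stage log. Requires root; device path taken from the log.
import subprocess
import tempfile

DEVICE = "/dev/disk/by-label/OEM"

def mount_oem() -> str:
    mountpoint = tempfile.mkdtemp(prefix="oem")
    for fstype in ("ext4", "btrfs"):
        result = subprocess.run(
            ["mount", "-t", fstype, DEVICE, mountpoint],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return mountpoint  # caller unmounts later with: umount <mountpoint>
        print(f"mounting {DEVICE} as {fstype} failed: {result.stderr.strip()}")
    raise RuntimeError(f"could not mount {DEVICE}")

if __name__ == "__main__":
    mp = mount_oem()
    subprocess.run(["umount", mp], check=True)
```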
Dec 13 02:13:17.216781 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:13:17.216781 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:13:17.252887 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2902642301" Dec 13 02:13:17.252887 ignition[829]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2902642301": device or resource busy Dec 13 02:13:17.252887 ignition[829]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2902642301", trying btrfs: device or resource busy Dec 13 02:13:17.252887 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2902642301" Dec 13 02:13:17.252887 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2902642301" Dec 13 02:13:17.252887 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem2902642301" Dec 13 02:13:17.252887 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem2902642301" Dec 13 02:13:17.252887 ignition[829]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: op(1c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: op(1c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: op(1d): [started] processing unit "oem-gce.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: op(1d): [finished] processing unit "oem-gce.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: op(1e): [started] processing unit "oem-gce-enable-oslogin.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: op(1e): [finished] processing unit "oem-gce-enable-oslogin.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: op(1f): [started] processing unit "prepare-helm.service" Dec 13 02:13:17.252887 ignition[829]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:13:17.738842 kernel: kauditd_printk_skb: 26 callbacks suppressed Dec 13 02:13:17.738898 kernel: audit: type=1130 audit(1734055997.260:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.738924 kernel: audit: type=1130 audit(1734055997.351:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:13:17.738948 kernel: audit: type=1130 audit(1734055997.400:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.738970 kernel: audit: type=1131 audit(1734055997.400:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.738989 kernel: audit: type=1130 audit(1734055997.528:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.739004 kernel: audit: type=1131 audit(1734055997.550:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.739023 kernel: audit: type=1130 audit(1734055997.670:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.236934 systemd[1]: mnt-oem2902642301.mount: Deactivated successfully. 
Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service" Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(21): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(21): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce.service" Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce.service" Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:13:17.754851 ignition[829]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:13:17.754851 ignition[829]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:13:17.754851 ignition[829]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:13:17.754851 ignition[829]: INFO : files: files passed Dec 13 02:13:17.754851 ignition[829]: INFO : Ignition finished successfully Dec 13 02:13:18.010838 kernel: audit: type=1131 audit(1734055997.815:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.251393 systemd[1]: Finished ignition-files.service. Dec 13 02:13:17.272349 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:13:18.043825 initrd-setup-root-after-ignition[852]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:13:17.311812 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:13:17.312998 systemd[1]: Starting ignition-quench.service... Dec 13 02:13:17.329299 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:13:18.140968 kernel: audit: type=1131 audit(1734055998.111:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.353386 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:13:17.353532 systemd[1]: Finished ignition-quench.service. 
Dec 13 02:13:18.191906 kernel: audit: type=1131 audit(1734055998.163:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.402210 systemd[1]: Reached target ignition-complete.target. Dec 13 02:13:18.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.488972 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:13:18.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.516785 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:13:18.234829 ignition[867]: INFO : Ignition 2.14.0 Dec 13 02:13:18.234829 ignition[867]: INFO : Stage: umount Dec 13 02:13:18.234829 ignition[867]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:13:18.234829 ignition[867]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Dec 13 02:13:18.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.516949 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:13:18.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.323969 ignition[867]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Dec 13 02:13:18.323969 ignition[867]: INFO : umount: umount passed Dec 13 02:13:18.323969 ignition[867]: INFO : Ignition finished successfully Dec 13 02:13:18.330000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.551458 systemd[1]: Reached target initrd-fs.target. Dec 13 02:13:18.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.586982 systemd[1]: Reached target initrd.target. 
Dec 13 02:13:18.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.622937 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:13:17.624173 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:13:18.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.651125 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:13:17.673298 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:13:17.730779 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:13:17.747119 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:13:17.763172 systemd[1]: Stopped target timers.target. Dec 13 02:13:17.780185 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:13:17.780368 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:13:18.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.817390 systemd[1]: Stopped target initrd.target. Dec 13 02:13:18.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.869144 systemd[1]: Stopped target basic.target. Dec 13 02:13:17.882146 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:13:17.921078 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:13:18.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.953085 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:13:18.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:17.990152 systemd[1]: Stopped target remote-fs.target. Dec 13 02:13:18.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.618000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:13:17.997140 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:13:18.019131 systemd[1]: Stopped target sysinit.target. Dec 13 02:13:18.034093 systemd[1]: Stopped target local-fs.target. Dec 13 02:13:18.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.052103 systemd[1]: Stopped target local-fs-pre.target. 
Dec 13 02:13:18.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.074084 systemd[1]: Stopped target swap.target. Dec 13 02:13:18.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.098046 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:13:18.098249 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:13:18.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.113306 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:13:18.148968 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:13:18.149173 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:13:18.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.165130 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:13:18.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.165338 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:13:18.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.202086 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:13:18.202265 systemd[1]: Stopped ignition-files.service. Dec 13 02:13:18.219528 systemd[1]: Stopping ignition-mount.service... Dec 13 02:13:18.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.241816 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:13:18.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.242066 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:13:18.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:18.258335 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:13:18.271788 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:13:18.272202 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:13:18.283204 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Dec 13 02:13:18.283383 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:13:18.319074 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:13:18.954721 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). Dec 13 02:13:18.320269 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:13:18.963861 iscsid[694]: iscsid shutting down. Dec 13 02:13:18.320386 systemd[1]: Stopped ignition-mount.service. Dec 13 02:13:18.332397 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:13:18.332511 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:13:18.350528 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:13:18.350714 systemd[1]: Stopped ignition-disks.service. Dec 13 02:13:18.364892 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:13:18.364980 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:13:18.380922 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:13:18.381003 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:13:18.395885 systemd[1]: Stopped target network.target. Dec 13 02:13:18.408819 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:13:18.408933 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:13:18.427873 systemd[1]: Stopped target paths.target. Dec 13 02:13:18.440783 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:13:18.444778 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:13:18.455795 systemd[1]: Stopped target slices.target. Dec 13 02:13:18.468799 systemd[1]: Stopped target sockets.target. Dec 13 02:13:18.481900 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:13:18.481958 systemd[1]: Closed iscsid.socket. Dec 13 02:13:18.496885 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:13:18.496949 systemd[1]: Closed iscsiuio.socket. Dec 13 02:13:18.510857 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:13:18.510954 systemd[1]: Stopped ignition-setup.service. Dec 13 02:13:18.526903 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:13:18.526983 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:13:18.542066 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:13:18.545680 systemd-networkd[684]: eth0: DHCPv6 lease lost Dec 13 02:13:18.556030 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:13:18.571274 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:13:18.571398 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:13:18.587451 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:13:18.587590 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:13:18.603563 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:13:18.603700 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:13:18.621030 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:13:18.621073 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:13:18.635953 systemd[1]: Stopping network-cleanup.service... Dec 13 02:13:18.649794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:13:18.649921 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:13:18.664977 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:13:18.665053 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:13:18.680053 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Dec 13 02:13:18.680119 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:13:18.695055 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:13:18.710661 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:13:18.711337 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:13:18.711496 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:13:18.719583 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:13:18.719689 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:13:18.738892 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:13:18.738957 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:13:18.754871 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:13:18.754973 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:13:18.770950 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:13:18.771024 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:13:18.785936 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:13:18.786014 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:13:18.804994 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:13:18.828766 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:13:18.828986 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:13:18.844546 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:13:18.844692 systemd[1]: Stopped network-cleanup.service. Dec 13 02:13:18.859302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:13:18.859451 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:13:18.877218 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:13:18.893985 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:13:18.923129 systemd[1]: Switching root. Dec 13 02:13:18.973371 systemd-journald[190]: Journal stopped Dec 13 02:13:23.664655 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:13:23.664783 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 02:13:23.664817 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:13:23.664842 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:13:23.664866 kernel: SELinux: policy capability open_perms=1 Dec 13 02:13:23.664896 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:13:23.664920 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:13:23.664948 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:13:23.664972 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:13:23.664994 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:13:23.665018 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:13:23.665043 systemd[1]: Successfully loaded SELinux policy in 109.921ms. Dec 13 02:13:23.665096 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.521ms. Dec 13 02:13:23.665134 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:13:23.665161 systemd[1]: Detected virtualization kvm. 
Dec 13 02:13:23.665185 systemd[1]: Detected architecture x86-64. Dec 13 02:13:23.665214 systemd[1]: Detected first boot. Dec 13 02:13:23.665240 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:13:23.665267 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 02:13:23.665292 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:13:23.665329 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:13:23.665356 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:13:23.665385 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:13:23.665420 kernel: kauditd_printk_skb: 46 callbacks suppressed Dec 13 02:13:23.665443 kernel: audit: type=1334 audit(1734056002.695:86): prog-id=12 op=LOAD Dec 13 02:13:23.665602 kernel: audit: type=1334 audit(1734056002.695:87): prog-id=3 op=UNLOAD Dec 13 02:13:23.665644 kernel: audit: type=1334 audit(1734056002.707:88): prog-id=13 op=LOAD Dec 13 02:13:23.665667 kernel: audit: type=1334 audit(1734056002.721:89): prog-id=14 op=LOAD Dec 13 02:13:23.665687 kernel: audit: type=1334 audit(1734056002.721:90): prog-id=4 op=UNLOAD Dec 13 02:13:23.665709 kernel: audit: type=1334 audit(1734056002.721:91): prog-id=5 op=UNLOAD Dec 13 02:13:23.665732 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:13:23.665763 kernel: audit: type=1131 audit(1734056002.721:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.665787 systemd[1]: Stopped iscsiuio.service. Dec 13 02:13:23.665810 kernel: audit: type=1334 audit(1734056002.776:93): prog-id=12 op=UNLOAD Dec 13 02:13:23.665834 kernel: audit: type=1131 audit(1734056002.790:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.665856 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:13:23.665879 systemd[1]: Stopped iscsid.service. Dec 13 02:13:23.665906 kernel: audit: type=1131 audit(1734056002.830:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.665931 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:13:23.665960 systemd[1]: Stopped initrd-switch-root.service. Dec 13 02:13:23.665987 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:13:23.666013 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:13:23.666038 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:13:23.666063 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 02:13:23.666089 systemd[1]: Created slice system-getty.slice. Dec 13 02:13:23.666114 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:13:23.666156 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Dec 13 02:13:23.666181 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:13:23.666206 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:13:23.666233 systemd[1]: Created slice user.slice. Dec 13 02:13:23.666257 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:13:23.666281 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:13:23.666303 systemd[1]: Set up automount boot.automount. Dec 13 02:13:23.666325 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:13:23.666348 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 02:13:23.666375 systemd[1]: Stopped target initrd-fs.target. Dec 13 02:13:23.666399 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 02:13:23.666422 systemd[1]: Reached target integritysetup.target. Dec 13 02:13:23.666452 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:13:23.666475 systemd[1]: Reached target remote-fs.target. Dec 13 02:13:23.666499 systemd[1]: Reached target slices.target. Dec 13 02:13:23.666524 systemd[1]: Reached target swap.target. Dec 13 02:13:23.666546 systemd[1]: Reached target torcx.target. Dec 13 02:13:23.666569 systemd[1]: Reached target veritysetup.target. Dec 13 02:13:23.666592 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:13:23.666619 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:13:23.666659 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:13:23.666682 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:13:23.666705 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:13:23.666732 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:13:23.666756 systemd[1]: Mounting dev-hugepages.mount... Dec 13 02:13:23.666779 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:13:23.666802 systemd[1]: Mounting media.mount... Dec 13 02:13:23.666826 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:13:23.666938 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:13:23.666962 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:13:23.666986 systemd[1]: Mounting tmp.mount... Dec 13 02:13:23.667010 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:13:23.667033 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:13:23.667057 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:13:23.667081 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:13:23.667105 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:13:23.667147 systemd[1]: Starting modprobe@drm.service... Dec 13 02:13:23.667175 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:13:23.667199 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:13:23.667221 systemd[1]: Starting modprobe@loop.service... Dec 13 02:13:23.667245 kernel: fuse: init (API version 7.34) Dec 13 02:13:23.667269 kernel: loop: module loaded Dec 13 02:13:23.667294 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:13:23.667317 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:13:23.667341 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 02:13:23.667364 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:13:23.667392 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:13:23.667416 systemd[1]: Stopped systemd-journald.service. 
Dec 13 02:13:23.667440 systemd[1]: Starting systemd-journald.service... Dec 13 02:13:23.667465 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:13:23.667488 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:13:23.667519 systemd-journald[991]: Journal started Dec 13 02:13:23.667646 systemd-journald[991]: Runtime Journal (/run/log/journal/3c6d89fa40173f1e634cac3c81b657f5) is 8.0M, max 148.8M, 140.8M free. Dec 13 02:13:18.972000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:13:19.252000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:13:19.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:13:19.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:13:19.401000 audit: BPF prog-id=10 op=LOAD Dec 13 02:13:19.401000 audit: BPF prog-id=10 op=UNLOAD Dec 13 02:13:19.401000 audit: BPF prog-id=11 op=LOAD Dec 13 02:13:19.401000 audit: BPF prog-id=11 op=UNLOAD Dec 13 02:13:19.556000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:13:19.556000 audit[900]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c000024802 a1=c00002aae0 a2=c000028d00 a3=32 items=0 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:13:19.556000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:13:19.567000 audit[900]: AVC avc: denied { associate } for pid=900 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 02:13:19.567000 audit[900]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000248d9 a2=1ed a3=0 items=2 ppid=883 pid=900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:13:19.567000 audit: CWD cwd="/" Dec 13 02:13:19.567000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:19.567000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:19.567000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:13:22.695000 audit: BPF prog-id=12 op=LOAD Dec 13 02:13:22.695000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:13:22.707000 audit: BPF prog-id=13 op=LOAD Dec 13 02:13:22.721000 audit: BPF prog-id=14 op=LOAD Dec 13 02:13:22.721000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:13:22.721000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:13:22.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:22.776000 audit: BPF prog-id=12 op=UNLOAD Dec 13 02:13:22.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:22.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:22.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:22.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:13:23.620000 audit: BPF prog-id=15 op=LOAD Dec 13 02:13:23.620000 audit: BPF prog-id=16 op=LOAD Dec 13 02:13:23.620000 audit: BPF prog-id=17 op=LOAD Dec 13 02:13:23.620000 audit: BPF prog-id=13 op=UNLOAD Dec 13 02:13:23.620000 audit: BPF prog-id=14 op=UNLOAD Dec 13 02:13:23.660000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:13:23.660000 audit[991]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffca3ce2f40 a2=4000 a3=7ffca3ce2fdc items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:13:23.660000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:13:22.694114 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:13:19.551590 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:13:22.726294 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:13:19.553199 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:13:19.553240 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:13:19.553301 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 02:13:19.553322 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 02:13:19.553382 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 02:13:19.553407 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 02:13:19.553760 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 02:13:19.553817 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:13:19.553843 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:13:19.556161 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 02:13:19.556251 /usr/lib/systemd/system-generators/torcx-generator[900]: 
time="2024-12-13T02:13:19Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 02:13:19.556298 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 02:13:19.556329 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 02:13:19.556361 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 02:13:19.556416 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:19Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 02:13:22.088143 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:22Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:13:22.088456 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:22Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:13:22.088601 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:22Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:13:22.088847 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:22Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:13:22.088910 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:22Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 02:13:22.088984 /usr/lib/systemd/system-generators/torcx-generator[900]: time="2024-12-13T02:13:22Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 02:13:23.678688 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:13:23.692661 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:13:23.706651 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:13:23.712675 systemd[1]: Stopped verity-setup.service. 
Dec 13 02:13:23.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.731662 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:13:23.740659 systemd[1]: Started systemd-journald.service. Dec 13 02:13:23.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.751437 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:13:23.758998 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:13:23.766010 systemd[1]: Mounted media.mount. Dec 13 02:13:23.774032 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:13:23.783993 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:13:23.792963 systemd[1]: Mounted tmp.mount. Dec 13 02:13:23.800115 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:13:23.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.809185 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:13:23.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.818182 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:13:23.818410 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:13:23.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.827302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:13:23.827529 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:13:23.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.836264 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:13:23.836498 systemd[1]: Finished modprobe@drm.service. Dec 13 02:13:23.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:13:23.845229 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:13:23.845451 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:13:23.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.854188 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:13:23.854401 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:13:23.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.863216 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:13:23.863444 systemd[1]: Finished modprobe@loop.service. Dec 13 02:13:23.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.872197 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:13:23.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.881198 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:13:23.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.890187 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:13:23.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.899183 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:13:23.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.908527 systemd[1]: Reached target network-pre.target. Dec 13 02:13:23.918275 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:13:23.928249 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:13:23.935791 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Dec 13 02:13:23.938505 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:13:23.947766 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:13:23.956827 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:13:23.957424 systemd-journald[991]: Time spent on flushing to /var/log/journal/3c6d89fa40173f1e634cac3c81b657f5 is 68.362ms for 1152 entries. Dec 13 02:13:23.957424 systemd-journald[991]: System Journal (/var/log/journal/3c6d89fa40173f1e634cac3c81b657f5) is 8.0M, max 584.8M, 576.8M free. Dec 13 02:13:24.051441 systemd-journald[991]: Received client request to flush runtime journal. Dec 13 02:13:24.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:23.958673 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:13:23.973913 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:13:23.975865 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:13:23.984667 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:13:23.994537 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:13:24.053014 udevadm[1005]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:13:24.005177 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:13:24.013982 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:13:24.023135 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:13:24.035679 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:13:24.049573 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:13:24.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:24.058421 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:13:24.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:24.067285 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:13:24.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:24.677351 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:13:24.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:24.685000 audit: BPF prog-id=18 op=LOAD Dec 13 02:13:24.685000 audit: BPF prog-id=19 op=LOAD Dec 13 02:13:24.685000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:13:24.685000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:13:24.687833 systemd[1]: Starting systemd-udevd.service... Dec 13 02:13:24.711566 systemd-udevd[1008]: Using default interface naming scheme 'v252'. Dec 13 02:13:24.758061 systemd[1]: Started systemd-udevd.service. 
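The journald report above states the flush cost directly: 68.362 ms to flush 1152 entries to /var/log/journal. A quick sanity check of the per-entry cost, plain arithmetic on the figures in the log:

```python
# Figures taken from the systemd-journald flush report above.
flush_ms, entries = 68.362, 1152
print(f"{flush_ms * 1000 / entries:.1f} us per journal entry")  # ~59.3 us
```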
Dec 13 02:13:24.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:24.768000 audit: BPF prog-id=20 op=LOAD Dec 13 02:13:24.771132 systemd[1]: Starting systemd-networkd.service... Dec 13 02:13:24.783000 audit: BPF prog-id=21 op=LOAD Dec 13 02:13:24.783000 audit: BPF prog-id=22 op=LOAD Dec 13 02:13:24.783000 audit: BPF prog-id=23 op=LOAD Dec 13 02:13:24.786705 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:13:24.839565 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 02:13:24.857586 systemd[1]: Started systemd-userdbd.service. Dec 13 02:13:24.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:24.971671 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:13:24.996396 systemd-networkd[1023]: lo: Link UP Dec 13 02:13:24.996415 systemd-networkd[1023]: lo: Gained carrier Dec 13 02:13:24.997233 systemd-networkd[1023]: Enumeration completed Dec 13 02:13:24.997394 systemd[1]: Started systemd-networkd.service. Dec 13 02:13:24.997678 systemd-networkd[1023]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:13:25.000093 systemd-networkd[1023]: eth0: Link UP Dec 13 02:13:25.000122 systemd-networkd[1023]: eth0: Gained carrier Dec 13 02:13:25.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:13:25.009829 systemd-networkd[1023]: eth0: DHCPv4 address 10.128.0.53/32, gateway 10.128.0.1 acquired from 169.254.169.254 Dec 13 02:13:25.023687 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:13:25.044676 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 02:13:25.048000 audit[1020]: AVC avc: denied { confidentiality } for pid=1020 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:13:25.060667 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:13:25.048000 audit[1020]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55dc13becaa0 a1=337fc a2=7efda0ed7bc5 a3=5 items=110 ppid=1008 pid=1020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:13:25.048000 audit: CWD cwd="/" Dec 13 02:13:25.048000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=1 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=2 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=3 name=(null) inode=13865 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=4 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=5 name=(null) inode=13866 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=6 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=7 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=8 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=9 name=(null) inode=13868 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=10 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=11 name=(null) inode=13869 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=12 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=13 name=(null) inode=13870 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=14 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=15 name=(null) inode=13871 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=16 name=(null) inode=13867 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=17 name=(null) inode=13872 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=18 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=19 name=(null) inode=13873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=20 name=(null) inode=13873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=21 name=(null) inode=13874 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=22 name=(null) inode=13873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=23 name=(null) inode=13875 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=24 name=(null) inode=13873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=25 name=(null) inode=13876 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=26 name=(null) inode=13873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=27 name=(null) inode=13877 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 
audit: PATH item=28 name=(null) inode=13873 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=29 name=(null) inode=13878 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=30 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=31 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=32 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=33 name=(null) inode=13880 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=34 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=35 name=(null) inode=13881 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=36 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=37 name=(null) inode=13882 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=38 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=39 name=(null) inode=13883 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=40 name=(null) inode=13879 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=41 name=(null) inode=13884 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=42 name=(null) inode=13864 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=43 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=44 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=45 name=(null) inode=13886 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=46 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=47 name=(null) inode=13887 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=48 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=49 name=(null) inode=13888 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=50 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=51 name=(null) inode=13889 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=52 name=(null) inode=13885 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=53 name=(null) inode=13890 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=55 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=56 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=57 name=(null) inode=13892 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=58 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=59 name=(null) inode=13893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=60 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=61 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=62 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=63 name=(null) inode=13895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=64 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=65 name=(null) inode=13896 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=66 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=67 name=(null) inode=13897 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=68 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=69 name=(null) inode=13898 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=70 name=(null) inode=13894 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=71 name=(null) inode=13899 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=72 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=73 name=(null) inode=13900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=74 name=(null) inode=13900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=75 name=(null) inode=13901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=76 name=(null) inode=13900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 
audit: PATH item=77 name=(null) inode=13902 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=78 name=(null) inode=13900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=79 name=(null) inode=13903 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=80 name=(null) inode=13900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=81 name=(null) inode=13904 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=82 name=(null) inode=13900 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=83 name=(null) inode=13905 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=84 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=85 name=(null) inode=13906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=86 name=(null) inode=13906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=87 name=(null) inode=13907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=88 name=(null) inode=13906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=89 name=(null) inode=13908 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=90 name=(null) inode=13906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=91 name=(null) inode=13909 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=92 name=(null) inode=13906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=93 name=(null) inode=13910 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=94 name=(null) inode=13906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=95 name=(null) inode=13911 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.071754 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:13:25.048000 audit: PATH item=96 name=(null) inode=13891 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=97 name=(null) inode=13912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=98 name=(null) inode=13912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=99 name=(null) inode=13913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=100 name=(null) inode=13912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=101 name=(null) inode=13914 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=102 name=(null) inode=13912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=103 name=(null) inode=13915 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=104 name=(null) inode=13912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=105 name=(null) inode=13916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=106 name=(null) inode=13912 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=107 name=(null) inode=13917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PATH item=109 name=(null) inode=13918 dev=00:07 mode=040755 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:13:25.048000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:13:25.098229 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:13:25.098369 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1025) Dec 13 02:13:25.123664 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Dec 13 02:13:25.160815 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:13:25.181866 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:13:25.191207 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:13:25.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.201507 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:13:25.232225 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:13:25.264052 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:13:25.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.273008 systemd[1]: Reached target cryptsetup.target. Dec 13 02:13:25.283353 systemd[1]: Starting lvm2-activation.service... Dec 13 02:13:25.289748 lvm[1043]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:13:25.318032 systemd[1]: Finished lvm2-activation.service. Dec 13 02:13:25.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.327033 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:13:25.335797 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:13:25.335850 systemd[1]: Reached target local-fs.target. Dec 13 02:13:25.344823 systemd[1]: Reached target machines.target. Dec 13 02:13:25.355441 systemd[1]: Starting ldconfig.service... Dec 13 02:13:25.363739 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:13:25.363839 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:13:25.365700 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:13:25.374608 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:13:25.386548 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:13:25.388712 systemd[1]: Starting systemd-sysext.service... Dec 13 02:13:25.389379 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1045 (bootctl) Dec 13 02:13:25.393118 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:13:25.415210 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
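The fsck and device units above use systemd's escaped path syntax: dev-disk-by\x2dlabel-OEM.device stands for the block device /dev/disk/by-label/OEM, with '/' mapped to '-' and a literal '-' escaped as \x2d. A simplified sketch of that escaping (roughly what `systemd-escape --path` produces; not systemd's actual implementation, and edge cases such as leading dots are ignored):

```python
# Simplified sketch of how systemd turns a device path such as
# /dev/disk/by-label/OEM into the unit name dev-disk-by\x2dlabel-OEM.device.
# Approximation of `systemd-escape --path`; not systemd's own code.
def escape_path_for_unit(path: str, suffix: str = ".device") -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))  # e.g. '-' -> \x2d
    return "".join(out) + suffix

print(escape_path_for_unit("/dev/disk/by-label/OEM"))
# -> dev-disk-by\x2dlabel-OEM.device
```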
Dec 13 02:13:25.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.420550 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:13:25.432735 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:13:25.433003 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:13:25.454918 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 02:13:25.543374 systemd-fsck[1054]: fsck.fat 4.2 (2021-01-31) Dec 13 02:13:25.543374 systemd-fsck[1054]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 02:13:25.546463 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:13:25.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.558560 systemd[1]: Mounting boot.mount... Dec 13 02:13:25.585280 systemd[1]: Mounted boot.mount. Dec 13 02:13:25.609403 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:13:25.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.787224 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:13:25.822667 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 02:13:25.845849 (sd-sysext)[1060]: Using extensions 'kubernetes'. Dec 13 02:13:25.846531 (sd-sysext)[1060]: Merged extensions into '/usr'. Dec 13 02:13:25.873099 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:13:25.875883 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:13:25.883090 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:13:25.886138 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:13:25.894826 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:13:25.903952 systemd[1]: Starting modprobe@loop.service... Dec 13 02:13:25.910935 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:13:25.911202 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:13:25.911416 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:13:25.915507 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:13:25.924469 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:13:25.924711 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:13:25.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:13:25.934619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:13:25.934868 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:13:25.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.946031 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:13:25.947189 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:13:25.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.957588 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:13:25.957822 systemd[1]: Finished modprobe@loop.service. Dec 13 02:13:25.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.966593 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:13:25.966842 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:13:25.968836 systemd[1]: Finished systemd-sysext.service. Dec 13 02:13:25.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:25.980750 systemd[1]: Starting ensure-sysext.service... Dec 13 02:13:25.989586 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:13:26.001496 systemd[1]: Reloading. Dec 13 02:13:26.026219 ldconfig[1044]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:13:26.033901 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:13:26.042815 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:13:26.054339 systemd-tmpfiles[1068]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Dec 13 02:13:26.107285 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2024-12-13T02:13:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:13:26.109764 /usr/lib/systemd/system-generators/torcx-generator[1087]: time="2024-12-13T02:13:26Z" level=info msg="torcx already run" Dec 13 02:13:26.271068 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:13:26.271104 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:13:26.311174 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:13:26.392000 audit: BPF prog-id=24 op=LOAD Dec 13 02:13:26.392000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:13:26.393000 audit: BPF prog-id=25 op=LOAD Dec 13 02:13:26.393000 audit: BPF prog-id=26 op=LOAD Dec 13 02:13:26.393000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:13:26.393000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:13:26.395000 audit: BPF prog-id=27 op=LOAD Dec 13 02:13:26.395000 audit: BPF prog-id=15 op=UNLOAD Dec 13 02:13:26.395000 audit: BPF prog-id=28 op=LOAD Dec 13 02:13:26.395000 audit: BPF prog-id=29 op=LOAD Dec 13 02:13:26.395000 audit: BPF prog-id=16 op=UNLOAD Dec 13 02:13:26.395000 audit: BPF prog-id=17 op=UNLOAD Dec 13 02:13:26.397000 audit: BPF prog-id=30 op=LOAD Dec 13 02:13:26.397000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:13:26.398000 audit: BPF prog-id=31 op=LOAD Dec 13 02:13:26.398000 audit: BPF prog-id=32 op=LOAD Dec 13 02:13:26.398000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:13:26.398000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:13:26.403985 systemd[1]: Finished ldconfig.service. Dec 13 02:13:26.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:26.412708 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:13:26.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:26.426862 systemd[1]: Starting audit-rules.service... Dec 13 02:13:26.435544 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:13:26.446088 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:13:26.457254 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:13:26.465000 audit: BPF prog-id=33 op=LOAD Dec 13 02:13:26.468930 systemd[1]: Starting systemd-resolved.service... Dec 13 02:13:26.475000 audit: BPF prog-id=34 op=LOAD Dec 13 02:13:26.479064 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:13:26.488185 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:13:26.500867 systemd[1]: Finished clean-ca-certificates.service. 
Dec 13 02:13:26.501000 audit[1156]: SYSTEM_BOOT pid=1156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:13:26.510560 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:13:26.510843 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:13:26.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:26.520721 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:13:26.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:26.518000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:13:26.521641 augenrules[1162]: No rules Dec 13 02:13:26.519000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:13:26.519000 audit[1162]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff47130a90 a2=420 a3=0 items=0 ppid=1132 pid=1162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:13:26.519000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:13:26.531388 systemd[1]: Finished audit-rules.service. Dec 13 02:13:26.547162 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:13:26.547707 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:13:26.551144 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:13:26.560018 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:13:26.568960 systemd[1]: Starting modprobe@loop.service... Dec 13 02:13:26.577987 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:13:26.586869 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:13:26.587159 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:13:26.589780 systemd[1]: Starting systemd-update-done.service... Dec 13 02:13:26.590871 enable-oslogin[1170]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:13:26.596806 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:13:26.597041 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:13:26.599781 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:13:26.608652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
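The audit PROCTITLE values in this log (the torcx-generator record at the top of the excerpt and the auditctl record above) are hex-encoded command lines whose arguments are separated by NUL bytes. Decoding the auditctl record is a one-liner:

```python
# The audit PROCTITLE value above is a hex-encoded command line whose
# arguments are separated by NUL bytes.  Decoding the auditctl record:
raw = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
argv = bytes.fromhex(raw).split(b"\x00")
print([a.decode() for a in argv])
# -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```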
Dec 13 02:13:26.608879 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:13:26.618579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:13:26.618808 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:13:26.627560 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:13:26.627789 systemd[1]: Finished modprobe@loop.service. Dec 13 02:13:26.636681 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:13:26.636939 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:13:26.646559 systemd[1]: Finished systemd-update-done.service. Dec 13 02:13:26.657109 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:13:26.657319 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:13:26.660182 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:13:26.660612 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:13:26.665040 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:13:26.671095 systemd-resolved[1149]: Positive Trust Anchors: Dec 13 02:13:26.671552 systemd-resolved[1149]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:13:26.671749 systemd-resolved[1149]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:13:26.673817 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:13:26.677049 systemd-timesyncd[1152]: Contacted time server 169.254.169.254:123 (169.254.169.254). Dec 13 02:13:26.677109 systemd-timesyncd[1152]: Initial clock synchronization to Fri 2024-12-13 02:13:26.397695 UTC. Dec 13 02:13:26.682751 systemd[1]: Starting modprobe@loop.service... Dec 13 02:13:26.691671 systemd[1]: Starting oem-gce-enable-oslogin.service... Dec 13 02:13:26.697228 enable-oslogin[1176]: /etc/pam.d/sshd already exists. Not enabling OS Login Dec 13 02:13:26.699854 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:13:26.700065 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:13:26.700171 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:13:26.700258 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:13:26.702025 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:13:26.710185 systemd-resolved[1149]: Defaulting to hostname 'linux'. Dec 13 02:13:26.712938 systemd[1]: Started systemd-resolved.service. Dec 13 02:13:26.722396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:13:26.722666 systemd[1]: Finished modprobe@dm_mod.service. 
Dec 13 02:13:26.731383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:13:26.731608 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:13:26.740385 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:13:26.740618 systemd[1]: Finished modprobe@loop.service. Dec 13 02:13:26.749354 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Dec 13 02:13:26.749597 systemd[1]: Finished oem-gce-enable-oslogin.service. Dec 13 02:13:26.758575 systemd[1]: Reached target network.target. Dec 13 02:13:26.760789 systemd-networkd[1023]: eth0: Gained IPv6LL Dec 13 02:13:26.766942 systemd[1]: Reached target nss-lookup.target. Dec 13 02:13:26.775963 systemd[1]: Reached target time-set.target. Dec 13 02:13:26.784955 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:13:26.785191 systemd[1]: Reached target sysinit.target. Dec 13 02:13:26.794088 systemd[1]: Started motdgen.path. Dec 13 02:13:26.801080 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:13:26.811300 systemd[1]: Started logrotate.timer. Dec 13 02:13:26.819284 systemd[1]: Started mdadm.timer. Dec 13 02:13:26.827038 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:13:26.835954 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:13:26.836277 systemd[1]: Reached target paths.target. Dec 13 02:13:26.843955 systemd[1]: Reached target timers.target. Dec 13 02:13:26.851657 systemd[1]: Listening on dbus.socket. Dec 13 02:13:26.860570 systemd[1]: Starting docker.socket... Dec 13 02:13:26.872011 systemd[1]: Listening on sshd.socket. Dec 13 02:13:26.879113 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:13:26.879352 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:13:26.882136 systemd[1]: Listening on docker.socket. Dec 13 02:13:26.891558 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:13:26.891889 systemd[1]: Reached target sockets.target. Dec 13 02:13:26.901023 systemd[1]: Reached target basic.target. Dec 13 02:13:26.907989 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:13:26.908280 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:13:26.910348 systemd[1]: Starting containerd.service... Dec 13 02:13:26.919604 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:13:26.932775 systemd[1]: Starting dbus.service... Dec 13 02:13:26.940730 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:13:26.949977 systemd[1]: Starting extend-filesystems.service... Dec 13 02:13:26.955341 jq[1182]: false Dec 13 02:13:26.956821 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:13:26.959301 systemd[1]: Starting modprobe@drm.service... Dec 13 02:13:26.967449 systemd[1]: Starting motdgen.service... Dec 13 02:13:26.976826 systemd[1]: Starting prepare-helm.service... 
Dec 13 02:13:26.985873 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:13:26.995455 systemd[1]: Starting sshd-keygen.service... Dec 13 02:13:27.005006 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:13:27.013375 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:13:27.013711 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Dec 13 02:13:27.014533 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:13:27.015936 systemd[1]: Starting update-engine.service... Dec 13 02:13:27.025953 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:13:27.032612 jq[1205]: true Dec 13 02:13:27.040620 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:13:27.040927 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:13:27.041721 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:13:27.042696 systemd[1]: Finished modprobe@drm.service. Dec 13 02:13:27.055923 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:13:27.056216 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:13:27.066276 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:13:27.081233 systemd[1]: Reached target network-online.target. Dec 13 02:13:27.092153 systemd[1]: Starting kubelet.service... Dec 13 02:13:27.098103 extend-filesystems[1183]: Found loop1 Dec 13 02:13:27.098103 extend-filesystems[1183]: Found sda Dec 13 02:13:27.098103 extend-filesystems[1183]: Found sda1 Dec 13 02:13:27.098103 extend-filesystems[1183]: Found sda2 Dec 13 02:13:27.098103 extend-filesystems[1183]: Found sda3 Dec 13 02:13:27.098103 extend-filesystems[1183]: Found usr Dec 13 02:13:27.098103 extend-filesystems[1183]: Found sda4 Dec 13 02:13:27.098103 extend-filesystems[1183]: Found sda6 Dec 13 02:13:27.098103 extend-filesystems[1183]: Found sda7 Dec 13 02:13:27.098103 extend-filesystems[1183]: Found sda9 Dec 13 02:13:27.098103 extend-filesystems[1183]: Checking size of /dev/sda9 Dec 13 02:13:27.265101 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Dec 13 02:13:27.285788 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Dec 13 02:13:27.285921 update_engine[1200]: I1213 02:13:27.239334 1200 main.cc:92] Flatcar Update Engine starting Dec 13 02:13:27.285921 update_engine[1200]: I1213 02:13:27.245971 1200 update_check_scheduler.cc:74] Next update check in 8m37s Dec 13 02:13:27.286316 tar[1207]: linux-amd64/helm Dec 13 02:13:27.286553 extend-filesystems[1183]: Resized partition /dev/sda9 Dec 13 02:13:27.318001 jq[1209]: true Dec 13 02:13:27.102270 systemd[1]: Starting oem-gce.service... Dec 13 02:13:27.152398 dbus-daemon[1181]: [system] SELinux support is enabled Dec 13 02:13:27.319023 extend-filesystems[1223]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:13:27.319023 extend-filesystems[1223]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:13:27.319023 extend-filesystems[1223]: old_desc_blocks = 1, new_desc_blocks = 2 Dec 13 02:13:27.319023 extend-filesystems[1223]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Dec 13 02:13:27.120723 systemd[1]: Starting systemd-logind.service... 
Dec 13 02:13:27.159732 dbus-daemon[1181]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1023 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:13:27.382738 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:13:27.382844 extend-filesystems[1183]: Resized filesystem in /dev/sda9 Dec 13 02:13:27.389812 env[1210]: time="2024-12-13T02:13:27.374860905Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:13:27.123983 systemd[1]: Finished ensure-sysext.service. Dec 13 02:13:27.178567 dbus-daemon[1181]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:13:27.148282 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:13:27.390822 bash[1244]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:13:27.148554 systemd[1]: Finished motdgen.service. Dec 13 02:13:27.156063 systemd[1]: Started dbus.service. Dec 13 02:13:27.176551 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:13:27.392202 mkfs.ext4[1232]: mke2fs 1.46.5 (30-Dec-2021) Dec 13 02:13:27.392202 mkfs.ext4[1232]: Discarding device blocks: done Dec 13 02:13:27.392202 mkfs.ext4[1232]: Creating filesystem with 262144 4k blocks and 65536 inodes Dec 13 02:13:27.392202 mkfs.ext4[1232]: Filesystem UUID: fd641623-0884-4996-8257-4282e1e6ebd4 Dec 13 02:13:27.392202 mkfs.ext4[1232]: Superblock backups stored on blocks: Dec 13 02:13:27.392202 mkfs.ext4[1232]: 32768, 98304, 163840, 229376 Dec 13 02:13:27.392202 mkfs.ext4[1232]: Allocating group tables: done Dec 13 02:13:27.392202 mkfs.ext4[1232]: Writing inode tables: done Dec 13 02:13:27.392202 mkfs.ext4[1232]: Creating journal (8192 blocks): done Dec 13 02:13:27.392202 mkfs.ext4[1232]: Writing superblocks and filesystem accounting information: done Dec 13 02:13:27.176637 systemd[1]: Reached target system-config.target. Dec 13 02:13:27.195901 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:13:27.195938 systemd[1]: Reached target user-config.target. Dec 13 02:13:27.224514 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:13:27.396433 umount[1248]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Dec 13 02:13:27.245721 systemd[1]: Started update-engine.service. Dec 13 02:13:27.249901 systemd[1]: Started locksmithd.service. Dec 13 02:13:27.295015 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:13:27.295300 systemd[1]: Finished extend-filesystems.service. Dec 13 02:13:27.333718 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:13:27.468664 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. 
Dec 13 02:13:27.558059 coreos-metadata[1180]: Dec 13 02:13:27.556 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Dec 13 02:13:27.572574 systemd-logind[1219]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:13:27.573141 systemd-logind[1219]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:13:27.573347 systemd-logind[1219]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:13:27.575752 systemd-logind[1219]: New seat seat0. Dec 13 02:13:27.582026 coreos-metadata[1180]: Dec 13 02:13:27.581 INFO Fetch failed with 404: resource not found Dec 13 02:13:27.582302 coreos-metadata[1180]: Dec 13 02:13:27.582 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Dec 13 02:13:27.587654 coreos-metadata[1180]: Dec 13 02:13:27.587 INFO Fetch successful Dec 13 02:13:27.587922 coreos-metadata[1180]: Dec 13 02:13:27.587 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Dec 13 02:13:27.591831 systemd[1]: Started systemd-logind.service. Dec 13 02:13:27.594233 coreos-metadata[1180]: Dec 13 02:13:27.594 INFO Fetch failed with 404: resource not found Dec 13 02:13:27.594488 coreos-metadata[1180]: Dec 13 02:13:27.594 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Dec 13 02:13:27.595524 dbus-daemon[1181]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:13:27.598504 coreos-metadata[1180]: Dec 13 02:13:27.598 INFO Fetch failed with 404: resource not found Dec 13 02:13:27.598768 coreos-metadata[1180]: Dec 13 02:13:27.598 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Dec 13 02:13:27.599456 dbus-daemon[1181]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1233 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:13:27.600151 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:13:27.602828 coreos-metadata[1180]: Dec 13 02:13:27.602 INFO Fetch successful Dec 13 02:13:27.607609 unknown[1180]: wrote ssh authorized keys file for user: core Dec 13 02:13:27.614297 systemd[1]: Starting polkit.service... Dec 13 02:13:27.672555 update-ssh-keys[1259]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:13:27.673801 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:13:27.686796 env[1210]: time="2024-12-13T02:13:27.686733301Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:13:27.689453 env[1210]: time="2024-12-13T02:13:27.689408495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:13:27.692928 env[1210]: time="2024-12-13T02:13:27.692870239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:13:27.693101 env[1210]: time="2024-12-13T02:13:27.693076344Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:13:27.695068 env[1210]: time="2024-12-13T02:13:27.695026136Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:13:27.696712 env[1210]: time="2024-12-13T02:13:27.696673157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:13:27.698713 env[1210]: time="2024-12-13T02:13:27.698674997Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:13:27.698973 env[1210]: time="2024-12-13T02:13:27.698935458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:13:27.699220 env[1210]: time="2024-12-13T02:13:27.699198225Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:13:27.701035 env[1210]: time="2024-12-13T02:13:27.701002401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:13:27.717648 env[1210]: time="2024-12-13T02:13:27.717577098Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:13:27.717850 env[1210]: time="2024-12-13T02:13:27.717826894Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:13:27.718118 env[1210]: time="2024-12-13T02:13:27.718074964Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:13:27.718256 env[1210]: time="2024-12-13T02:13:27.718237324Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:13:27.723994 env[1210]: time="2024-12-13T02:13:27.723932166Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:13:27.724209 env[1210]: time="2024-12-13T02:13:27.724186459Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:13:27.724338 env[1210]: time="2024-12-13T02:13:27.724316238Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:13:27.724571 env[1210]: time="2024-12-13T02:13:27.724467572Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:13:27.724704 env[1210]: time="2024-12-13T02:13:27.724681457Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:13:27.724844 env[1210]: time="2024-12-13T02:13:27.724825681Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:13:27.724972 env[1210]: time="2024-12-13T02:13:27.724952768Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:13:27.725105 env[1210]: time="2024-12-13T02:13:27.725081516Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 02:13:27.725221 env[1210]: time="2024-12-13T02:13:27.725203726Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:13:27.725340 env[1210]: time="2024-12-13T02:13:27.725320217Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:13:27.725455 env[1210]: time="2024-12-13T02:13:27.725438212Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:13:27.725579 env[1210]: time="2024-12-13T02:13:27.725558053Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:13:27.725897 env[1210]: time="2024-12-13T02:13:27.725869595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:13:27.726196 env[1210]: time="2024-12-13T02:13:27.726171081Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:13:27.726838 env[1210]: time="2024-12-13T02:13:27.726791651Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:13:27.726999 env[1210]: time="2024-12-13T02:13:27.726974015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.727100 env[1210]: time="2024-12-13T02:13:27.727082969Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:13:27.727264 env[1210]: time="2024-12-13T02:13:27.727245561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.727823 env[1210]: time="2024-12-13T02:13:27.727780608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.731731 env[1210]: time="2024-12-13T02:13:27.731685545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.734437 env[1210]: time="2024-12-13T02:13:27.734403901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.735720 env[1210]: time="2024-12-13T02:13:27.735688078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.735886 env[1210]: time="2024-12-13T02:13:27.735860637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.736015 env[1210]: time="2024-12-13T02:13:27.735996311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.736153 env[1210]: time="2024-12-13T02:13:27.736131921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.736289 env[1210]: time="2024-12-13T02:13:27.736267719Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:13:27.736666 env[1210]: time="2024-12-13T02:13:27.736593372Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.736831 env[1210]: time="2024-12-13T02:13:27.736810249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Dec 13 02:13:27.736965 env[1210]: time="2024-12-13T02:13:27.736944063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.737087 env[1210]: time="2024-12-13T02:13:27.737066258Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:13:27.738471 env[1210]: time="2024-12-13T02:13:27.738424009Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:13:27.738638 env[1210]: time="2024-12-13T02:13:27.738595308Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:13:27.738762 env[1210]: time="2024-12-13T02:13:27.738738082Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:13:27.740053 env[1210]: time="2024-12-13T02:13:27.739982950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:13:27.740693 env[1210]: time="2024-12-13T02:13:27.740570717Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:13:27.744519 env[1210]: time="2024-12-13T02:13:27.744477538Z" level=info msg="Connect containerd service" Dec 13 02:13:27.761728 env[1210]: time="2024-12-13T02:13:27.761660312Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 
13 02:13:27.763093 env[1210]: time="2024-12-13T02:13:27.763045271Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:13:27.763356 env[1210]: time="2024-12-13T02:13:27.763320434Z" level=info msg="Start subscribing containerd event" Dec 13 02:13:27.766129 env[1210]: time="2024-12-13T02:13:27.766084755Z" level=info msg="Start recovering state" Dec 13 02:13:27.766392 env[1210]: time="2024-12-13T02:13:27.766372348Z" level=info msg="Start event monitor" Dec 13 02:13:27.768744 env[1210]: time="2024-12-13T02:13:27.768700590Z" level=info msg="Start snapshots syncer" Dec 13 02:13:27.770716 env[1210]: time="2024-12-13T02:13:27.770685813Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:13:27.770861 env[1210]: time="2024-12-13T02:13:27.770839337Z" level=info msg="Start streaming server" Dec 13 02:13:27.771640 env[1210]: time="2024-12-13T02:13:27.771597453Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:13:27.775747 env[1210]: time="2024-12-13T02:13:27.775715096Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:13:27.787360 systemd[1]: Started containerd.service. Dec 13 02:13:27.790793 env[1210]: time="2024-12-13T02:13:27.790733827Z" level=info msg="containerd successfully booted in 0.416920s" Dec 13 02:13:27.802202 polkitd[1258]: Started polkitd version 121 Dec 13 02:13:27.845448 polkitd[1258]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:13:27.847983 polkitd[1258]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:13:27.857970 polkitd[1258]: Finished loading, compiling and executing 2 rules Dec 13 02:13:27.858863 dbus-daemon[1181]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:13:27.859082 systemd[1]: Started polkit.service. Dec 13 02:13:27.860093 polkitd[1258]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:13:27.898521 systemd-hostnamed[1233]: Hostname set to (transient) Dec 13 02:13:27.901609 systemd-resolved[1149]: System hostname changed to 'ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal'. Dec 13 02:13:29.081327 tar[1207]: linux-amd64/LICENSE Dec 13 02:13:29.082111 tar[1207]: linux-amd64/README.md Dec 13 02:13:29.099051 systemd[1]: Finished prepare-helm.service. Dec 13 02:13:29.355002 systemd[1]: Started kubelet.service. Dec 13 02:13:29.944955 locksmithd[1237]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:13:30.594265 sshd_keygen[1199]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:13:30.642544 systemd[1]: Finished sshd-keygen.service. Dec 13 02:13:30.652799 systemd[1]: Starting issuegen.service... Dec 13 02:13:30.664745 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:13:30.665004 systemd[1]: Finished issuegen.service. Dec 13 02:13:30.674240 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:13:30.686114 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:13:30.697236 systemd[1]: Started getty@tty1.service. Dec 13 02:13:30.706553 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:13:30.715260 systemd[1]: Reached target getty.target. 
Dec 13 02:13:30.750681 kubelet[1274]: E1213 02:13:30.750595 1274 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:13:30.752928 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:13:30.753178 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:13:30.753658 systemd[1]: kubelet.service: Consumed 1.495s CPU time. Dec 13 02:13:32.875616 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Dec 13 02:13:35.033655 kernel: loop2: detected capacity change from 0 to 2097152 Dec 13 02:13:35.057305 systemd-nspawn[1296]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Dec 13 02:13:35.057305 systemd-nspawn[1296]: Press ^] three times within 1s to kill container. Dec 13 02:13:35.071668 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:13:35.152779 systemd[1]: Started oem-gce.service. Dec 13 02:13:35.161271 systemd[1]: Reached target multi-user.target. Dec 13 02:13:35.171923 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:13:35.185503 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:13:35.185826 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:13:35.196001 systemd[1]: Startup finished in 1.023s (kernel) + 8.317s (initrd) + 16.066s (userspace) = 25.407s. Dec 13 02:13:35.207572 systemd-nspawn[1296]: + '[' -e /etc/default/instance_configs.cfg.template ']' Dec 13 02:13:35.207789 systemd-nspawn[1296]: + echo -e '[InstanceSetup]\nset_host_keys = false' Dec 13 02:13:35.207789 systemd-nspawn[1296]: + /usr/bin/google_instance_setup Dec 13 02:13:35.411250 systemd[1]: Created slice system-sshd.slice. Dec 13 02:13:35.413421 systemd[1]: Started sshd@0-10.128.0.53:22-139.178.68.195:39686.service. Dec 13 02:13:35.742526 sshd[1304]: Accepted publickey for core from 139.178.68.195 port 39686 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:13:35.746833 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:13:35.765826 systemd[1]: Created slice user-500.slice. Dec 13 02:13:35.767860 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:13:35.772612 systemd-logind[1219]: New session 1 of user core. Dec 13 02:13:35.786948 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:13:35.789676 systemd[1]: Starting user@500.service... Dec 13 02:13:35.808597 (systemd)[1309]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:13:35.865336 instance-setup[1302]: INFO Running google_set_multiqueue. Dec 13 02:13:35.883368 instance-setup[1302]: INFO Set channels for eth0 to 2. Dec 13 02:13:35.889115 instance-setup[1302]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Dec 13 02:13:35.892020 instance-setup[1302]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Dec 13 02:13:35.892206 instance-setup[1302]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Dec 13 02:13:35.894962 instance-setup[1302]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Dec 13 02:13:35.895136 instance-setup[1302]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
Dec 13 02:13:35.897602 instance-setup[1302]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Dec 13 02:13:35.897801 instance-setup[1302]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Dec 13 02:13:35.899914 instance-setup[1302]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Dec 13 02:13:35.919150 instance-setup[1302]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Dec 13 02:13:35.919314 instance-setup[1302]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Dec 13 02:13:35.968176 systemd[1309]: Queued start job for default target default.target. Dec 13 02:13:35.969524 systemd[1309]: Reached target paths.target. Dec 13 02:13:35.969798 systemd[1309]: Reached target sockets.target. Dec 13 02:13:35.969962 systemd[1309]: Reached target timers.target. Dec 13 02:13:35.970094 systemd[1309]: Reached target basic.target. Dec 13 02:13:35.970338 systemd[1309]: Reached target default.target. Dec 13 02:13:35.970417 systemd[1]: Started user@500.service. Dec 13 02:13:35.970660 systemd[1309]: Startup finished in 149ms. Dec 13 02:13:35.971904 systemd[1]: Started session-1.scope. Dec 13 02:13:35.984742 systemd-nspawn[1296]: + /usr/bin/google_metadata_script_runner --script-type startup Dec 13 02:13:36.194519 systemd[1]: Started sshd@1-10.128.0.53:22-139.178.68.195:60392.service. Dec 13 02:13:36.387574 startup-script[1344]: INFO Starting startup scripts. Dec 13 02:13:36.402135 startup-script[1344]: INFO No startup scripts found in metadata. Dec 13 02:13:36.402316 startup-script[1344]: INFO Finished running startup scripts. Dec 13 02:13:36.441994 systemd-nspawn[1296]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Dec 13 02:13:36.441994 systemd-nspawn[1296]: + daemon_pids=() Dec 13 02:13:36.442675 systemd-nspawn[1296]: + for d in accounts clock_skew network Dec 13 02:13:36.442675 systemd-nspawn[1296]: + daemon_pids+=($!) Dec 13 02:13:36.442675 systemd-nspawn[1296]: + for d in accounts clock_skew network Dec 13 02:13:36.443052 systemd-nspawn[1296]: + daemon_pids+=($!) Dec 13 02:13:36.443163 systemd-nspawn[1296]: + for d in accounts clock_skew network Dec 13 02:13:36.443429 systemd-nspawn[1296]: + /usr/bin/google_accounts_daemon Dec 13 02:13:36.443650 systemd-nspawn[1296]: + daemon_pids+=($!) Dec 13 02:13:36.443826 systemd-nspawn[1296]: + NOTIFY_SOCKET=/run/systemd/notify Dec 13 02:13:36.443900 systemd-nspawn[1296]: + /usr/bin/systemd-notify --ready Dec 13 02:13:36.444361 systemd-nspawn[1296]: + /usr/bin/google_network_daemon Dec 13 02:13:36.452878 systemd-nspawn[1296]: + /usr/bin/google_clock_skew_daemon Dec 13 02:13:36.501226 systemd-nspawn[1296]: + wait -n 36 37 38 Dec 13 02:13:36.509404 sshd[1347]: Accepted publickey for core from 139.178.68.195 port 60392 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:13:36.511081 sshd[1347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:13:36.520047 systemd[1]: Started session-2.scope. Dec 13 02:13:36.522655 systemd-logind[1219]: New session 2 of user core. Dec 13 02:13:36.727865 sshd[1347]: pam_unix(sshd:session): session closed for user core Dec 13 02:13:36.732129 systemd[1]: sshd@1-10.128.0.53:22-139.178.68.195:60392.service: Deactivated successfully. Dec 13 02:13:36.733337 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:13:36.735989 systemd-logind[1219]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:13:36.738087 systemd-logind[1219]: Removed session 2. 
Dec 13 02:13:36.772411 systemd[1]: Started sshd@2-10.128.0.53:22-139.178.68.195:60396.service. Dec 13 02:13:37.086497 sshd[1359]: Accepted publickey for core from 139.178.68.195 port 60396 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:13:37.088102 sshd[1359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:13:37.096683 systemd[1]: Started session-3.scope. Dec 13 02:13:37.097774 systemd-logind[1219]: New session 3 of user core. Dec 13 02:13:37.208933 groupadd[1368]: group added to /etc/group: name=google-sudoers, GID=1000 Dec 13 02:13:37.212916 groupadd[1368]: group added to /etc/gshadow: name=google-sudoers Dec 13 02:13:37.228397 google-networking[1353]: INFO Starting Google Networking daemon. Dec 13 02:13:37.232160 groupadd[1368]: new group: name=google-sudoers, GID=1000 Dec 13 02:13:37.273307 google-accounts[1351]: INFO Starting Google Accounts daemon. Dec 13 02:13:37.293960 sshd[1359]: pam_unix(sshd:session): session closed for user core Dec 13 02:13:37.295884 google-clock-skew[1352]: INFO Starting Google Clock Skew daemon. Dec 13 02:13:37.300139 systemd[1]: sshd@2-10.128.0.53:22-139.178.68.195:60396.service: Deactivated successfully. Dec 13 02:13:37.301270 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:13:37.303699 systemd-logind[1219]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:13:37.305683 systemd-logind[1219]: Removed session 3. Dec 13 02:13:37.310329 google-clock-skew[1352]: INFO Clock drift token has changed: 0. Dec 13 02:13:37.315044 systemd-nspawn[1296]: hwclock: Cannot access the Hardware Clock via any known method. Dec 13 02:13:37.315334 systemd-nspawn[1296]: hwclock: Use the --verbose option to see the details of our search for an access method. Dec 13 02:13:37.316165 google-clock-skew[1352]: WARNING Failed to sync system time with hardware clock. Dec 13 02:13:37.322930 google-accounts[1351]: WARNING OS Login not installed. Dec 13 02:13:37.323948 google-accounts[1351]: INFO Creating a new user account for 0. Dec 13 02:13:37.328815 systemd-nspawn[1296]: useradd: invalid user name '0': use --badname to ignore Dec 13 02:13:37.329463 google-accounts[1351]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Dec 13 02:13:37.339995 systemd[1]: Started sshd@3-10.128.0.53:22-139.178.68.195:60412.service. Dec 13 02:13:37.630758 sshd[1384]: Accepted publickey for core from 139.178.68.195 port 60412 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:13:37.632716 sshd[1384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:13:37.638926 systemd-logind[1219]: New session 4 of user core. Dec 13 02:13:37.639706 systemd[1]: Started session-4.scope. Dec 13 02:13:37.844338 sshd[1384]: pam_unix(sshd:session): session closed for user core Dec 13 02:13:37.848536 systemd[1]: sshd@3-10.128.0.53:22-139.178.68.195:60412.service: Deactivated successfully. Dec 13 02:13:37.849596 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:13:37.850481 systemd-logind[1219]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:13:37.851836 systemd-logind[1219]: Removed session 4. Dec 13 02:13:37.890985 systemd[1]: Started sshd@4-10.128.0.53:22-139.178.68.195:60424.service. 
Dec 13 02:13:38.184543 sshd[1390]: Accepted publickey for core from 139.178.68.195 port 60424 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:13:38.186275 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:13:38.192733 systemd-logind[1219]: New session 5 of user core. Dec 13 02:13:38.193416 systemd[1]: Started session-5.scope. Dec 13 02:13:38.378422 sudo[1393]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:13:38.378880 sudo[1393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:13:38.412005 systemd[1]: Starting docker.service... Dec 13 02:13:38.462974 env[1403]: time="2024-12-13T02:13:38.462278205Z" level=info msg="Starting up" Dec 13 02:13:38.464040 env[1403]: time="2024-12-13T02:13:38.463987187Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:13:38.464040 env[1403]: time="2024-12-13T02:13:38.464016083Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:13:38.464233 env[1403]: time="2024-12-13T02:13:38.464042601Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:13:38.464233 env[1403]: time="2024-12-13T02:13:38.464058141Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:13:38.466756 env[1403]: time="2024-12-13T02:13:38.466723602Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:13:38.466879 env[1403]: time="2024-12-13T02:13:38.466862686Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:13:38.466974 env[1403]: time="2024-12-13T02:13:38.466956962Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:13:38.467036 env[1403]: time="2024-12-13T02:13:38.467023318Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:13:38.476381 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2186171518-merged.mount: Deactivated successfully. Dec 13 02:13:38.503450 env[1403]: time="2024-12-13T02:13:38.503390247Z" level=info msg="Loading containers: start." Dec 13 02:13:38.672662 kernel: Initializing XFRM netlink socket Dec 13 02:13:38.716301 env[1403]: time="2024-12-13T02:13:38.716157673Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:13:38.798667 systemd-networkd[1023]: docker0: Link UP Dec 13 02:13:38.817852 env[1403]: time="2024-12-13T02:13:38.817801385Z" level=info msg="Loading containers: done." Dec 13 02:13:38.833842 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4252652109-merged.mount: Deactivated successfully. Dec 13 02:13:38.838898 env[1403]: time="2024-12-13T02:13:38.838843188Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:13:38.839179 env[1403]: time="2024-12-13T02:13:38.839140670Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:13:38.839331 env[1403]: time="2024-12-13T02:13:38.839295067Z" level=info msg="Daemon has completed initialization" Dec 13 02:13:38.862817 systemd[1]: Started docker.service. 
Dec 13 02:13:38.871555 env[1403]: time="2024-12-13T02:13:38.871472772Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:13:40.042098 env[1210]: time="2024-12-13T02:13:40.042020980Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 02:13:40.503552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293693095.mount: Deactivated successfully. Dec 13 02:13:41.004593 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:13:41.004951 systemd[1]: Stopped kubelet.service. Dec 13 02:13:41.005023 systemd[1]: kubelet.service: Consumed 1.495s CPU time. Dec 13 02:13:41.007667 systemd[1]: Starting kubelet.service... Dec 13 02:13:41.395736 systemd[1]: Started kubelet.service. Dec 13 02:13:41.509202 kubelet[1535]: E1213 02:13:41.509144 1535 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:13:41.513794 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:13:41.514013 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:13:42.645805 env[1210]: time="2024-12-13T02:13:42.645719365Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:42.648452 env[1210]: time="2024-12-13T02:13:42.648398837Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:42.651580 env[1210]: time="2024-12-13T02:13:42.651526833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:42.654754 env[1210]: time="2024-12-13T02:13:42.654708283Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:42.655433 env[1210]: time="2024-12-13T02:13:42.655384755Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 02:13:42.670023 env[1210]: time="2024-12-13T02:13:42.669970918Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 02:13:44.457213 env[1210]: time="2024-12-13T02:13:44.457133334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:44.460193 env[1210]: time="2024-12-13T02:13:44.460125043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:44.462724 env[1210]: time="2024-12-13T02:13:44.462667692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 02:13:44.465136 env[1210]: time="2024-12-13T02:13:44.465086833Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:44.466115 env[1210]: time="2024-12-13T02:13:44.466058287Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 02:13:44.481304 env[1210]: time="2024-12-13T02:13:44.481241455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 02:13:45.709509 env[1210]: time="2024-12-13T02:13:45.709434399Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:45.714768 env[1210]: time="2024-12-13T02:13:45.714711804Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:45.718693 env[1210]: time="2024-12-13T02:13:45.718641816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:45.723188 env[1210]: time="2024-12-13T02:13:45.723132847Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:45.724185 env[1210]: time="2024-12-13T02:13:45.724127711Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 02:13:45.739517 env[1210]: time="2024-12-13T02:13:45.739467197Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 02:13:46.851952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740355444.mount: Deactivated successfully. 
Dec 13 02:13:47.551788 env[1210]: time="2024-12-13T02:13:47.551707426Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:47.556922 env[1210]: time="2024-12-13T02:13:47.556862515Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:47.560140 env[1210]: time="2024-12-13T02:13:47.560063998Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:47.563550 env[1210]: time="2024-12-13T02:13:47.563496084Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:47.564199 env[1210]: time="2024-12-13T02:13:47.564140488Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 02:13:47.580437 env[1210]: time="2024-12-13T02:13:47.580387921Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:13:47.969053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751456268.mount: Deactivated successfully. Dec 13 02:13:49.159193 env[1210]: time="2024-12-13T02:13:49.159117369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:49.161942 env[1210]: time="2024-12-13T02:13:49.161887844Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:49.164480 env[1210]: time="2024-12-13T02:13:49.164431281Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:49.166609 env[1210]: time="2024-12-13T02:13:49.166566292Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:49.167615 env[1210]: time="2024-12-13T02:13:49.167564892Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:13:49.180807 env[1210]: time="2024-12-13T02:13:49.180755363Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:13:49.569037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971874753.mount: Deactivated successfully. 
Dec 13 02:13:49.575490 env[1210]: time="2024-12-13T02:13:49.575419512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:49.577950 env[1210]: time="2024-12-13T02:13:49.577896050Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:49.580114 env[1210]: time="2024-12-13T02:13:49.580068615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:49.582482 env[1210]: time="2024-12-13T02:13:49.582436661Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:49.583187 env[1210]: time="2024-12-13T02:13:49.583133762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:13:49.597274 env[1210]: time="2024-12-13T02:13:49.597221098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 02:13:50.009013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2332842181.mount: Deactivated successfully. Dec 13 02:13:51.765366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:13:51.765707 systemd[1]: Stopped kubelet.service. Dec 13 02:13:51.768100 systemd[1]: Starting kubelet.service... Dec 13 02:13:52.010294 systemd[1]: Started kubelet.service. Dec 13 02:13:52.111008 kubelet[1577]: E1213 02:13:52.110870 1577 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:13:52.114095 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:13:52.114310 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 02:13:52.761825 env[1210]: time="2024-12-13T02:13:52.761759896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:52.766305 env[1210]: time="2024-12-13T02:13:52.766230672Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:52.769485 env[1210]: time="2024-12-13T02:13:52.769420361Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:52.771589 env[1210]: time="2024-12-13T02:13:52.771535848Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:52.773042 env[1210]: time="2024-12-13T02:13:52.772993599Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 02:13:56.356467 systemd[1]: Stopped kubelet.service. Dec 13 02:13:56.360135 systemd[1]: Starting kubelet.service... Dec 13 02:13:56.396848 systemd[1]: Reloading. Dec 13 02:13:56.546948 /usr/lib/systemd/system-generators/torcx-generator[1675]: time="2024-12-13T02:13:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:13:56.546993 /usr/lib/systemd/system-generators/torcx-generator[1675]: time="2024-12-13T02:13:56Z" level=info msg="torcx already run" Dec 13 02:13:56.681433 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:13:56.681465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:13:56.705470 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:13:56.873051 systemd[1]: Stopping kubelet.service... Dec 13 02:13:56.873830 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:13:56.874129 systemd[1]: Stopped kubelet.service. Dec 13 02:13:56.877071 systemd[1]: Starting kubelet.service... Dec 13 02:13:57.197167 systemd[1]: Started kubelet.service. Dec 13 02:13:57.278052 kubelet[1723]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:13:57.278052 kubelet[1723]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:13:57.278052 kubelet[1723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:13:57.278783 kubelet[1723]: I1213 02:13:57.278177 1723 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:13:57.931728 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 02:13:58.100549 kubelet[1723]: I1213 02:13:58.100481 1723 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:13:58.100549 kubelet[1723]: I1213 02:13:58.100523 1723 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:13:58.100972 kubelet[1723]: I1213 02:13:58.100927 1723 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:13:58.136597 kubelet[1723]: I1213 02:13:58.136008 1723 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:13:58.137425 kubelet[1723]: E1213 02:13:58.137398 1723 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.160390 kubelet[1723]: I1213 02:13:58.160341 1723 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:13:58.162131 kubelet[1723]: I1213 02:13:58.162053 1723 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:13:58.162418 kubelet[1723]: I1213 02:13:58.162122 1723 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:13:58.163449 kubelet[1723]: I1213 02:13:58.163400 1723 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 
02:13:58.163449 kubelet[1723]: I1213 02:13:58.163441 1723 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:13:58.163726 kubelet[1723]: I1213 02:13:58.163691 1723 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:13:58.165498 kubelet[1723]: I1213 02:13:58.165461 1723 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:13:58.165498 kubelet[1723]: I1213 02:13:58.165496 1723 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:13:58.165706 kubelet[1723]: I1213 02:13:58.165534 1723 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:13:58.165706 kubelet[1723]: I1213 02:13:58.165562 1723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:13:58.177313 kubelet[1723]: W1213 02:13:58.177208 1723 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.177313 kubelet[1723]: E1213 02:13:58.177293 1723 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.178603 kubelet[1723]: I1213 02:13:58.178112 1723 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:13:58.185786 kubelet[1723]: I1213 02:13:58.184348 1723 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:13:58.185786 kubelet[1723]: W1213 02:13:58.184472 1723 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:13:58.187061 kubelet[1723]: I1213 02:13:58.186193 1723 server.go:1264] "Started kubelet" Dec 13 02:13:58.187061 kubelet[1723]: W1213 02:13:58.186705 1723 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.187061 kubelet[1723]: E1213 02:13:58.186927 1723 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.190215 kubelet[1723]: I1213 02:13:58.190159 1723 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:13:58.191656 kubelet[1723]: I1213 02:13:58.191604 1723 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:13:58.199687 kubelet[1723]: I1213 02:13:58.196703 1723 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:13:58.199687 kubelet[1723]: I1213 02:13:58.196999 1723 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:13:58.203947 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 02:13:58.204172 kubelet[1723]: E1213 02:13:58.203831 1723 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal.18109ac780b4f60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,UID:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 02:13:58.186153484 +0000 UTC m=+0.979757387,LastTimestamp:2024-12-13 02:13:58.186153484 +0000 UTC m=+0.979757387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,}" Dec 13 02:13:58.204376 kubelet[1723]: I1213 02:13:58.204173 1723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:13:58.209207 kubelet[1723]: I1213 02:13:58.206917 1723 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:13:58.209207 kubelet[1723]: I1213 02:13:58.207084 1723 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:13:58.209207 kubelet[1723]: I1213 02:13:58.207166 1723 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:13:58.209207 kubelet[1723]: W1213 02:13:58.207698 1723 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.209207 kubelet[1723]: E1213 02:13:58.207770 1723 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.209207 kubelet[1723]: E1213 02:13:58.208431 1723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.53:6443: connect: connection refused" interval="200ms" Dec 13 02:13:58.211575 kubelet[1723]: E1213 02:13:58.211538 1723 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:13:58.211786 kubelet[1723]: I1213 02:13:58.211762 1723 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:13:58.211786 kubelet[1723]: I1213 02:13:58.211788 1723 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:13:58.211936 kubelet[1723]: I1213 02:13:58.211896 1723 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:13:58.234084 kubelet[1723]: I1213 02:13:58.234054 1723 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:13:58.234305 kubelet[1723]: I1213 02:13:58.234287 1723 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:13:58.234415 kubelet[1723]: I1213 02:13:58.234402 1723 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:13:58.238832 kubelet[1723]: I1213 02:13:58.238797 1723 policy_none.go:49] "None policy: Start" Dec 13 02:13:58.243025 kubelet[1723]: I1213 02:13:58.242999 1723 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:13:58.243269 kubelet[1723]: I1213 02:13:58.243252 1723 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:13:58.245732 kubelet[1723]: I1213 02:13:58.245690 1723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:13:58.249940 kubelet[1723]: I1213 02:13:58.249908 1723 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:13:58.249940 kubelet[1723]: I1213 02:13:58.249945 1723 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:13:58.250151 kubelet[1723]: I1213 02:13:58.249970 1723 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:13:58.250151 kubelet[1723]: E1213 02:13:58.250035 1723 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:13:58.252195 kubelet[1723]: W1213 02:13:58.252119 1723 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.252419 kubelet[1723]: E1213 02:13:58.252399 1723 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:58.256095 kubelet[1723]: E1213 02:13:58.255949 1723 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal.18109ac780b4f60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,UID:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 02:13:58.186153484 +0000 UTC m=+0.979757387,LastTimestamp:2024-12-13 02:13:58.186153484 +0000 UTC m=+0.979757387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,}" Dec 13 02:13:58.260461 systemd[1]: Created slice kubepods.slice. Dec 13 02:13:58.267722 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:13:58.272197 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 02:13:58.273827 kubelet[1723]: W1213 02:13:58.273753 1723 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device Dec 13 02:13:58.277566 kubelet[1723]: I1213 02:13:58.277517 1723 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:13:58.277811 kubelet[1723]: I1213 02:13:58.277752 1723 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:13:58.277944 kubelet[1723]: I1213 02:13:58.277926 1723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:13:58.283137 kubelet[1723]: E1213 02:13:58.283108 1723 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" not found" Dec 13 02:13:58.313774 kubelet[1723]: I1213 02:13:58.313739 1723 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.314403 kubelet[1723]: E1213 02:13:58.314354 1723 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.53:6443/api/v1/nodes\": dial tcp 10.128.0.53:6443: connect: connection refused" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.350788 kubelet[1723]: I1213 02:13:58.350701 1723 topology_manager.go:215] "Topology Admit Handler" podUID="7c7f9418ac82083d62aff22e0f7130ea" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.358693 kubelet[1723]: I1213 02:13:58.358645 1723 topology_manager.go:215] "Topology Admit Handler" podUID="1f9de9593e7b0dc8b5f912aa4eaabda8" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.365666 kubelet[1723]: I1213 02:13:58.365604 1723 topology_manager.go:215] "Topology Admit Handler" podUID="74dd9f637943e1a723c7e5e572bb730e" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.371813 systemd[1]: Created slice kubepods-burstable-pod7c7f9418ac82083d62aff22e0f7130ea.slice. Dec 13 02:13:58.386850 systemd[1]: Created slice kubepods-burstable-pod1f9de9593e7b0dc8b5f912aa4eaabda8.slice. Dec 13 02:13:58.397705 systemd[1]: Created slice kubepods-burstable-pod74dd9f637943e1a723c7e5e572bb730e.slice. 
Dec 13 02:13:58.410081 kubelet[1723]: E1213 02:13:58.409995 1723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.53:6443: connect: connection refused" interval="400ms" Dec 13 02:13:58.509091 kubelet[1723]: I1213 02:13:58.508574 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.509091 kubelet[1723]: I1213 02:13:58.508661 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.509091 kubelet[1723]: I1213 02:13:58.508696 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.509091 kubelet[1723]: I1213 02:13:58.508727 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.509448 kubelet[1723]: I1213 02:13:58.508755 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74dd9f637943e1a723c7e5e572bb730e-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"74dd9f637943e1a723c7e5e572bb730e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.509448 kubelet[1723]: I1213 02:13:58.508783 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74dd9f637943e1a723c7e5e572bb730e-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"74dd9f637943e1a723c7e5e572bb730e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.509448 kubelet[1723]: I1213 02:13:58.508810 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.509448 kubelet[1723]: I1213 02:13:58.508838 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f9de9593e7b0dc8b5f912aa4eaabda8-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"1f9de9593e7b0dc8b5f912aa4eaabda8\") " pod="kube-system/kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.509710 kubelet[1723]: I1213 02:13:58.508869 1723 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74dd9f637943e1a723c7e5e572bb730e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"74dd9f637943e1a723c7e5e572bb730e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.519844 kubelet[1723]: I1213 02:13:58.519802 1723 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.520255 kubelet[1723]: E1213 02:13:58.520204 1723 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.53:6443/api/v1/nodes\": dial tcp 10.128.0.53:6443: connect: connection refused" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.682396 env[1210]: time="2024-12-13T02:13:58.682332218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,Uid:7c7f9418ac82083d62aff22e0f7130ea,Namespace:kube-system,Attempt:0,}" Dec 13 02:13:58.695230 env[1210]: time="2024-12-13T02:13:58.695171439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,Uid:1f9de9593e7b0dc8b5f912aa4eaabda8,Namespace:kube-system,Attempt:0,}" Dec 13 02:13:58.704371 env[1210]: time="2024-12-13T02:13:58.704315136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,Uid:74dd9f637943e1a723c7e5e572bb730e,Namespace:kube-system,Attempt:0,}" Dec 13 02:13:58.811261 kubelet[1723]: E1213 02:13:58.811177 1723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.53:6443: connect: connection refused" interval="800ms" Dec 13 02:13:58.925844 kubelet[1723]: I1213 02:13:58.925789 1723 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:58.926258 kubelet[1723]: E1213 02:13:58.926208 1723 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.53:6443/api/v1/nodes\": dial tcp 10.128.0.53:6443: connect: connection refused" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:59.058801 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960529175.mount: Deactivated successfully. Dec 13 02:13:59.070380 env[1210]: time="2024-12-13T02:13:59.069947884Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.071884 env[1210]: time="2024-12-13T02:13:59.071820701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.075288 env[1210]: time="2024-12-13T02:13:59.075241405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.077023 env[1210]: time="2024-12-13T02:13:59.076965930Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.078329 env[1210]: time="2024-12-13T02:13:59.078249338Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.081092 env[1210]: time="2024-12-13T02:13:59.081053449Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.082318 env[1210]: time="2024-12-13T02:13:59.082258924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.083696 env[1210]: time="2024-12-13T02:13:59.083655958Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.086150 env[1210]: time="2024-12-13T02:13:59.086100790Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.088701 env[1210]: time="2024-12-13T02:13:59.088659151Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.090173 env[1210]: time="2024-12-13T02:13:59.090113685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.091853 env[1210]: time="2024-12-13T02:13:59.091815280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:13:59.141303 env[1210]: time="2024-12-13T02:13:59.141032166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:13:59.141303 env[1210]: time="2024-12-13T02:13:59.141086939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:13:59.141303 env[1210]: time="2024-12-13T02:13:59.141106609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:13:59.141617 env[1210]: time="2024-12-13T02:13:59.141373415Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62d03f6a06ce01d49c60a680369dd4914eaa66b3130bd732e2a0bd3b6b4afb24 pid=1763 runtime=io.containerd.runc.v2 Dec 13 02:13:59.144129 env[1210]: time="2024-12-13T02:13:59.144045121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:13:59.144392 env[1210]: time="2024-12-13T02:13:59.144351890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:13:59.144690 env[1210]: time="2024-12-13T02:13:59.144619030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:13:59.145218 env[1210]: time="2024-12-13T02:13:59.145139521Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c1316fdf844b8b8a30a9f3797c1b4322b11d93b71f9d52c90deb31a8f57273b pid=1770 runtime=io.containerd.runc.v2 Dec 13 02:13:59.166297 systemd[1]: Started cri-containerd-62d03f6a06ce01d49c60a680369dd4914eaa66b3130bd732e2a0bd3b6b4afb24.scope. Dec 13 02:13:59.180902 env[1210]: time="2024-12-13T02:13:59.180772794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:13:59.181240 env[1210]: time="2024-12-13T02:13:59.181160716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:13:59.181476 env[1210]: time="2024-12-13T02:13:59.181417564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:13:59.182145 env[1210]: time="2024-12-13T02:13:59.182079612Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5d103f09cbeda2cf0721a6e08912e5ebbdf72a79af88c6f43bccd13efbf9f68 pid=1805 runtime=io.containerd.runc.v2 Dec 13 02:13:59.193447 kubelet[1723]: W1213 02:13:59.193294 1723 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:59.193447 kubelet[1723]: E1213 02:13:59.193399 1723 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:59.221146 systemd[1]: Started cri-containerd-a5d103f09cbeda2cf0721a6e08912e5ebbdf72a79af88c6f43bccd13efbf9f68.scope. Dec 13 02:13:59.242514 systemd[1]: Started cri-containerd-1c1316fdf844b8b8a30a9f3797c1b4322b11d93b71f9d52c90deb31a8f57273b.scope. Dec 13 02:13:59.262209 kubelet[1723]: W1213 02:13:59.262046 1723 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:59.262209 kubelet[1723]: E1213 02:13:59.262165 1723 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:59.282454 kubelet[1723]: W1213 02:13:59.281770 1723 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:59.282454 kubelet[1723]: E1213 02:13:59.281868 1723 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:59.293903 env[1210]: time="2024-12-13T02:13:59.293840767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,Uid:7c7f9418ac82083d62aff22e0f7130ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"62d03f6a06ce01d49c60a680369dd4914eaa66b3130bd732e2a0bd3b6b4afb24\"" Dec 13 02:13:59.300934 kubelet[1723]: E1213 02:13:59.300879 1723 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flat" Dec 13 02:13:59.320749 env[1210]: time="2024-12-13T02:13:59.315461768Z" level=info msg="CreateContainer within sandbox \"62d03f6a06ce01d49c60a680369dd4914eaa66b3130bd732e2a0bd3b6b4afb24\" for 
container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:13:59.350912 env[1210]: time="2024-12-13T02:13:59.350851026Z" level=info msg="CreateContainer within sandbox \"62d03f6a06ce01d49c60a680369dd4914eaa66b3130bd732e2a0bd3b6b4afb24\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4ec7bbc6a16b8778a827b4dd14b283e3b354e8bf287cb75020908df3b9822e6f\"" Dec 13 02:13:59.352125 env[1210]: time="2024-12-13T02:13:59.352078771Z" level=info msg="StartContainer for \"4ec7bbc6a16b8778a827b4dd14b283e3b354e8bf287cb75020908df3b9822e6f\"" Dec 13 02:13:59.364252 env[1210]: time="2024-12-13T02:13:59.364188753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,Uid:74dd9f637943e1a723c7e5e572bb730e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c1316fdf844b8b8a30a9f3797c1b4322b11d93b71f9d52c90deb31a8f57273b\"" Dec 13 02:13:59.366669 kubelet[1723]: E1213 02:13:59.366608 1723 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-21291" Dec 13 02:13:59.373744 env[1210]: time="2024-12-13T02:13:59.373679626Z" level=info msg="CreateContainer within sandbox \"1c1316fdf844b8b8a30a9f3797c1b4322b11d93b71f9d52c90deb31a8f57273b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:13:59.379425 env[1210]: time="2024-12-13T02:13:59.379369053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,Uid:1f9de9593e7b0dc8b5f912aa4eaabda8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5d103f09cbeda2cf0721a6e08912e5ebbdf72a79af88c6f43bccd13efbf9f68\"" Dec 13 02:13:59.381241 kubelet[1723]: E1213 02:13:59.381196 1723 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-21291" Dec 13 02:13:59.383492 env[1210]: time="2024-12-13T02:13:59.383444479Z" level=info msg="CreateContainer within sandbox \"a5d103f09cbeda2cf0721a6e08912e5ebbdf72a79af88c6f43bccd13efbf9f68\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:13:59.404813 systemd[1]: Started cri-containerd-4ec7bbc6a16b8778a827b4dd14b283e3b354e8bf287cb75020908df3b9822e6f.scope. 
Dec 13 02:13:59.413245 env[1210]: time="2024-12-13T02:13:59.413190359Z" level=info msg="CreateContainer within sandbox \"1c1316fdf844b8b8a30a9f3797c1b4322b11d93b71f9d52c90deb31a8f57273b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f3882c4548c949ecb899e6610ad9663de5b1ccb141b25e86af1471acad501b9a\"" Dec 13 02:13:59.414923 env[1210]: time="2024-12-13T02:13:59.414860113Z" level=info msg="StartContainer for \"f3882c4548c949ecb899e6610ad9663de5b1ccb141b25e86af1471acad501b9a\"" Dec 13 02:13:59.418865 env[1210]: time="2024-12-13T02:13:59.418806637Z" level=info msg="CreateContainer within sandbox \"a5d103f09cbeda2cf0721a6e08912e5ebbdf72a79af88c6f43bccd13efbf9f68\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f2d490d7eb7b01fc98a5873a080ca75f54888b9b3369e4d10e10b375b92b29a0\"" Dec 13 02:13:59.419465 env[1210]: time="2024-12-13T02:13:59.419416829Z" level=info msg="StartContainer for \"f2d490d7eb7b01fc98a5873a080ca75f54888b9b3369e4d10e10b375b92b29a0\"" Dec 13 02:13:59.446481 systemd[1]: Started cri-containerd-f3882c4548c949ecb899e6610ad9663de5b1ccb141b25e86af1471acad501b9a.scope. Dec 13 02:13:59.472639 systemd[1]: Started cri-containerd-f2d490d7eb7b01fc98a5873a080ca75f54888b9b3369e4d10e10b375b92b29a0.scope. Dec 13 02:13:59.568802 env[1210]: time="2024-12-13T02:13:59.566315782Z" level=info msg="StartContainer for \"f3882c4548c949ecb899e6610ad9663de5b1ccb141b25e86af1471acad501b9a\" returns successfully" Dec 13 02:13:59.583396 env[1210]: time="2024-12-13T02:13:59.583245742Z" level=info msg="StartContainer for \"4ec7bbc6a16b8778a827b4dd14b283e3b354e8bf287cb75020908df3b9822e6f\" returns successfully" Dec 13 02:13:59.600510 env[1210]: time="2024-12-13T02:13:59.600440192Z" level=info msg="StartContainer for \"f2d490d7eb7b01fc98a5873a080ca75f54888b9b3369e4d10e10b375b92b29a0\" returns successfully" Dec 13 02:13:59.612066 kubelet[1723]: E1213 02:13:59.611999 1723 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.53:6443: connect: connection refused" interval="1.6s" Dec 13 02:13:59.641186 kubelet[1723]: W1213 02:13:59.641096 1723 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:59.641186 kubelet[1723]: E1213 02:13:59.641195 1723 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.53:6443: connect: connection refused Dec 13 02:13:59.732713 kubelet[1723]: I1213 02:13:59.732668 1723 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:13:59.733121 kubelet[1723]: E1213 02:13:59.733081 1723 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.53:6443/api/v1/nodes\": dial tcp 10.128.0.53:6443: connect: connection refused" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:01.338984 kubelet[1723]: I1213 02:14:01.338942 1723 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:02.572026 
kubelet[1723]: E1213 02:14:02.571968 1723 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" not found" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:02.742637 kubelet[1723]: I1213 02:14:02.742553 1723 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:03.180443 kubelet[1723]: I1213 02:14:03.180395 1723 apiserver.go:52] "Watching apiserver" Dec 13 02:14:03.208334 kubelet[1723]: I1213 02:14:03.207560 1723 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:14:04.804271 kubelet[1723]: W1213 02:14:04.804224 1723 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:14:05.176212 systemd[1]: Reloading. Dec 13 02:14:05.314929 /usr/lib/systemd/system-generators/torcx-generator[2019]: time="2024-12-13T02:14:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:14:05.317110 /usr/lib/systemd/system-generators/torcx-generator[2019]: time="2024-12-13T02:14:05Z" level=info msg="torcx already run" Dec 13 02:14:05.415826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:14:05.415855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:14:05.440945 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:14:05.598944 kubelet[1723]: E1213 02:14:05.598743 1723 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal.18109ac780b4f60c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,UID:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,},FirstTimestamp:2024-12-13 02:13:58.186153484 +0000 UTC m=+0.979757387,LastTimestamp:2024-12-13 02:13:58.186153484 +0000 UTC m=+0.979757387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal,}" Dec 13 02:14:05.601788 systemd[1]: Stopping kubelet.service... Dec 13 02:14:05.616296 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:14:05.616619 systemd[1]: Stopped kubelet.service. Dec 13 02:14:05.616734 systemd[1]: kubelet.service: Consumed 1.439s CPU time. Dec 13 02:14:05.619457 systemd[1]: Starting kubelet.service... Dec 13 02:14:05.868485 systemd[1]: Started kubelet.service. 
Dec 13 02:14:05.964927 kubelet[2067]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:14:05.965352 kubelet[2067]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:14:05.965438 kubelet[2067]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:14:05.965592 kubelet[2067]: I1213 02:14:05.965561 2067 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:14:05.974780 kubelet[2067]: I1213 02:14:05.974730 2067 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:14:05.975020 kubelet[2067]: I1213 02:14:05.974999 2067 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:14:05.975546 kubelet[2067]: I1213 02:14:05.975511 2067 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:14:05.977661 kubelet[2067]: I1213 02:14:05.977574 2067 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:14:05.982194 kubelet[2067]: I1213 02:14:05.982162 2067 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:14:05.999163 kubelet[2067]: I1213 02:14:05.999104 2067 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:14:05.999697 kubelet[2067]: I1213 02:14:05.999645 2067 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:14:06.000021 kubelet[2067]: I1213 02:14:05.999815 2067 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:14:06.000276 kubelet[2067]: I1213 02:14:06.000262 2067 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:14:06.000380 kubelet[2067]: I1213 02:14:06.000369 2067 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:14:06.000501 kubelet[2067]: I1213 02:14:06.000490 2067 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:14:06.000711 kubelet[2067]: I1213 02:14:06.000698 2067 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:14:06.001510 kubelet[2067]: I1213 02:14:06.001489 2067 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:14:06.001696 kubelet[2067]: I1213 02:14:06.001683 2067 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:14:06.006757 kubelet[2067]: I1213 02:14:06.006720 2067 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:14:06.008154 kubelet[2067]: I1213 02:14:06.008130 2067 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:14:06.008447 kubelet[2067]: I1213 02:14:06.008435 2067 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:14:06.009088 kubelet[2067]: I1213 02:14:06.009066 2067 server.go:1264] "Started kubelet" Dec 13 02:14:06.012117 kubelet[2067]: I1213 02:14:06.012076 2067 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:14:06.016547 kubelet[2067]: I1213 02:14:06.016499 2067 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:14:06.018236 kubelet[2067]: I1213 02:14:06.018208 
2067 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:14:06.025917 kubelet[2067]: I1213 02:14:06.025873 2067 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:14:06.031491 kubelet[2067]: I1213 02:14:06.021798 2067 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:14:06.032142 kubelet[2067]: I1213 02:14:06.032116 2067 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:14:06.032360 kubelet[2067]: I1213 02:14:06.029695 2067 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:14:06.032787 kubelet[2067]: I1213 02:14:06.032768 2067 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:14:06.035184 kubelet[2067]: I1213 02:14:06.035134 2067 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:14:06.050201 kubelet[2067]: I1213 02:14:06.046709 2067 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:14:06.053406 kubelet[2067]: E1213 02:14:06.053360 2067 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:14:06.057696 kubelet[2067]: I1213 02:14:06.057656 2067 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:14:06.057884 kubelet[2067]: I1213 02:14:06.057744 2067 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:14:06.065732 kubelet[2067]: I1213 02:14:06.065698 2067 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:14:06.066015 kubelet[2067]: I1213 02:14:06.065994 2067 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:14:06.066182 kubelet[2067]: I1213 02:14:06.066166 2067 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:14:06.066392 kubelet[2067]: E1213 02:14:06.066365 2067 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:14:06.129093 kubelet[2067]: I1213 02:14:06.127212 2067 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:14:06.129093 kubelet[2067]: I1213 02:14:06.127251 2067 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:14:06.129093 kubelet[2067]: I1213 02:14:06.127278 2067 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:14:06.129093 kubelet[2067]: I1213 02:14:06.127508 2067 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:14:06.129093 kubelet[2067]: I1213 02:14:06.127525 2067 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:14:06.129093 kubelet[2067]: I1213 02:14:06.127558 2067 policy_none.go:49] "None policy: Start" Dec 13 02:14:06.129527 kubelet[2067]: I1213 02:14:06.129235 2067 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:14:06.129527 kubelet[2067]: I1213 02:14:06.129264 2067 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:14:06.129653 kubelet[2067]: I1213 02:14:06.129614 2067 state_mem.go:75] "Updated machine memory state" Dec 13 02:14:06.136926 kubelet[2067]: I1213 02:14:06.136891 2067 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.139658 kubelet[2067]: I1213 02:14:06.139576 2067 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:14:06.140142 kubelet[2067]: I1213 02:14:06.139979 2067 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 02:14:06.140142 kubelet[2067]: I1213 02:14:06.140134 2067 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:14:06.161818 kubelet[2067]: I1213 02:14:06.161783 2067 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.162154 kubelet[2067]: I1213 02:14:06.162135 2067 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.167885 kubelet[2067]: I1213 02:14:06.167818 2067 topology_manager.go:215] "Topology Admit Handler" podUID="74dd9f637943e1a723c7e5e572bb730e" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.169717 kubelet[2067]: I1213 02:14:06.169681 2067 topology_manager.go:215] "Topology Admit Handler" podUID="7c7f9418ac82083d62aff22e0f7130ea" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.170025 kubelet[2067]: I1213 02:14:06.169997 2067 topology_manager.go:215] "Topology Admit Handler" podUID="1f9de9593e7b0dc8b5f912aa4eaabda8" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.186087 kubelet[2067]: W1213 02:14:06.186048 2067 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:14:06.194055 kubelet[2067]: W1213 02:14:06.194018 2067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:14:06.194545 kubelet[2067]: W1213 02:14:06.194258 2067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:14:06.194818 kubelet[2067]: E1213 02:14:06.194788 2067 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.196800 sudo[2099]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:14:06.197254 sudo[2099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:14:06.233859 kubelet[2067]: I1213 02:14:06.233814 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/74dd9f637943e1a723c7e5e572bb730e-ca-certs\") pod \"kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"74dd9f637943e1a723c7e5e572bb730e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.334372 kubelet[2067]: I1213 02:14:06.334290 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/74dd9f637943e1a723c7e5e572bb730e-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"74dd9f637943e1a723c7e5e572bb730e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.334614 kubelet[2067]: I1213 02:14:06.334381 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-ca-certs\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.334614 kubelet[2067]: I1213 02:14:06.334461 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.334614 kubelet[2067]: I1213 02:14:06.334511 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-kubeconfig\") pod 
\"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.334614 kubelet[2067]: I1213 02:14:06.334583 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f9de9593e7b0dc8b5f912aa4eaabda8-kubeconfig\") pod \"kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"1f9de9593e7b0dc8b5f912aa4eaabda8\") " pod="kube-system/kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.334890 kubelet[2067]: I1213 02:14:06.334613 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/74dd9f637943e1a723c7e5e572bb730e-k8s-certs\") pod \"kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"74dd9f637943e1a723c7e5e572bb730e\") " pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.334890 kubelet[2067]: I1213 02:14:06.334755 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.334890 kubelet[2067]: I1213 02:14:06.334789 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c7f9418ac82083d62aff22e0f7130ea-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" (UID: \"7c7f9418ac82083d62aff22e0f7130ea\") " pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:06.953124 sudo[2099]: pam_unix(sudo:session): session closed for user root Dec 13 02:14:07.008284 kubelet[2067]: I1213 02:14:07.008228 2067 apiserver.go:52] "Watching apiserver" Dec 13 02:14:07.032823 kubelet[2067]: I1213 02:14:07.032769 2067 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:14:07.118202 kubelet[2067]: W1213 02:14:07.118159 2067 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Dec 13 02:14:07.118425 kubelet[2067]: E1213 02:14:07.118257 2067 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" Dec 13 02:14:07.161684 kubelet[2067]: I1213 02:14:07.161561 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" podStartSLOduration=3.161536991 podStartE2EDuration="3.161536991s" podCreationTimestamp="2024-12-13 02:14:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:14:07.159480529 +0000 UTC m=+1.281506517" watchObservedRunningTime="2024-12-13 02:14:07.161536991 +0000 UTC m=+1.283562971" Dec 13 02:14:07.161959 kubelet[2067]: I1213 02:14:07.161809 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" podStartSLOduration=1.161781238 podStartE2EDuration="1.161781238s" podCreationTimestamp="2024-12-13 02:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:14:07.148160059 +0000 UTC m=+1.270186061" watchObservedRunningTime="2024-12-13 02:14:07.161781238 +0000 UTC m=+1.283807223" Dec 13 02:14:07.175362 kubelet[2067]: I1213 02:14:07.175263 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" podStartSLOduration=1.175242552 podStartE2EDuration="1.175242552s" podCreationTimestamp="2024-12-13 02:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:14:07.17406599 +0000 UTC m=+1.296091979" watchObservedRunningTime="2024-12-13 02:14:07.175242552 +0000 UTC m=+1.297268531" Dec 13 02:14:09.152179 sudo[1393]: pam_unix(sudo:session): session closed for user root Dec 13 02:14:09.196000 sshd[1390]: pam_unix(sshd:session): session closed for user core Dec 13 02:14:09.200970 systemd-logind[1219]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:14:09.201256 systemd[1]: sshd@4-10.128.0.53:22-139.178.68.195:60424.service: Deactivated successfully. Dec 13 02:14:09.202518 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:14:09.202783 systemd[1]: session-5.scope: Consumed 6.421s CPU time. Dec 13 02:14:09.204150 systemd-logind[1219]: Removed session 5. Dec 13 02:14:12.792780 update_engine[1200]: I1213 02:14:12.792707 1200 update_attempter.cc:509] Updating boot flags... Dec 13 02:14:19.194144 kubelet[2067]: I1213 02:14:19.194084 2067 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:14:19.195052 env[1210]: time="2024-12-13T02:14:19.195001834Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:14:19.195548 kubelet[2067]: I1213 02:14:19.195298 2067 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:14:19.996790 kubelet[2067]: I1213 02:14:19.996725 2067 topology_manager.go:215] "Topology Admit Handler" podUID="05bd72ba-e412-43dc-8698-698d74abaec0" podNamespace="kube-system" podName="kube-proxy-pjrd5" Dec 13 02:14:20.006613 systemd[1]: Created slice kubepods-besteffort-pod05bd72ba_e412_43dc_8698_698d74abaec0.slice. 
Dec 13 02:14:20.019328 kubelet[2067]: I1213 02:14:20.019285 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/05bd72ba-e412-43dc-8698-698d74abaec0-kube-proxy\") pod \"kube-proxy-pjrd5\" (UID: \"05bd72ba-e412-43dc-8698-698d74abaec0\") " pod="kube-system/kube-proxy-pjrd5" Dec 13 02:14:20.019731 kubelet[2067]: I1213 02:14:20.019706 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05bd72ba-e412-43dc-8698-698d74abaec0-xtables-lock\") pod \"kube-proxy-pjrd5\" (UID: \"05bd72ba-e412-43dc-8698-698d74abaec0\") " pod="kube-system/kube-proxy-pjrd5" Dec 13 02:14:20.019974 kubelet[2067]: I1213 02:14:20.019943 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05bd72ba-e412-43dc-8698-698d74abaec0-lib-modules\") pod \"kube-proxy-pjrd5\" (UID: \"05bd72ba-e412-43dc-8698-698d74abaec0\") " pod="kube-system/kube-proxy-pjrd5" Dec 13 02:14:20.020173 kubelet[2067]: I1213 02:14:20.020146 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j9ff\" (UniqueName: \"kubernetes.io/projected/05bd72ba-e412-43dc-8698-698d74abaec0-kube-api-access-4j9ff\") pod \"kube-proxy-pjrd5\" (UID: \"05bd72ba-e412-43dc-8698-698d74abaec0\") " pod="kube-system/kube-proxy-pjrd5" Dec 13 02:14:20.028118 kubelet[2067]: I1213 02:14:20.028069 2067 topology_manager.go:215] "Topology Admit Handler" podUID="f8b8b38a-559a-42ba-860f-fb6b85c6005b" podNamespace="kube-system" podName="cilium-5994m" Dec 13 02:14:20.036377 systemd[1]: Created slice kubepods-burstable-podf8b8b38a_559a_42ba_860f_fb6b85c6005b.slice. 
Dec 13 02:14:20.121470 kubelet[2067]: I1213 02:14:20.121422 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-bpf-maps\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.121740 kubelet[2067]: I1213 02:14:20.121593 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-config-path\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.121740 kubelet[2067]: I1213 02:14:20.121674 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-etc-cni-netd\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.121889 kubelet[2067]: I1213 02:14:20.121704 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8b8b38a-559a-42ba-860f-fb6b85c6005b-hubble-tls\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.121889 kubelet[2067]: I1213 02:14:20.121839 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cni-path\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122087 kubelet[2067]: I1213 02:14:20.121868 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8b8b38a-559a-42ba-860f-fb6b85c6005b-clustermesh-secrets\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122209 kubelet[2067]: I1213 02:14:20.122105 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-host-proc-sys-kernel\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122277 kubelet[2067]: I1213 02:14:20.122252 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-host-proc-sys-net\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122381 kubelet[2067]: I1213 02:14:20.122355 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5r4\" (UniqueName: \"kubernetes.io/projected/f8b8b38a-559a-42ba-860f-fb6b85c6005b-kube-api-access-8p5r4\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122537 kubelet[2067]: I1213 02:14:20.122511 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-xtables-lock\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122650 kubelet[2067]: I1213 02:14:20.122607 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-run\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122781 kubelet[2067]: I1213 02:14:20.122755 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-hostproc\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122879 kubelet[2067]: I1213 02:14:20.122830 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-cgroup\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.122879 kubelet[2067]: I1213 02:14:20.122862 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-lib-modules\") pod \"cilium-5994m\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " pod="kube-system/cilium-5994m" Dec 13 02:14:20.219354 kubelet[2067]: I1213 02:14:20.219301 2067 topology_manager.go:215] "Topology Admit Handler" podUID="ba016148-1efc-41ec-81b4-89c243fc81e7" podNamespace="kube-system" podName="cilium-operator-599987898-9bdpg" Dec 13 02:14:20.229039 systemd[1]: Created slice kubepods-besteffort-podba016148_1efc_41ec_81b4_89c243fc81e7.slice. Dec 13 02:14:20.314571 env[1210]: time="2024-12-13T02:14:20.314485374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjrd5,Uid:05bd72ba-e412-43dc-8698-698d74abaec0,Namespace:kube-system,Attempt:0,}" Dec 13 02:14:20.327023 kubelet[2067]: I1213 02:14:20.326965 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba016148-1efc-41ec-81b4-89c243fc81e7-cilium-config-path\") pod \"cilium-operator-599987898-9bdpg\" (UID: \"ba016148-1efc-41ec-81b4-89c243fc81e7\") " pod="kube-system/cilium-operator-599987898-9bdpg" Dec 13 02:14:20.332477 kubelet[2067]: I1213 02:14:20.332430 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn4f4\" (UniqueName: \"kubernetes.io/projected/ba016148-1efc-41ec-81b4-89c243fc81e7-kube-api-access-cn4f4\") pod \"cilium-operator-599987898-9bdpg\" (UID: \"ba016148-1efc-41ec-81b4-89c243fc81e7\") " pod="kube-system/cilium-operator-599987898-9bdpg" Dec 13 02:14:20.340783 env[1210]: time="2024-12-13T02:14:20.340721022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5994m,Uid:f8b8b38a-559a-42ba-860f-fb6b85c6005b,Namespace:kube-system,Attempt:0,}" Dec 13 02:14:20.345133 env[1210]: time="2024-12-13T02:14:20.344884430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:14:20.345133 env[1210]: time="2024-12-13T02:14:20.344953087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:14:20.345133 env[1210]: time="2024-12-13T02:14:20.344983472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:14:20.345430 env[1210]: time="2024-12-13T02:14:20.345228618Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb291185057d147650575502b676440947b5f22f0e6eeddc0db95c21ef6546a5 pid=2165 runtime=io.containerd.runc.v2 Dec 13 02:14:20.364606 systemd[1]: Started cri-containerd-bb291185057d147650575502b676440947b5f22f0e6eeddc0db95c21ef6546a5.scope. Dec 13 02:14:20.387983 env[1210]: time="2024-12-13T02:14:20.387885743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:14:20.388340 env[1210]: time="2024-12-13T02:14:20.388239805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:14:20.388546 env[1210]: time="2024-12-13T02:14:20.388508560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:14:20.389091 env[1210]: time="2024-12-13T02:14:20.389024325Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3 pid=2197 runtime=io.containerd.runc.v2 Dec 13 02:14:20.424652 systemd[1]: Started cri-containerd-3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3.scope. 
Dec 13 02:14:20.446496 env[1210]: time="2024-12-13T02:14:20.446432644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjrd5,Uid:05bd72ba-e412-43dc-8698-698d74abaec0,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb291185057d147650575502b676440947b5f22f0e6eeddc0db95c21ef6546a5\"" Dec 13 02:14:20.455109 env[1210]: time="2024-12-13T02:14:20.455034935Z" level=info msg="CreateContainer within sandbox \"bb291185057d147650575502b676440947b5f22f0e6eeddc0db95c21ef6546a5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:14:20.482045 env[1210]: time="2024-12-13T02:14:20.481975902Z" level=info msg="CreateContainer within sandbox \"bb291185057d147650575502b676440947b5f22f0e6eeddc0db95c21ef6546a5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96cb458dd66edef044493effd83c84cdf8512bf9615005012266f3e15234f9a3\"" Dec 13 02:14:20.485987 env[1210]: time="2024-12-13T02:14:20.485000544Z" level=info msg="StartContainer for \"96cb458dd66edef044493effd83c84cdf8512bf9615005012266f3e15234f9a3\"" Dec 13 02:14:20.489184 env[1210]: time="2024-12-13T02:14:20.489126537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5994m,Uid:f8b8b38a-559a-42ba-860f-fb6b85c6005b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\"" Dec 13 02:14:20.494837 env[1210]: time="2024-12-13T02:14:20.494765634Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:14:20.515427 systemd[1]: Started cri-containerd-96cb458dd66edef044493effd83c84cdf8512bf9615005012266f3e15234f9a3.scope. Dec 13 02:14:20.564054 env[1210]: time="2024-12-13T02:14:20.563921117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9bdpg,Uid:ba016148-1efc-41ec-81b4-89c243fc81e7,Namespace:kube-system,Attempt:0,}" Dec 13 02:14:20.564688 env[1210]: time="2024-12-13T02:14:20.564555035Z" level=info msg="StartContainer for \"96cb458dd66edef044493effd83c84cdf8512bf9615005012266f3e15234f9a3\" returns successfully" Dec 13 02:14:20.586587 env[1210]: time="2024-12-13T02:14:20.586481959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:14:20.586950 env[1210]: time="2024-12-13T02:14:20.586860277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:14:20.586950 env[1210]: time="2024-12-13T02:14:20.586892330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:14:20.587532 env[1210]: time="2024-12-13T02:14:20.587449379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a pid=2283 runtime=io.containerd.runc.v2 Dec 13 02:14:20.619726 systemd[1]: Started cri-containerd-b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a.scope. 
Dec 13 02:14:20.712549 env[1210]: time="2024-12-13T02:14:20.712433851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-9bdpg,Uid:ba016148-1efc-41ec-81b4-89c243fc81e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\"" Dec 13 02:14:26.089663 kubelet[2067]: I1213 02:14:26.089349 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pjrd5" podStartSLOduration=7.089324019 podStartE2EDuration="7.089324019s" podCreationTimestamp="2024-12-13 02:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:14:21.16317456 +0000 UTC m=+15.285200551" watchObservedRunningTime="2024-12-13 02:14:26.089324019 +0000 UTC m=+20.211350010" Dec 13 02:14:26.520564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount131491620.mount: Deactivated successfully. Dec 13 02:14:29.941467 env[1210]: time="2024-12-13T02:14:29.941394036Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:29.944760 env[1210]: time="2024-12-13T02:14:29.944708049Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:29.948084 env[1210]: time="2024-12-13T02:14:29.948034730Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:29.948966 env[1210]: time="2024-12-13T02:14:29.948906811Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:14:29.951854 env[1210]: time="2024-12-13T02:14:29.951810535Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:14:29.954605 env[1210]: time="2024-12-13T02:14:29.954547523Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:14:29.974146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount673784854.mount: Deactivated successfully. Dec 13 02:14:29.985312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3852417020.mount: Deactivated successfully. Dec 13 02:14:29.990396 env[1210]: time="2024-12-13T02:14:29.990332243Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\"" Dec 13 02:14:29.992801 env[1210]: time="2024-12-13T02:14:29.991395583Z" level=info msg="StartContainer for \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\"" Dec 13 02:14:30.021073 systemd[1]: Started cri-containerd-8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496.scope. 
Dec 13 02:14:30.072933 env[1210]: time="2024-12-13T02:14:30.072877989Z" level=info msg="StartContainer for \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\" returns successfully" Dec 13 02:14:30.083003 systemd[1]: cri-containerd-8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496.scope: Deactivated successfully. Dec 13 02:14:30.968258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496-rootfs.mount: Deactivated successfully. Dec 13 02:14:31.913331 env[1210]: time="2024-12-13T02:14:31.913256034Z" level=info msg="shim disconnected" id=8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496 Dec 13 02:14:31.913331 env[1210]: time="2024-12-13T02:14:31.913320723Z" level=warning msg="cleaning up after shim disconnected" id=8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496 namespace=k8s.io Dec 13 02:14:31.913331 env[1210]: time="2024-12-13T02:14:31.913338999Z" level=info msg="cleaning up dead shim" Dec 13 02:14:31.926152 env[1210]: time="2024-12-13T02:14:31.926076141Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:14:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2489 runtime=io.containerd.runc.v2\n" Dec 13 02:14:32.174684 env[1210]: time="2024-12-13T02:14:32.174254405Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:14:32.214386 env[1210]: time="2024-12-13T02:14:32.206903116Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\"" Dec 13 02:14:32.214386 env[1210]: time="2024-12-13T02:14:32.213329417Z" level=info msg="StartContainer for \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\"" Dec 13 02:14:32.212904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount809905574.mount: Deactivated successfully. Dec 13 02:14:32.251821 systemd[1]: Started cri-containerd-83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3.scope. Dec 13 02:14:32.291602 env[1210]: time="2024-12-13T02:14:32.291544898Z" level=info msg="StartContainer for \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\" returns successfully" Dec 13 02:14:32.308084 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:14:32.308498 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:14:32.309657 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:14:32.315792 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:14:32.316428 systemd[1]: cri-containerd-83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3.scope: Deactivated successfully. Dec 13 02:14:32.334465 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 02:14:32.350908 env[1210]: time="2024-12-13T02:14:32.350844120Z" level=info msg="shim disconnected" id=83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3 Dec 13 02:14:32.350908 env[1210]: time="2024-12-13T02:14:32.350908295Z" level=warning msg="cleaning up after shim disconnected" id=83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3 namespace=k8s.io Dec 13 02:14:32.350908 env[1210]: time="2024-12-13T02:14:32.350922408Z" level=info msg="cleaning up dead shim" Dec 13 02:14:32.362005 env[1210]: time="2024-12-13T02:14:32.361938018Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:14:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2553 runtime=io.containerd.runc.v2\n" Dec 13 02:14:33.176522 env[1210]: time="2024-12-13T02:14:33.176026725Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:14:33.193969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3-rootfs.mount: Deactivated successfully. Dec 13 02:14:33.219804 env[1210]: time="2024-12-13T02:14:33.219737283Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\"" Dec 13 02:14:33.228925 env[1210]: time="2024-12-13T02:14:33.228872238Z" level=info msg="StartContainer for \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\"" Dec 13 02:14:33.268864 systemd[1]: Started cri-containerd-a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05.scope. Dec 13 02:14:33.314284 env[1210]: time="2024-12-13T02:14:33.314218462Z" level=info msg="StartContainer for \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\" returns successfully" Dec 13 02:14:33.324683 systemd[1]: cri-containerd-a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05.scope: Deactivated successfully. Dec 13 02:14:33.355324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05-rootfs.mount: Deactivated successfully. 
Dec 13 02:14:33.361426 env[1210]: time="2024-12-13T02:14:33.361350904Z" level=info msg="shim disconnected" id=a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05 Dec 13 02:14:33.361426 env[1210]: time="2024-12-13T02:14:33.361418519Z" level=warning msg="cleaning up after shim disconnected" id=a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05 namespace=k8s.io Dec 13 02:14:33.361426 env[1210]: time="2024-12-13T02:14:33.361434534Z" level=info msg="cleaning up dead shim" Dec 13 02:14:33.374444 env[1210]: time="2024-12-13T02:14:33.374331088Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:14:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2609 runtime=io.containerd.runc.v2\n" Dec 13 02:14:34.184985 env[1210]: time="2024-12-13T02:14:34.184916979Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:14:34.215296 env[1210]: time="2024-12-13T02:14:34.215234184Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\"" Dec 13 02:14:34.216404 env[1210]: time="2024-12-13T02:14:34.216363002Z" level=info msg="StartContainer for \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\"" Dec 13 02:14:34.261710 systemd[1]: Started cri-containerd-80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c.scope. Dec 13 02:14:34.275521 systemd[1]: run-containerd-runc-k8s.io-80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c-runc.8ulHff.mount: Deactivated successfully. Dec 13 02:14:34.314175 systemd[1]: cri-containerd-80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c.scope: Deactivated successfully. Dec 13 02:14:34.315617 env[1210]: time="2024-12-13T02:14:34.315558886Z" level=info msg="StartContainer for \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\" returns successfully" Dec 13 02:14:34.349879 env[1210]: time="2024-12-13T02:14:34.349813298Z" level=info msg="shim disconnected" id=80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c Dec 13 02:14:34.349879 env[1210]: time="2024-12-13T02:14:34.349880378Z" level=warning msg="cleaning up after shim disconnected" id=80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c namespace=k8s.io Dec 13 02:14:34.350310 env[1210]: time="2024-12-13T02:14:34.349894835Z" level=info msg="cleaning up dead shim" Dec 13 02:14:34.361556 env[1210]: time="2024-12-13T02:14:34.361493407Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:14:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2665 runtime=io.containerd.runc.v2\n" Dec 13 02:14:35.195192 env[1210]: time="2024-12-13T02:14:35.195132772Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:14:35.202701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c-rootfs.mount: Deactivated successfully. Dec 13 02:14:35.233650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640746880.mount: Deactivated successfully. 
Dec 13 02:14:35.240146 env[1210]: time="2024-12-13T02:14:35.240052506Z" level=info msg="CreateContainer within sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\"" Dec 13 02:14:35.241285 env[1210]: time="2024-12-13T02:14:35.241217419Z" level=info msg="StartContainer for \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\"" Dec 13 02:14:35.280068 systemd[1]: Started cri-containerd-49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63.scope. Dec 13 02:14:35.348200 env[1210]: time="2024-12-13T02:14:35.348134929Z" level=info msg="StartContainer for \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\" returns successfully" Dec 13 02:14:35.722920 kubelet[2067]: I1213 02:14:35.722045 2067 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:14:35.803217 kubelet[2067]: I1213 02:14:35.802075 2067 topology_manager.go:215] "Topology Admit Handler" podUID="8f4856a7-1f14-4ac0-b1e4-8514e43c3618" podNamespace="kube-system" podName="coredns-7db6d8ff4d-25rmx" Dec 13 02:14:35.819206 systemd[1]: Created slice kubepods-burstable-pod8f4856a7_1f14_4ac0_b1e4_8514e43c3618.slice. Dec 13 02:14:35.839576 kubelet[2067]: I1213 02:14:35.839528 2067 topology_manager.go:215] "Topology Admit Handler" podUID="7ed07168-b94f-40ff-88a0-490c50c1313e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-xwxxb" Dec 13 02:14:35.852099 systemd[1]: Created slice kubepods-burstable-pod7ed07168_b94f_40ff_88a0_490c50c1313e.slice. Dec 13 02:14:35.868969 kubelet[2067]: I1213 02:14:35.868920 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6rbx\" (UniqueName: \"kubernetes.io/projected/8f4856a7-1f14-4ac0-b1e4-8514e43c3618-kube-api-access-r6rbx\") pod \"coredns-7db6d8ff4d-25rmx\" (UID: \"8f4856a7-1f14-4ac0-b1e4-8514e43c3618\") " pod="kube-system/coredns-7db6d8ff4d-25rmx" Dec 13 02:14:35.869303 kubelet[2067]: I1213 02:14:35.869224 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf2nm\" (UniqueName: \"kubernetes.io/projected/7ed07168-b94f-40ff-88a0-490c50c1313e-kube-api-access-nf2nm\") pod \"coredns-7db6d8ff4d-xwxxb\" (UID: \"7ed07168-b94f-40ff-88a0-490c50c1313e\") " pod="kube-system/coredns-7db6d8ff4d-xwxxb" Dec 13 02:14:35.869538 kubelet[2067]: I1213 02:14:35.869485 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f4856a7-1f14-4ac0-b1e4-8514e43c3618-config-volume\") pod \"coredns-7db6d8ff4d-25rmx\" (UID: \"8f4856a7-1f14-4ac0-b1e4-8514e43c3618\") " pod="kube-system/coredns-7db6d8ff4d-25rmx" Dec 13 02:14:35.869738 kubelet[2067]: I1213 02:14:35.869706 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ed07168-b94f-40ff-88a0-490c50c1313e-config-volume\") pod \"coredns-7db6d8ff4d-xwxxb\" (UID: \"7ed07168-b94f-40ff-88a0-490c50c1313e\") " pod="kube-system/coredns-7db6d8ff4d-xwxxb" Dec 13 02:14:36.136755 env[1210]: time="2024-12-13T02:14:36.136689219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-25rmx,Uid:8f4856a7-1f14-4ac0-b1e4-8514e43c3618,Namespace:kube-system,Attempt:0,}" Dec 13 02:14:36.172700 env[1210]: 
time="2024-12-13T02:14:36.172598692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xwxxb,Uid:7ed07168-b94f-40ff-88a0-490c50c1313e,Namespace:kube-system,Attempt:0,}" Dec 13 02:14:36.862521 env[1210]: time="2024-12-13T02:14:36.862453332Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:36.865304 env[1210]: time="2024-12-13T02:14:36.865248356Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:36.867689 env[1210]: time="2024-12-13T02:14:36.867617808Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:14:36.868477 env[1210]: time="2024-12-13T02:14:36.868423947Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:14:36.872818 env[1210]: time="2024-12-13T02:14:36.872764234Z" level=info msg="CreateContainer within sandbox \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:14:36.893896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1161837891.mount: Deactivated successfully. Dec 13 02:14:36.901378 env[1210]: time="2024-12-13T02:14:36.901317453Z" level=info msg="CreateContainer within sandbox \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\"" Dec 13 02:14:36.904085 env[1210]: time="2024-12-13T02:14:36.904015550Z" level=info msg="StartContainer for \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\"" Dec 13 02:14:36.929651 systemd[1]: Started cri-containerd-d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5.scope. 
Dec 13 02:14:36.979886 env[1210]: time="2024-12-13T02:14:36.979823504Z" level=info msg="StartContainer for \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\" returns successfully" Dec 13 02:14:37.295860 kubelet[2067]: I1213 02:14:37.295607 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5994m" podStartSLOduration=8.836438809 podStartE2EDuration="18.295579695s" podCreationTimestamp="2024-12-13 02:14:19 +0000 UTC" firstStartedPulling="2024-12-13 02:14:20.491303929 +0000 UTC m=+14.613329911" lastFinishedPulling="2024-12-13 02:14:29.950444829 +0000 UTC m=+24.072470797" observedRunningTime="2024-12-13 02:14:36.315336309 +0000 UTC m=+30.437362300" watchObservedRunningTime="2024-12-13 02:14:37.295579695 +0000 UTC m=+31.417605685" Dec 13 02:14:41.137674 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:14:41.145820 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:14:41.145597 systemd-networkd[1023]: cilium_host: Link UP Dec 13 02:14:41.147164 systemd-networkd[1023]: cilium_net: Link UP Dec 13 02:14:41.147499 systemd-networkd[1023]: cilium_net: Gained carrier Dec 13 02:14:41.147824 systemd-networkd[1023]: cilium_host: Gained carrier Dec 13 02:14:41.148293 systemd-networkd[1023]: cilium_net: Gained IPv6LL Dec 13 02:14:41.288208 systemd-networkd[1023]: cilium_vxlan: Link UP Dec 13 02:14:41.288221 systemd-networkd[1023]: cilium_vxlan: Gained carrier Dec 13 02:14:41.573670 kernel: NET: Registered PF_ALG protocol family Dec 13 02:14:41.961259 systemd-networkd[1023]: cilium_host: Gained IPv6LL Dec 13 02:14:42.345243 systemd-networkd[1023]: cilium_vxlan: Gained IPv6LL Dec 13 02:14:42.472139 systemd-networkd[1023]: lxc_health: Link UP Dec 13 02:14:42.487664 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:14:42.490105 systemd-networkd[1023]: lxc_health: Gained carrier Dec 13 02:14:42.808792 systemd-networkd[1023]: lxc45a9fd44becb: Link UP Dec 13 02:14:42.818776 kernel: eth0: renamed from tmpb34e6 Dec 13 02:14:42.831663 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc45a9fd44becb: link becomes ready Dec 13 02:14:42.835989 systemd-networkd[1023]: lxc45a9fd44becb: Gained carrier Dec 13 02:14:42.841746 systemd-networkd[1023]: lxcc7cd413b8a6a: Link UP Dec 13 02:14:42.871900 kernel: eth0: renamed from tmp26849 Dec 13 02:14:42.889276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc7cd413b8a6a: link becomes ready Dec 13 02:14:42.897087 systemd-networkd[1023]: lxcc7cd413b8a6a: Gained carrier Dec 13 02:14:44.073473 systemd-networkd[1023]: lxcc7cd413b8a6a: Gained IPv6LL Dec 13 02:14:44.201432 systemd-networkd[1023]: lxc45a9fd44becb: Gained IPv6LL Dec 13 02:14:44.329540 systemd-networkd[1023]: lxc_health: Gained IPv6LL Dec 13 02:14:44.387915 kubelet[2067]: I1213 02:14:44.387828 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-9bdpg" podStartSLOduration=8.232368246 podStartE2EDuration="24.387800478s" podCreationTimestamp="2024-12-13 02:14:20 +0000 UTC" firstStartedPulling="2024-12-13 02:14:20.714421293 +0000 UTC m=+14.836447261" lastFinishedPulling="2024-12-13 02:14:36.869853512 +0000 UTC m=+30.991879493" observedRunningTime="2024-12-13 02:14:37.297172402 +0000 UTC m=+31.419198391" watchObservedRunningTime="2024-12-13 02:14:44.387800478 +0000 UTC m=+38.509826466" Dec 13 02:14:48.043351 env[1210]: time="2024-12-13T02:14:48.043251827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:14:48.043965 env[1210]: time="2024-12-13T02:14:48.043363746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:14:48.043965 env[1210]: time="2024-12-13T02:14:48.043403584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:14:48.043965 env[1210]: time="2024-12-13T02:14:48.043609960Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/26849dca72aaa0e52fe3e482ce98ea8ad401b8197017ebde3a21d75e4b59895f pid=3244 runtime=io.containerd.runc.v2 Dec 13 02:14:48.066378 env[1210]: time="2024-12-13T02:14:48.066246529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:14:48.066602 env[1210]: time="2024-12-13T02:14:48.066381755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:14:48.066602 env[1210]: time="2024-12-13T02:14:48.066423089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:14:48.066808 env[1210]: time="2024-12-13T02:14:48.066619124Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b34e649cae9a473cbe0ace2a4e4554c0e75986492b3123b3d135f1574440b75f pid=3262 runtime=io.containerd.runc.v2 Dec 13 02:14:48.100498 systemd[1]: Started cri-containerd-b34e649cae9a473cbe0ace2a4e4554c0e75986492b3123b3d135f1574440b75f.scope. Dec 13 02:14:48.138394 systemd[1]: run-containerd-runc-k8s.io-26849dca72aaa0e52fe3e482ce98ea8ad401b8197017ebde3a21d75e4b59895f-runc.KtmtvN.mount: Deactivated successfully. Dec 13 02:14:48.146525 systemd[1]: Started cri-containerd-26849dca72aaa0e52fe3e482ce98ea8ad401b8197017ebde3a21d75e4b59895f.scope. Dec 13 02:14:48.235472 env[1210]: time="2024-12-13T02:14:48.235410260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-25rmx,Uid:8f4856a7-1f14-4ac0-b1e4-8514e43c3618,Namespace:kube-system,Attempt:0,} returns sandbox id \"b34e649cae9a473cbe0ace2a4e4554c0e75986492b3123b3d135f1574440b75f\"" Dec 13 02:14:48.240241 env[1210]: time="2024-12-13T02:14:48.240189585Z" level=info msg="CreateContainer within sandbox \"b34e649cae9a473cbe0ace2a4e4554c0e75986492b3123b3d135f1574440b75f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:14:48.256137 env[1210]: time="2024-12-13T02:14:48.256076041Z" level=info msg="CreateContainer within sandbox \"b34e649cae9a473cbe0ace2a4e4554c0e75986492b3123b3d135f1574440b75f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a74ec544008c1bea2346a0ecf8f101c4f221d31d342a9752d0ce8b9119b21a48\"" Dec 13 02:14:48.257333 env[1210]: time="2024-12-13T02:14:48.257286523Z" level=info msg="StartContainer for \"a74ec544008c1bea2346a0ecf8f101c4f221d31d342a9752d0ce8b9119b21a48\"" Dec 13 02:14:48.293149 systemd[1]: Started cri-containerd-a74ec544008c1bea2346a0ecf8f101c4f221d31d342a9752d0ce8b9119b21a48.scope. 
Dec 13 02:14:48.307158 env[1210]: time="2024-12-13T02:14:48.307100504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-xwxxb,Uid:7ed07168-b94f-40ff-88a0-490c50c1313e,Namespace:kube-system,Attempt:0,} returns sandbox id \"26849dca72aaa0e52fe3e482ce98ea8ad401b8197017ebde3a21d75e4b59895f\"" Dec 13 02:14:48.315361 env[1210]: time="2024-12-13T02:14:48.315306185Z" level=info msg="CreateContainer within sandbox \"26849dca72aaa0e52fe3e482ce98ea8ad401b8197017ebde3a21d75e4b59895f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:14:48.340712 env[1210]: time="2024-12-13T02:14:48.340591232Z" level=info msg="CreateContainer within sandbox \"26849dca72aaa0e52fe3e482ce98ea8ad401b8197017ebde3a21d75e4b59895f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85dfe74a34781e7fe18a0aaf15f2481ee29bf844bce91d82a3a850b1f2a2c570\"" Dec 13 02:14:48.342020 env[1210]: time="2024-12-13T02:14:48.341969089Z" level=info msg="StartContainer for \"85dfe74a34781e7fe18a0aaf15f2481ee29bf844bce91d82a3a850b1f2a2c570\"" Dec 13 02:14:48.398950 systemd[1]: Started cri-containerd-85dfe74a34781e7fe18a0aaf15f2481ee29bf844bce91d82a3a850b1f2a2c570.scope. Dec 13 02:14:48.401170 env[1210]: time="2024-12-13T02:14:48.401112691Z" level=info msg="StartContainer for \"a74ec544008c1bea2346a0ecf8f101c4f221d31d342a9752d0ce8b9119b21a48\" returns successfully" Dec 13 02:14:48.470698 env[1210]: time="2024-12-13T02:14:48.470584593Z" level=info msg="StartContainer for \"85dfe74a34781e7fe18a0aaf15f2481ee29bf844bce91d82a3a850b1f2a2c570\" returns successfully" Dec 13 02:14:49.309258 kubelet[2067]: I1213 02:14:49.309120 2067 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:14:49.395946 kubelet[2067]: I1213 02:14:49.395863 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-xwxxb" podStartSLOduration=29.395833338 podStartE2EDuration="29.395833338s" podCreationTimestamp="2024-12-13 02:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:14:49.368113376 +0000 UTC m=+43.490139367" watchObservedRunningTime="2024-12-13 02:14:49.395833338 +0000 UTC m=+43.517859329" Dec 13 02:14:56.923494 systemd[1]: Started sshd@5-10.128.0.53:22-139.178.68.195:60794.service. Dec 13 02:14:57.214006 sshd[3409]: Accepted publickey for core from 139.178.68.195 port 60794 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:14:57.215900 sshd[3409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:14:57.223486 systemd[1]: Started session-6.scope. Dec 13 02:14:57.224734 systemd-logind[1219]: New session 6 of user core. Dec 13 02:14:57.512761 sshd[3409]: pam_unix(sshd:session): session closed for user core Dec 13 02:14:57.517789 systemd[1]: sshd@5-10.128.0.53:22-139.178.68.195:60794.service: Deactivated successfully. Dec 13 02:14:57.519045 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:14:57.519942 systemd-logind[1219]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:14:57.521212 systemd-logind[1219]: Removed session 6. Dec 13 02:15:02.561311 systemd[1]: Started sshd@6-10.128.0.53:22-139.178.68.195:60806.service. 
Dec 13 02:15:02.856457 sshd[3422]: Accepted publickey for core from 139.178.68.195 port 60806 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:02.858943 sshd[3422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:02.866057 systemd[1]: Started session-7.scope. Dec 13 02:15:02.866980 systemd-logind[1219]: New session 7 of user core. Dec 13 02:15:03.143769 sshd[3422]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:03.148733 systemd[1]: sshd@6-10.128.0.53:22-139.178.68.195:60806.service: Deactivated successfully. Dec 13 02:15:03.149974 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:15:03.150883 systemd-logind[1219]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:15:03.152244 systemd-logind[1219]: Removed session 7. Dec 13 02:15:08.190977 systemd[1]: Started sshd@7-10.128.0.53:22-139.178.68.195:52770.service. Dec 13 02:15:08.483164 sshd[3439]: Accepted publickey for core from 139.178.68.195 port 52770 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:08.485096 sshd[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:08.492405 systemd[1]: Started session-8.scope. Dec 13 02:15:08.493224 systemd-logind[1219]: New session 8 of user core. Dec 13 02:15:08.780377 sshd[3439]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:08.784743 systemd[1]: sshd@7-10.128.0.53:22-139.178.68.195:52770.service: Deactivated successfully. Dec 13 02:15:08.785918 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:15:08.786847 systemd-logind[1219]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:15:08.788274 systemd-logind[1219]: Removed session 8. Dec 13 02:15:13.825924 systemd[1]: Started sshd@8-10.128.0.53:22-139.178.68.195:52780.service. Dec 13 02:15:14.116993 sshd[3452]: Accepted publickey for core from 139.178.68.195 port 52780 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:14.118862 sshd[3452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:14.126153 systemd[1]: Started session-9.scope. Dec 13 02:15:14.126848 systemd-logind[1219]: New session 9 of user core. Dec 13 02:15:14.409070 sshd[3452]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:14.414294 systemd[1]: sshd@8-10.128.0.53:22-139.178.68.195:52780.service: Deactivated successfully. Dec 13 02:15:14.415518 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:15:14.416769 systemd-logind[1219]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:15:14.418431 systemd-logind[1219]: Removed session 9. Dec 13 02:15:14.454920 systemd[1]: Started sshd@9-10.128.0.53:22-139.178.68.195:52782.service. Dec 13 02:15:14.751769 sshd[3465]: Accepted publickey for core from 139.178.68.195 port 52782 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:14.754082 sshd[3465]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:14.760773 systemd-logind[1219]: New session 10 of user core. Dec 13 02:15:14.761369 systemd[1]: Started session-10.scope. Dec 13 02:15:15.090848 sshd[3465]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:15.096338 systemd-logind[1219]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:15:15.096855 systemd[1]: sshd@9-10.128.0.53:22-139.178.68.195:52782.service: Deactivated successfully. 
Dec 13 02:15:15.098047 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:15:15.099917 systemd-logind[1219]: Removed session 10. Dec 13 02:15:15.137719 systemd[1]: Started sshd@10-10.128.0.53:22-139.178.68.195:52790.service. Dec 13 02:15:15.428595 sshd[3475]: Accepted publickey for core from 139.178.68.195 port 52790 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:15.431589 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:15.438938 systemd[1]: Started session-11.scope. Dec 13 02:15:15.440169 systemd-logind[1219]: New session 11 of user core. Dec 13 02:15:15.719005 sshd[3475]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:15.724272 systemd[1]: sshd@10-10.128.0.53:22-139.178.68.195:52790.service: Deactivated successfully. Dec 13 02:15:15.725475 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:15:15.727945 systemd-logind[1219]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:15:15.729361 systemd-logind[1219]: Removed session 11. Dec 13 02:15:20.766034 systemd[1]: Started sshd@11-10.128.0.53:22-139.178.68.195:53206.service. Dec 13 02:15:21.059718 sshd[3491]: Accepted publickey for core from 139.178.68.195 port 53206 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:21.062129 sshd[3491]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:21.069387 systemd[1]: Started session-12.scope. Dec 13 02:15:21.070509 systemd-logind[1219]: New session 12 of user core. Dec 13 02:15:21.344320 sshd[3491]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:21.349322 systemd[1]: sshd@11-10.128.0.53:22-139.178.68.195:53206.service: Deactivated successfully. Dec 13 02:15:21.350552 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:15:21.351690 systemd-logind[1219]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:15:21.353262 systemd-logind[1219]: Removed session 12. Dec 13 02:15:26.393105 systemd[1]: Started sshd@12-10.128.0.53:22-139.178.68.195:43670.service. Dec 13 02:15:26.689699 sshd[3502]: Accepted publickey for core from 139.178.68.195 port 43670 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:26.692030 sshd[3502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:26.699540 systemd[1]: Started session-13.scope. Dec 13 02:15:26.700793 systemd-logind[1219]: New session 13 of user core. Dec 13 02:15:26.984710 sshd[3502]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:26.989932 systemd[1]: sshd@12-10.128.0.53:22-139.178.68.195:43670.service: Deactivated successfully. Dec 13 02:15:26.990976 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:15:26.992453 systemd-logind[1219]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:15:26.994100 systemd-logind[1219]: Removed session 13. Dec 13 02:15:32.033857 systemd[1]: Started sshd@13-10.128.0.53:22-139.178.68.195:43676.service. Dec 13 02:15:32.330845 sshd[3514]: Accepted publickey for core from 139.178.68.195 port 43676 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:32.332910 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:32.339883 systemd[1]: Started session-14.scope. Dec 13 02:15:32.341243 systemd-logind[1219]: New session 14 of user core. 
Dec 13 02:15:32.622847 sshd[3514]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:32.627703 systemd[1]: sshd@13-10.128.0.53:22-139.178.68.195:43676.service: Deactivated successfully. Dec 13 02:15:32.628920 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:15:32.629856 systemd-logind[1219]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:15:32.631294 systemd-logind[1219]: Removed session 14. Dec 13 02:15:32.669382 systemd[1]: Started sshd@14-10.128.0.53:22-139.178.68.195:43688.service. Dec 13 02:15:32.965606 sshd[3526]: Accepted publickey for core from 139.178.68.195 port 43688 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:32.967937 sshd[3526]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:32.974812 systemd-logind[1219]: New session 15 of user core. Dec 13 02:15:32.975244 systemd[1]: Started session-15.scope. Dec 13 02:15:33.342468 sshd[3526]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:33.346928 systemd[1]: sshd@14-10.128.0.53:22-139.178.68.195:43688.service: Deactivated successfully. Dec 13 02:15:33.348165 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:15:33.349198 systemd-logind[1219]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:15:33.350495 systemd-logind[1219]: Removed session 15. Dec 13 02:15:33.388365 systemd[1]: Started sshd@15-10.128.0.53:22-139.178.68.195:43698.service. Dec 13 02:15:33.679797 sshd[3535]: Accepted publickey for core from 139.178.68.195 port 43698 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:33.681722 sshd[3535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:33.689045 systemd[1]: Started session-16.scope. Dec 13 02:15:33.689920 systemd-logind[1219]: New session 16 of user core. Dec 13 02:15:35.535922 sshd[3535]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:35.541494 systemd[1]: sshd@15-10.128.0.53:22-139.178.68.195:43698.service: Deactivated successfully. Dec 13 02:15:35.542755 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 02:15:35.543707 systemd-logind[1219]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:15:35.545969 systemd-logind[1219]: Removed session 16. Dec 13 02:15:35.584753 systemd[1]: Started sshd@16-10.128.0.53:22-139.178.68.195:43706.service. Dec 13 02:15:35.875810 sshd[3552]: Accepted publickey for core from 139.178.68.195 port 43706 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:35.878027 sshd[3552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:35.885351 systemd[1]: Started session-17.scope. Dec 13 02:15:35.886244 systemd-logind[1219]: New session 17 of user core. Dec 13 02:15:36.318517 sshd[3552]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:36.323138 systemd[1]: sshd@16-10.128.0.53:22-139.178.68.195:43706.service: Deactivated successfully. Dec 13 02:15:36.324334 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:15:36.325268 systemd-logind[1219]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:15:36.326487 systemd-logind[1219]: Removed session 17. Dec 13 02:15:36.365236 systemd[1]: Started sshd@17-10.128.0.53:22-139.178.68.195:42368.service. 
Dec 13 02:15:36.665994 sshd[3562]: Accepted publickey for core from 139.178.68.195 port 42368 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:36.668124 sshd[3562]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:36.675290 systemd[1]: Started session-18.scope. Dec 13 02:15:36.676652 systemd-logind[1219]: New session 18 of user core. Dec 13 02:15:36.948539 sshd[3562]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:36.953269 systemd[1]: sshd@17-10.128.0.53:22-139.178.68.195:42368.service: Deactivated successfully. Dec 13 02:15:36.954471 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:15:36.955605 systemd-logind[1219]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:15:36.956931 systemd-logind[1219]: Removed session 18. Dec 13 02:15:41.997241 systemd[1]: Started sshd@18-10.128.0.53:22-139.178.68.195:42376.service. Dec 13 02:15:42.302681 sshd[3576]: Accepted publickey for core from 139.178.68.195 port 42376 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:42.304857 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:42.312397 systemd[1]: Started session-19.scope. Dec 13 02:15:42.313105 systemd-logind[1219]: New session 19 of user core. Dec 13 02:15:42.593444 sshd[3576]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:42.599350 systemd[1]: sshd@18-10.128.0.53:22-139.178.68.195:42376.service: Deactivated successfully. Dec 13 02:15:42.600411 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:15:42.601101 systemd-logind[1219]: Session 19 logged out. Waiting for processes to exit. Dec 13 02:15:42.602453 systemd-logind[1219]: Removed session 19. Dec 13 02:15:47.640212 systemd[1]: Started sshd@19-10.128.0.53:22-139.178.68.195:59488.service. Dec 13 02:15:47.934210 sshd[3591]: Accepted publickey for core from 139.178.68.195 port 59488 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:47.936264 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:47.942731 systemd-logind[1219]: New session 20 of user core. Dec 13 02:15:47.943386 systemd[1]: Started session-20.scope. Dec 13 02:15:48.220061 sshd[3591]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:48.224897 systemd[1]: sshd@19-10.128.0.53:22-139.178.68.195:59488.service: Deactivated successfully. Dec 13 02:15:48.226135 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:15:48.227214 systemd-logind[1219]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:15:48.228501 systemd-logind[1219]: Removed session 20. Dec 13 02:15:53.266390 systemd[1]: Started sshd@20-10.128.0.53:22-139.178.68.195:59500.service. Dec 13 02:15:53.556873 sshd[3605]: Accepted publickey for core from 139.178.68.195 port 59500 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:53.559040 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:53.566208 systemd-logind[1219]: New session 21 of user core. Dec 13 02:15:53.567347 systemd[1]: Started session-21.scope. Dec 13 02:15:53.842209 sshd[3605]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:53.847065 systemd-logind[1219]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:15:53.847372 systemd[1]: sshd@20-10.128.0.53:22-139.178.68.195:59500.service: Deactivated successfully. 
Dec 13 02:15:53.848565 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:15:53.849956 systemd-logind[1219]: Removed session 21. Dec 13 02:15:53.889486 systemd[1]: Started sshd@21-10.128.0.53:22-139.178.68.195:59514.service. Dec 13 02:15:54.184439 sshd[3617]: Accepted publickey for core from 139.178.68.195 port 59514 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:54.186762 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:54.193785 systemd[1]: Started session-22.scope. Dec 13 02:15:54.194727 systemd-logind[1219]: New session 22 of user core. Dec 13 02:15:56.071674 kubelet[2067]: I1213 02:15:56.071569 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-25rmx" podStartSLOduration=96.071543039 podStartE2EDuration="1m36.071543039s" podCreationTimestamp="2024-12-13 02:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:14:49.397043107 +0000 UTC m=+43.519069093" watchObservedRunningTime="2024-12-13 02:15:56.071543039 +0000 UTC m=+110.193569048" Dec 13 02:15:56.096449 env[1210]: time="2024-12-13T02:15:56.096378960Z" level=info msg="StopContainer for \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\" with timeout 30 (s)" Dec 13 02:15:56.097500 env[1210]: time="2024-12-13T02:15:56.097456321Z" level=info msg="Stop container \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\" with signal terminated" Dec 13 02:15:56.113167 systemd[1]: run-containerd-runc-k8s.io-49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63-runc.JwPylC.mount: Deactivated successfully. Dec 13 02:15:56.152895 systemd[1]: cri-containerd-d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5.scope: Deactivated successfully. Dec 13 02:15:56.157881 env[1210]: time="2024-12-13T02:15:56.157779029Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:15:56.169918 env[1210]: time="2024-12-13T02:15:56.169863741Z" level=info msg="StopContainer for \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\" with timeout 2 (s)" Dec 13 02:15:56.170423 env[1210]: time="2024-12-13T02:15:56.170382780Z" level=info msg="Stop container \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\" with signal terminated" Dec 13 02:15:56.185528 kubelet[2067]: E1213 02:15:56.185464 2067 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:15:56.189951 systemd-networkd[1023]: lxc_health: Link DOWN Dec 13 02:15:56.189966 systemd-networkd[1023]: lxc_health: Lost carrier Dec 13 02:15:56.222820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5-rootfs.mount: Deactivated successfully. Dec 13 02:15:56.226193 systemd[1]: cri-containerd-49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63.scope: Deactivated successfully. Dec 13 02:15:56.226553 systemd[1]: cri-containerd-49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63.scope: Consumed 9.662s CPU time. 
Dec 13 02:15:56.251915 env[1210]: time="2024-12-13T02:15:56.251469209Z" level=info msg="shim disconnected" id=d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5 Dec 13 02:15:56.251915 env[1210]: time="2024-12-13T02:15:56.251552001Z" level=warning msg="cleaning up after shim disconnected" id=d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5 namespace=k8s.io Dec 13 02:15:56.251915 env[1210]: time="2024-12-13T02:15:56.251569057Z" level=info msg="cleaning up dead shim" Dec 13 02:15:56.264368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63-rootfs.mount: Deactivated successfully. Dec 13 02:15:56.275178 env[1210]: time="2024-12-13T02:15:56.275038255Z" level=info msg="shim disconnected" id=49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63 Dec 13 02:15:56.275178 env[1210]: time="2024-12-13T02:15:56.275125145Z" level=warning msg="cleaning up after shim disconnected" id=49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63 namespace=k8s.io Dec 13 02:15:56.275178 env[1210]: time="2024-12-13T02:15:56.275143459Z" level=info msg="cleaning up dead shim" Dec 13 02:15:56.277220 env[1210]: time="2024-12-13T02:15:56.277147601Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3684 runtime=io.containerd.runc.v2\n" Dec 13 02:15:56.279680 env[1210]: time="2024-12-13T02:15:56.279576062Z" level=info msg="StopContainer for \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\" returns successfully" Dec 13 02:15:56.280472 env[1210]: time="2024-12-13T02:15:56.280426642Z" level=info msg="StopPodSandbox for \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\"" Dec 13 02:15:56.280820 env[1210]: time="2024-12-13T02:15:56.280785909Z" level=info msg="Container to stop \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:15:56.284546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a-shm.mount: Deactivated successfully. 
Dec 13 02:15:56.295228 env[1210]: time="2024-12-13T02:15:56.295121695Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3696 runtime=io.containerd.runc.v2\n" Dec 13 02:15:56.298229 env[1210]: time="2024-12-13T02:15:56.298167451Z" level=info msg="StopContainer for \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\" returns successfully" Dec 13 02:15:56.298882 env[1210]: time="2024-12-13T02:15:56.298838926Z" level=info msg="StopPodSandbox for \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\"" Dec 13 02:15:56.299047 env[1210]: time="2024-12-13T02:15:56.298927802Z" level=info msg="Container to stop \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:15:56.299047 env[1210]: time="2024-12-13T02:15:56.298954976Z" level=info msg="Container to stop \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:15:56.299047 env[1210]: time="2024-12-13T02:15:56.298976278Z" level=info msg="Container to stop \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:15:56.299047 env[1210]: time="2024-12-13T02:15:56.298995065Z" level=info msg="Container to stop \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:15:56.299047 env[1210]: time="2024-12-13T02:15:56.299013525Z" level=info msg="Container to stop \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:15:56.304159 systemd[1]: cri-containerd-b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a.scope: Deactivated successfully. Dec 13 02:15:56.317866 systemd[1]: cri-containerd-3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3.scope: Deactivated successfully. 
Dec 13 02:15:56.358539 env[1210]: time="2024-12-13T02:15:56.358284431Z" level=info msg="shim disconnected" id=b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a Dec 13 02:15:56.358539 env[1210]: time="2024-12-13T02:15:56.358355733Z" level=warning msg="cleaning up after shim disconnected" id=b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a namespace=k8s.io Dec 13 02:15:56.358539 env[1210]: time="2024-12-13T02:15:56.358372869Z" level=info msg="cleaning up dead shim" Dec 13 02:15:56.366026 env[1210]: time="2024-12-13T02:15:56.365864955Z" level=info msg="shim disconnected" id=3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3 Dec 13 02:15:56.366437 env[1210]: time="2024-12-13T02:15:56.366389999Z" level=warning msg="cleaning up after shim disconnected" id=3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3 namespace=k8s.io Dec 13 02:15:56.366603 env[1210]: time="2024-12-13T02:15:56.366577480Z" level=info msg="cleaning up dead shim" Dec 13 02:15:56.380873 env[1210]: time="2024-12-13T02:15:56.380817211Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3755 runtime=io.containerd.runc.v2\n" Dec 13 02:15:56.381305 env[1210]: time="2024-12-13T02:15:56.381199778Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3749 runtime=io.containerd.runc.v2\n" Dec 13 02:15:56.381789 env[1210]: time="2024-12-13T02:15:56.381741440Z" level=info msg="TearDown network for sandbox \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\" successfully" Dec 13 02:15:56.381789 env[1210]: time="2024-12-13T02:15:56.381787769Z" level=info msg="StopPodSandbox for \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\" returns successfully" Dec 13 02:15:56.382388 env[1210]: time="2024-12-13T02:15:56.382334918Z" level=info msg="TearDown network for sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" successfully" Dec 13 02:15:56.382595 env[1210]: time="2024-12-13T02:15:56.382548710Z" level=info msg="StopPodSandbox for \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" returns successfully" Dec 13 02:15:56.405080 kubelet[2067]: I1213 02:15:56.404891 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba016148-1efc-41ec-81b4-89c243fc81e7-cilium-config-path\") pod \"ba016148-1efc-41ec-81b4-89c243fc81e7\" (UID: \"ba016148-1efc-41ec-81b4-89c243fc81e7\") " Dec 13 02:15:56.405080 kubelet[2067]: I1213 02:15:56.404957 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn4f4\" (UniqueName: \"kubernetes.io/projected/ba016148-1efc-41ec-81b4-89c243fc81e7-kube-api-access-cn4f4\") pod \"ba016148-1efc-41ec-81b4-89c243fc81e7\" (UID: \"ba016148-1efc-41ec-81b4-89c243fc81e7\") " Dec 13 02:15:56.409533 kubelet[2067]: I1213 02:15:56.409479 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba016148-1efc-41ec-81b4-89c243fc81e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ba016148-1efc-41ec-81b4-89c243fc81e7" (UID: "ba016148-1efc-41ec-81b4-89c243fc81e7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:15:56.420896 kubelet[2067]: I1213 02:15:56.420835 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba016148-1efc-41ec-81b4-89c243fc81e7-kube-api-access-cn4f4" (OuterVolumeSpecName: "kube-api-access-cn4f4") pod "ba016148-1efc-41ec-81b4-89c243fc81e7" (UID: "ba016148-1efc-41ec-81b4-89c243fc81e7"). InnerVolumeSpecName "kube-api-access-cn4f4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:15:56.476506 kubelet[2067]: I1213 02:15:56.476468 2067 scope.go:117] "RemoveContainer" containerID="d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5" Dec 13 02:15:56.481758 env[1210]: time="2024-12-13T02:15:56.481412137Z" level=info msg="RemoveContainer for \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\"" Dec 13 02:15:56.487283 systemd[1]: Removed slice kubepods-besteffort-podba016148_1efc_41ec_81b4_89c243fc81e7.slice. Dec 13 02:15:56.490890 env[1210]: time="2024-12-13T02:15:56.490743565Z" level=info msg="RemoveContainer for \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\" returns successfully" Dec 13 02:15:56.493339 kubelet[2067]: I1213 02:15:56.491732 2067 scope.go:117] "RemoveContainer" containerID="d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5" Dec 13 02:15:56.493710 env[1210]: time="2024-12-13T02:15:56.492122040Z" level=error msg="ContainerStatus for \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\": not found" Dec 13 02:15:56.495925 kubelet[2067]: E1213 02:15:56.494835 2067 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\": not found" containerID="d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5" Dec 13 02:15:56.495925 kubelet[2067]: I1213 02:15:56.494877 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5"} err="failed to get container status \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"d7367452ca20549a071e01f41756bfaab0ac7851f62853465230c270e84842a5\": not found" Dec 13 02:15:56.495925 kubelet[2067]: I1213 02:15:56.494997 2067 scope.go:117] "RemoveContainer" containerID="49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63" Dec 13 02:15:56.498354 env[1210]: time="2024-12-13T02:15:56.497712483Z" level=info msg="RemoveContainer for \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\"" Dec 13 02:15:56.503043 env[1210]: time="2024-12-13T02:15:56.502980372Z" level=info msg="RemoveContainer for \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\" returns successfully" Dec 13 02:15:56.503361 kubelet[2067]: I1213 02:15:56.503314 2067 scope.go:117] "RemoveContainer" containerID="80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c" Dec 13 02:15:56.506766 kubelet[2067]: I1213 02:15:56.506725 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/f8b8b38a-559a-42ba-860f-fb6b85c6005b-clustermesh-secrets\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.508824 kubelet[2067]: I1213 02:15:56.506775 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-run\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.508824 kubelet[2067]: I1213 02:15:56.506813 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-lib-modules\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509497 kubelet[2067]: I1213 02:15:56.509459 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8b8b38a-559a-42ba-860f-fb6b85c6005b-hubble-tls\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509676 kubelet[2067]: I1213 02:15:56.509556 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-host-proc-sys-net\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509676 kubelet[2067]: I1213 02:15:56.509588 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-bpf-maps\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509676 kubelet[2067]: I1213 02:15:56.509616 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-host-proc-sys-kernel\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509871 kubelet[2067]: I1213 02:15:56.509680 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p5r4\" (UniqueName: \"kubernetes.io/projected/f8b8b38a-559a-42ba-860f-fb6b85c6005b-kube-api-access-8p5r4\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509871 kubelet[2067]: I1213 02:15:56.509743 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-cgroup\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509871 kubelet[2067]: I1213 02:15:56.509773 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cni-path\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509871 kubelet[2067]: I1213 02:15:56.509803 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-etc-cni-netd\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509871 kubelet[2067]: I1213 02:15:56.509829 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-hostproc\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.509871 kubelet[2067]: I1213 02:15:56.509861 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-config-path\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.510208 kubelet[2067]: I1213 02:15:56.509886 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-xtables-lock\") pod \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\" (UID: \"f8b8b38a-559a-42ba-860f-fb6b85c6005b\") " Dec 13 02:15:56.510208 kubelet[2067]: I1213 02:15:56.509955 2067 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba016148-1efc-41ec-81b4-89c243fc81e7-cilium-config-path\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.510208 kubelet[2067]: I1213 02:15:56.509977 2067 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cn4f4\" (UniqueName: \"kubernetes.io/projected/ba016148-1efc-41ec-81b4-89c243fc81e7-kube-api-access-cn4f4\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.510208 kubelet[2067]: I1213 02:15:56.510032 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.511756 kubelet[2067]: I1213 02:15:56.511707 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.512030 kubelet[2067]: I1213 02:15:56.511991 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.512417 env[1210]: time="2024-12-13T02:15:56.512328611Z" level=info msg="RemoveContainer for \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\"" Dec 13 02:15:56.513146 kubelet[2067]: I1213 02:15:56.513100 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.513263 kubelet[2067]: I1213 02:15:56.513182 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.513263 kubelet[2067]: I1213 02:15:56.513213 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.513396 kubelet[2067]: I1213 02:15:56.513266 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.514697 kubelet[2067]: I1213 02:15:56.514610 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cni-path" (OuterVolumeSpecName: "cni-path") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.514816 kubelet[2067]: I1213 02:15:56.514719 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.514816 kubelet[2067]: I1213 02:15:56.514778 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-hostproc" (OuterVolumeSpecName: "hostproc") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:15:56.517982 env[1210]: time="2024-12-13T02:15:56.517895651Z" level=info msg="RemoveContainer for \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\" returns successfully" Dec 13 02:15:56.518406 kubelet[2067]: I1213 02:15:56.518376 2067 scope.go:117] "RemoveContainer" containerID="a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05" Dec 13 02:15:56.522072 env[1210]: time="2024-12-13T02:15:56.522023561Z" level=info msg="RemoveContainer for \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\"" Dec 13 02:15:56.526665 env[1210]: time="2024-12-13T02:15:56.526579904Z" level=info msg="RemoveContainer for \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\" returns successfully" Dec 13 02:15:56.526966 kubelet[2067]: I1213 02:15:56.526901 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b8b38a-559a-42ba-860f-fb6b85c6005b-kube-api-access-8p5r4" (OuterVolumeSpecName: "kube-api-access-8p5r4") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "kube-api-access-8p5r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:15:56.527090 kubelet[2067]: I1213 02:15:56.527070 2067 scope.go:117] "RemoveContainer" containerID="83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3" Dec 13 02:15:56.528977 env[1210]: time="2024-12-13T02:15:56.528932523Z" level=info msg="RemoveContainer for \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\"" Dec 13 02:15:56.529560 kubelet[2067]: I1213 02:15:56.529511 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8b8b38a-559a-42ba-860f-fb6b85c6005b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:15:56.533758 env[1210]: time="2024-12-13T02:15:56.533696859Z" level=info msg="RemoveContainer for \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\" returns successfully" Dec 13 02:15:56.534136 kubelet[2067]: I1213 02:15:56.534107 2067 scope.go:117] "RemoveContainer" containerID="8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496" Dec 13 02:15:56.536819 kubelet[2067]: I1213 02:15:56.536781 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:15:56.537827 env[1210]: time="2024-12-13T02:15:56.537170966Z" level=info msg="RemoveContainer for \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\"" Dec 13 02:15:56.547838 env[1210]: time="2024-12-13T02:15:56.547784590Z" level=info msg="RemoveContainer for \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\" returns successfully" Dec 13 02:15:56.548255 kubelet[2067]: I1213 02:15:56.548200 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8b8b38a-559a-42ba-860f-fb6b85c6005b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f8b8b38a-559a-42ba-860f-fb6b85c6005b" (UID: "f8b8b38a-559a-42ba-860f-fb6b85c6005b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:15:56.548524 kubelet[2067]: I1213 02:15:56.548494 2067 scope.go:117] "RemoveContainer" containerID="49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63" Dec 13 02:15:56.550040 env[1210]: time="2024-12-13T02:15:56.549926455Z" level=error msg="ContainerStatus for \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\": not found" Dec 13 02:15:56.553769 kubelet[2067]: E1213 02:15:56.552972 2067 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\": not found" containerID="49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63" Dec 13 02:15:56.553769 kubelet[2067]: I1213 02:15:56.553047 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63"} err="failed to get container status \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\": rpc error: code = NotFound desc = an error occurred when try to find container \"49e8dd497cfd189465d58e5fce99e01df4a82dff82d92410bc6e39cfa0bdde63\": not found" Dec 13 02:15:56.553769 kubelet[2067]: I1213 02:15:56.553083 2067 scope.go:117] "RemoveContainer" containerID="80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c" Dec 13 02:15:56.554612 env[1210]: time="2024-12-13T02:15:56.554511527Z" level=error msg="ContainerStatus for \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\": not found" Dec 13 02:15:56.554951 kubelet[2067]: E1213 02:15:56.554917 2067 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\": not found" containerID="80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c" Dec 13 02:15:56.555059 kubelet[2067]: I1213 02:15:56.554962 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c"} err="failed to get container status \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"80cd9469289d2ab85fca15424cb458dcd38e37dc5caf650364bbb80c0f9c8c9c\": not found" Dec 13 02:15:56.555059 kubelet[2067]: I1213 02:15:56.555028 2067 scope.go:117] "RemoveContainer" containerID="a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05" Dec 13 02:15:56.555488 env[1210]: time="2024-12-13T02:15:56.555400962Z" level=error msg="ContainerStatus for \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\": not found" Dec 13 02:15:56.555758 kubelet[2067]: E1213 02:15:56.555703 2067 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\": not found" containerID="a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05" Dec 13 02:15:56.555860 kubelet[2067]: I1213 02:15:56.555762 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05"} err="failed to get container status \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\": rpc error: code = NotFound desc = an error occurred when try to find container \"a30c324baf93db23c0607758cac45e9e081d12d836c9cba7bf75f3ee573d2a05\": not found" Dec 13 02:15:56.555860 kubelet[2067]: I1213 02:15:56.555792 2067 scope.go:117] "RemoveContainer" containerID="83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3" Dec 13 02:15:56.556438 env[1210]: time="2024-12-13T02:15:56.556356166Z" level=error msg="ContainerStatus for \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\": not found" Dec 13 02:15:56.556618 kubelet[2067]: E1213 02:15:56.556590 2067 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\": not found" containerID="83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3" Dec 13 02:15:56.556750 kubelet[2067]: I1213 02:15:56.556651 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3"} err="failed to get container status \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"83931ba6a9572aefb089122126cdba23a022fbc3b42df85a41b5caba230057d3\": not found" Dec 13 02:15:56.556750 kubelet[2067]: I1213 02:15:56.556705 2067 scope.go:117] "RemoveContainer" containerID="8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496" Dec 13 02:15:56.557171 env[1210]: time="2024-12-13T02:15:56.557091883Z" level=error msg="ContainerStatus for \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\": not found" Dec 13 02:15:56.561919 kubelet[2067]: E1213 02:15:56.561873 2067 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\": not found" containerID="8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496" Dec 13 02:15:56.562259 kubelet[2067]: I1213 02:15:56.561924 2067 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496"} err="failed to get container status \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d3974c5c00e99e780b99e9a8107bd241de53b69304cf884c1b761f4afa18496\": not found" Dec 13 02:15:56.610795 kubelet[2067]: I1213 02:15:56.610428 2067 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8b8b38a-559a-42ba-860f-fb6b85c6005b-hubble-tls\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.610795 kubelet[2067]: I1213 02:15:56.610470 2067 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-host-proc-sys-net\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.610795 kubelet[2067]: I1213 02:15:56.610490 2067 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-bpf-maps\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.610795 kubelet[2067]: I1213 02:15:56.610509 2067 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-host-proc-sys-kernel\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.610795 kubelet[2067]: I1213 02:15:56.610534 2067 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-8p5r4\" (UniqueName: \"kubernetes.io/projected/f8b8b38a-559a-42ba-860f-fb6b85c6005b-kube-api-access-8p5r4\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.610795 kubelet[2067]: I1213 02:15:56.610571 2067 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-cgroup\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.610795 kubelet[2067]: I1213 02:15:56.610590 2067 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cni-path\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.611307 kubelet[2067]: I1213 02:15:56.610605 2067 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-etc-cni-netd\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.611307 kubelet[2067]: I1213 02:15:56.610620 2067 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-hostproc\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.611307 kubelet[2067]: I1213 02:15:56.610651 2067 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-config-path\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.611307 kubelet[2067]: I1213 02:15:56.610667 2067 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-xtables-lock\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.611307 kubelet[2067]: I1213 02:15:56.610683 2067 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8b8b38a-559a-42ba-860f-fb6b85c6005b-clustermesh-secrets\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.611307 kubelet[2067]: I1213 02:15:56.610696 2067 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-cilium-run\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.611307 kubelet[2067]: I1213 02:15:56.610713 2067 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8b8b38a-559a-42ba-860f-fb6b85c6005b-lib-modules\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:15:56.794770 systemd[1]: Removed slice kubepods-burstable-podf8b8b38a_559a_42ba_860f_fb6b85c6005b.slice. Dec 13 02:15:56.794961 systemd[1]: kubepods-burstable-podf8b8b38a_559a_42ba_860f_fb6b85c6005b.slice: Consumed 9.816s CPU time. Dec 13 02:15:57.101475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a-rootfs.mount: Deactivated successfully. Dec 13 02:15:57.101975 systemd[1]: var-lib-kubelet-pods-ba016148\x2d1efc\x2d41ec\x2d81b4\x2d89c243fc81e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcn4f4.mount: Deactivated successfully. Dec 13 02:15:57.102275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3-rootfs.mount: Deactivated successfully. Dec 13 02:15:57.102509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3-shm.mount: Deactivated successfully. Dec 13 02:15:57.102652 systemd[1]: var-lib-kubelet-pods-f8b8b38a\x2d559a\x2d42ba\x2d860f\x2dfb6b85c6005b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8p5r4.mount: Deactivated successfully. Dec 13 02:15:57.102782 systemd[1]: var-lib-kubelet-pods-f8b8b38a\x2d559a\x2d42ba\x2d860f\x2dfb6b85c6005b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:15:57.102885 systemd[1]: var-lib-kubelet-pods-f8b8b38a\x2d559a\x2d42ba\x2d860f\x2dfb6b85c6005b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:15:58.060542 sshd[3617]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:58.065925 systemd[1]: sshd@21-10.128.0.53:22-139.178.68.195:59514.service: Deactivated successfully. Dec 13 02:15:58.067124 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 02:15:58.067342 systemd[1]: session-22.scope: Consumed 1.124s CPU time. Dec 13 02:15:58.069740 systemd-logind[1219]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:15:58.072107 systemd-logind[1219]: Removed session 22. Dec 13 02:15:58.073567 kubelet[2067]: I1213 02:15:58.073526 2067 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba016148-1efc-41ec-81b4-89c243fc81e7" path="/var/lib/kubelet/pods/ba016148-1efc-41ec-81b4-89c243fc81e7/volumes" Dec 13 02:15:58.074377 kubelet[2067]: I1213 02:15:58.074346 2067 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8b8b38a-559a-42ba-860f-fb6b85c6005b" path="/var/lib/kubelet/pods/f8b8b38a-559a-42ba-860f-fb6b85c6005b/volumes" Dec 13 02:15:58.107999 systemd[1]: Started sshd@22-10.128.0.53:22-139.178.68.195:60534.service. Dec 13 02:15:58.404174 sshd[3781]: Accepted publickey for core from 139.178.68.195 port 60534 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:58.406669 sshd[3781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:58.413730 systemd-logind[1219]: New session 23 of user core. Dec 13 02:15:58.414079 systemd[1]: Started session-23.scope. Dec 13 02:15:58.941302 kubelet[2067]: I1213 02:15:58.941243 2067 setters.go:580] "Node became not ready" node="ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:15:58Z","lastTransitionTime":"2024-12-13T02:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:15:59.456557 kubelet[2067]: I1213 02:15:59.456491 2067 topology_manager.go:215] "Topology Admit Handler" podUID="309902e9-9877-4c82-bad7-d7ba99153d21" podNamespace="kube-system" podName="cilium-r9mx5" Dec 13 02:15:59.457155 kubelet[2067]: E1213 02:15:59.456612 2067 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8b8b38a-559a-42ba-860f-fb6b85c6005b" containerName="apply-sysctl-overwrites" Dec 13 02:15:59.457155 kubelet[2067]: E1213 02:15:59.456647 2067 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8b8b38a-559a-42ba-860f-fb6b85c6005b" containerName="clean-cilium-state" Dec 13 02:15:59.457155 kubelet[2067]: E1213 02:15:59.456659 2067 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8b8b38a-559a-42ba-860f-fb6b85c6005b" containerName="cilium-agent" Dec 13 02:15:59.457155 kubelet[2067]: E1213 02:15:59.456668 2067 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ba016148-1efc-41ec-81b4-89c243fc81e7" containerName="cilium-operator" Dec 13 02:15:59.457155 kubelet[2067]: E1213 02:15:59.456679 2067 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8b8b38a-559a-42ba-860f-fb6b85c6005b" containerName="mount-cgroup" Dec 13 02:15:59.457155 kubelet[2067]: E1213 02:15:59.456690 2067 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f8b8b38a-559a-42ba-860f-fb6b85c6005b" containerName="mount-bpf-fs" Dec 13 02:15:59.457155 kubelet[2067]: I1213 02:15:59.456756 2067 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f8b8b38a-559a-42ba-860f-fb6b85c6005b" containerName="cilium-agent" Dec 13 02:15:59.457155 kubelet[2067]: I1213 02:15:59.456769 2067 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba016148-1efc-41ec-81b4-89c243fc81e7" containerName="cilium-operator" Dec 13 02:15:59.465060 sshd[3781]: pam_unix(sshd:session): session closed for user core Dec 13 02:15:59.469062 systemd[1]: Created slice kubepods-burstable-pod309902e9_9877_4c82_bad7_d7ba99153d21.slice. Dec 13 02:15:59.475145 systemd[1]: sshd@22-10.128.0.53:22-139.178.68.195:60534.service: Deactivated successfully. Dec 13 02:15:59.476654 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:15:59.480312 systemd-logind[1219]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:15:59.481803 systemd-logind[1219]: Removed session 23. Dec 13 02:15:59.514621 systemd[1]: Started sshd@23-10.128.0.53:22-139.178.68.195:60540.service. Dec 13 02:15:59.529771 kubelet[2067]: I1213 02:15:59.529704 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-host-proc-sys-kernel\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.529978 kubelet[2067]: I1213 02:15:59.529781 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/309902e9-9877-4c82-bad7-d7ba99153d21-hubble-tls\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.529978 kubelet[2067]: I1213 02:15:59.529826 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cni-path\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.529978 kubelet[2067]: I1213 02:15:59.529852 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt9sp\" (UniqueName: \"kubernetes.io/projected/309902e9-9877-4c82-bad7-d7ba99153d21-kube-api-access-kt9sp\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.529978 kubelet[2067]: I1213 02:15:59.529901 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-cgroup\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.529978 kubelet[2067]: I1213 02:15:59.529927 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-ipsec-secrets\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530271 kubelet[2067]: I1213 02:15:59.529975 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-bpf-maps\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " 
pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530271 kubelet[2067]: I1213 02:15:59.530035 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-hostproc\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530271 kubelet[2067]: I1213 02:15:59.530082 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/309902e9-9877-4c82-bad7-d7ba99153d21-clustermesh-secrets\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530271 kubelet[2067]: I1213 02:15:59.530111 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-config-path\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530271 kubelet[2067]: I1213 02:15:59.530157 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-host-proc-sys-net\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530271 kubelet[2067]: I1213 02:15:59.530189 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-etc-cni-netd\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530821 kubelet[2067]: I1213 02:15:59.530234 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-lib-modules\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530821 kubelet[2067]: I1213 02:15:59.530261 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-xtables-lock\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.530821 kubelet[2067]: I1213 02:15:59.530313 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-run\") pod \"cilium-r9mx5\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " pod="kube-system/cilium-r9mx5" Dec 13 02:15:59.777203 env[1210]: time="2024-12-13T02:15:59.777005347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9mx5,Uid:309902e9-9877-4c82-bad7-d7ba99153d21,Namespace:kube-system,Attempt:0,}" Dec 13 02:15:59.802718 env[1210]: time="2024-12-13T02:15:59.802607231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:15:59.802718 env[1210]: time="2024-12-13T02:15:59.802729767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:15:59.803023 env[1210]: time="2024-12-13T02:15:59.802770603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:15:59.803023 env[1210]: time="2024-12-13T02:15:59.802985118Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353 pid=3806 runtime=io.containerd.runc.v2 Dec 13 02:15:59.824725 systemd[1]: Started cri-containerd-c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353.scope. Dec 13 02:15:59.828206 sshd[3792]: Accepted publickey for core from 139.178.68.195 port 60540 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:15:59.830085 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:15:59.842866 systemd[1]: Started session-24.scope. Dec 13 02:15:59.844843 systemd-logind[1219]: New session 24 of user core. Dec 13 02:15:59.878085 env[1210]: time="2024-12-13T02:15:59.878024697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r9mx5,Uid:309902e9-9877-4c82-bad7-d7ba99153d21,Namespace:kube-system,Attempt:0,} returns sandbox id \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\"" Dec 13 02:15:59.892167 env[1210]: time="2024-12-13T02:15:59.892103829Z" level=info msg="CreateContainer within sandbox \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:15:59.907823 env[1210]: time="2024-12-13T02:15:59.907746095Z" level=info msg="CreateContainer within sandbox \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73\"" Dec 13 02:15:59.909890 env[1210]: time="2024-12-13T02:15:59.908730034Z" level=info msg="StartContainer for \"1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73\"" Dec 13 02:15:59.936306 systemd[1]: Started cri-containerd-1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73.scope. Dec 13 02:15:59.950725 systemd[1]: cri-containerd-1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73.scope: Deactivated successfully. 
Dec 13 02:15:59.971303 env[1210]: time="2024-12-13T02:15:59.971215389Z" level=info msg="shim disconnected" id=1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73 Dec 13 02:15:59.971303 env[1210]: time="2024-12-13T02:15:59.971293372Z" level=warning msg="cleaning up after shim disconnected" id=1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73 namespace=k8s.io Dec 13 02:15:59.971303 env[1210]: time="2024-12-13T02:15:59.971309075Z" level=info msg="cleaning up dead shim" Dec 13 02:15:59.984047 env[1210]: time="2024-12-13T02:15:59.983941938Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:15:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3864 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:15:59Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 02:15:59.984505 env[1210]: time="2024-12-13T02:15:59.984348669Z" level=error msg="copy shim log" error="read /proc/self/fd/39: file already closed" Dec 13 02:15:59.985375 env[1210]: time="2024-12-13T02:15:59.985311237Z" level=error msg="Failed to pipe stderr of container \"1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73\"" error="reading from a closed fifo" Dec 13 02:15:59.985706 env[1210]: time="2024-12-13T02:15:59.985650630Z" level=error msg="Failed to pipe stdout of container \"1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73\"" error="reading from a closed fifo" Dec 13 02:15:59.987904 env[1210]: time="2024-12-13T02:15:59.987822771Z" level=error msg="StartContainer for \"1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 02:15:59.988301 kubelet[2067]: E1213 02:15:59.988196 2067 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73" Dec 13 02:15:59.990844 kubelet[2067]: E1213 02:15:59.990434 2067 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 02:15:59.990844 kubelet[2067]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 02:15:59.990844 kubelet[2067]: rm /hostbin/cilium-mount Dec 13 02:15:59.991093 kubelet[2067]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kt9sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-r9mx5_kube-system(309902e9-9877-4c82-bad7-d7ba99153d21): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 02:15:59.991093 kubelet[2067]: E1213 02:15:59.990510 2067 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-r9mx5" podUID="309902e9-9877-4c82-bad7-d7ba99153d21" Dec 13 02:16:00.141487 sshd[3792]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:00.148014 systemd[1]: sshd@23-10.128.0.53:22-139.178.68.195:60540.service: Deactivated successfully. Dec 13 02:16:00.149386 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:16:00.150571 systemd-logind[1219]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:16:00.152434 systemd-logind[1219]: Removed session 24. Dec 13 02:16:00.187548 systemd[1]: Started sshd@24-10.128.0.53:22-139.178.68.195:60550.service. Dec 13 02:16:00.477355 sshd[3885]: Accepted publickey for core from 139.178.68.195 port 60550 ssh2: RSA SHA256:iNfeuC4o6O46DLX6rqVJVwfztbFRXyh3VDk9s2BL7mw Dec 13 02:16:00.478987 sshd[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:16:00.486621 systemd[1]: Started session-25.scope. Dec 13 02:16:00.487734 systemd-logind[1219]: New session 25 of user core. 
Dec 13 02:16:00.505382 env[1210]: time="2024-12-13T02:16:00.504568307Z" level=info msg="StopPodSandbox for \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\"" Dec 13 02:16:00.505382 env[1210]: time="2024-12-13T02:16:00.504707867Z" level=info msg="Container to stop \"1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:16:00.518597 systemd[1]: cri-containerd-c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353.scope: Deactivated successfully. Dec 13 02:16:00.564761 env[1210]: time="2024-12-13T02:16:00.564682202Z" level=info msg="shim disconnected" id=c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353 Dec 13 02:16:00.564761 env[1210]: time="2024-12-13T02:16:00.564761973Z" level=warning msg="cleaning up after shim disconnected" id=c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353 namespace=k8s.io Dec 13 02:16:00.565155 env[1210]: time="2024-12-13T02:16:00.564777570Z" level=info msg="cleaning up dead shim" Dec 13 02:16:00.577349 env[1210]: time="2024-12-13T02:16:00.577284565Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3908 runtime=io.containerd.runc.v2\n" Dec 13 02:16:00.577785 env[1210]: time="2024-12-13T02:16:00.577738586Z" level=info msg="TearDown network for sandbox \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\" successfully" Dec 13 02:16:00.577785 env[1210]: time="2024-12-13T02:16:00.577783885Z" level=info msg="StopPodSandbox for \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\" returns successfully" Dec 13 02:16:00.642860 kubelet[2067]: I1213 02:16:00.642683 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-run\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.643458 kubelet[2067]: I1213 02:16:00.642781 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.643458 kubelet[2067]: I1213 02:16:00.642907 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/309902e9-9877-4c82-bad7-d7ba99153d21-hubble-tls\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.643607 kubelet[2067]: I1213 02:16:00.643569 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cni-path\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.643739 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-ipsec-secrets\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.643788 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt9sp\" (UniqueName: \"kubernetes.io/projected/309902e9-9877-4c82-bad7-d7ba99153d21-kube-api-access-kt9sp\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.643828 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-bpf-maps\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.643853 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/309902e9-9877-4c82-bad7-d7ba99153d21-clustermesh-secrets\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.643891 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-lib-modules\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.643916 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-cgroup\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.643937 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-xtables-lock\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.643976 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-etc-cni-netd\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 
02:16:00.645041 kubelet[2067]: I1213 02:16:00.644011 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-host-proc-sys-kernel\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.644050 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-hostproc\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.644076 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-host-proc-sys-net\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.644104 2067 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-config-path\") pod \"309902e9-9877-4c82-bad7-d7ba99153d21\" (UID: \"309902e9-9877-4c82-bad7-d7ba99153d21\") " Dec 13 02:16:00.645041 kubelet[2067]: I1213 02:16:00.644171 2067 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-run\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.648414 kubelet[2067]: I1213 02:16:00.648359 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:16:00.648545 kubelet[2067]: I1213 02:16:00.648443 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cni-path" (OuterVolumeSpecName: "cni-path") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.653561 kubelet[2067]: I1213 02:16:00.653516 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.655692 kubelet[2067]: I1213 02:16:00.654010 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.655692 kubelet[2067]: I1213 02:16:00.654074 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.655692 kubelet[2067]: I1213 02:16:00.654103 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.655692 kubelet[2067]: I1213 02:16:00.654127 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-hostproc" (OuterVolumeSpecName: "hostproc") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.655692 kubelet[2067]: I1213 02:16:00.654152 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.658052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353-rootfs.mount: Deactivated successfully. Dec 13 02:16:00.658218 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353-shm.mount: Deactivated successfully. Dec 13 02:16:00.661746 kubelet[2067]: I1213 02:16:00.661606 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.665065 systemd[1]: var-lib-kubelet-pods-309902e9\x2d9877\x2d4c82\x2dbad7\x2dd7ba99153d21-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkt9sp.mount: Deactivated successfully. Dec 13 02:16:00.668800 kubelet[2067]: I1213 02:16:00.668760 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:16:00.674177 systemd[1]: var-lib-kubelet-pods-309902e9\x2d9877\x2d4c82\x2dbad7\x2dd7ba99153d21-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 02:16:00.676664 kubelet[2067]: I1213 02:16:00.676447 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309902e9-9877-4c82-bad7-d7ba99153d21-kube-api-access-kt9sp" (OuterVolumeSpecName: "kube-api-access-kt9sp") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "kube-api-access-kt9sp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:16:00.678829 kubelet[2067]: I1213 02:16:00.677707 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/309902e9-9877-4c82-bad7-d7ba99153d21-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:16:00.684726 systemd[1]: var-lib-kubelet-pods-309902e9\x2d9877\x2d4c82\x2dbad7\x2dd7ba99153d21-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:16:00.687578 kubelet[2067]: I1213 02:16:00.686595 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:16:00.691389 systemd[1]: var-lib-kubelet-pods-309902e9\x2d9877\x2d4c82\x2dbad7\x2dd7ba99153d21-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:16:00.693893 kubelet[2067]: I1213 02:16:00.693843 2067 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/309902e9-9877-4c82-bad7-d7ba99153d21-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "309902e9-9877-4c82-bad7-d7ba99153d21" (UID: "309902e9-9877-4c82-bad7-d7ba99153d21"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:16:00.744923 kubelet[2067]: I1213 02:16:00.744795 2067 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-config-path\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.745206 kubelet[2067]: I1213 02:16:00.745182 2067 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cni-path\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.745354 kubelet[2067]: I1213 02:16:00.745335 2067 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-ipsec-secrets\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.745514 kubelet[2067]: I1213 02:16:00.745492 2067 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/309902e9-9877-4c82-bad7-d7ba99153d21-hubble-tls\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.745727 kubelet[2067]: I1213 02:16:00.745703 2067 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kt9sp\" (UniqueName: \"kubernetes.io/projected/309902e9-9877-4c82-bad7-d7ba99153d21-kube-api-access-kt9sp\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.745921 kubelet[2067]: I1213 02:16:00.745900 2067 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-bpf-maps\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.746096 kubelet[2067]: I1213 02:16:00.746074 2067 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/309902e9-9877-4c82-bad7-d7ba99153d21-clustermesh-secrets\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.746262 kubelet[2067]: I1213 02:16:00.746240 2067 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-lib-modules\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.746428 kubelet[2067]: I1213 02:16:00.746405 2067 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-cilium-cgroup\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.746596 kubelet[2067]: I1213 02:16:00.746571 2067 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-xtables-lock\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.746772 kubelet[2067]: I1213 02:16:00.746755 2067 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-etc-cni-netd\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" 
DevicePath \"\"" Dec 13 02:16:00.746908 kubelet[2067]: I1213 02:16:00.746892 2067 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-hostproc\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.747038 kubelet[2067]: I1213 02:16:00.747022 2067 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-host-proc-sys-kernel\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:00.747168 kubelet[2067]: I1213 02:16:00.747152 2067 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/309902e9-9877-4c82-bad7-d7ba99153d21-host-proc-sys-net\") on node \"ci-3510-3-6-85cdb7f3f856dfef2d7b.c.flatcar-212911.internal\" DevicePath \"\"" Dec 13 02:16:01.187448 kubelet[2067]: E1213 02:16:01.187378 2067 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:16:01.509255 kubelet[2067]: I1213 02:16:01.508810 2067 scope.go:117] "RemoveContainer" containerID="1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73" Dec 13 02:16:01.513052 env[1210]: time="2024-12-13T02:16:01.512996137Z" level=info msg="RemoveContainer for \"1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73\"" Dec 13 02:16:01.514834 systemd[1]: Removed slice kubepods-burstable-pod309902e9_9877_4c82_bad7_d7ba99153d21.slice. Dec 13 02:16:01.521550 env[1210]: time="2024-12-13T02:16:01.521493538Z" level=info msg="RemoveContainer for \"1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73\" returns successfully" Dec 13 02:16:01.561494 kubelet[2067]: I1213 02:16:01.561442 2067 topology_manager.go:215] "Topology Admit Handler" podUID="a2b54d06-ca83-48b2-a9b2-4585f9b0645f" podNamespace="kube-system" podName="cilium-dsftl" Dec 13 02:16:01.561874 kubelet[2067]: E1213 02:16:01.561850 2067 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="309902e9-9877-4c82-bad7-d7ba99153d21" containerName="mount-cgroup" Dec 13 02:16:01.562098 kubelet[2067]: I1213 02:16:01.562076 2067 memory_manager.go:354] "RemoveStaleState removing state" podUID="309902e9-9877-4c82-bad7-d7ba99153d21" containerName="mount-cgroup" Dec 13 02:16:01.571193 systemd[1]: Created slice kubepods-burstable-poda2b54d06_ca83_48b2_a9b2_4585f9b0645f.slice. 
Dec 13 02:16:01.654033 kubelet[2067]: I1213 02:16:01.653969 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-cni-path\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654033 kubelet[2067]: I1213 02:16:01.654033 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-cilium-run\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654061 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-hostproc\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654087 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-xtables-lock\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654115 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2jj7\" (UniqueName: \"kubernetes.io/projected/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-kube-api-access-d2jj7\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654141 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-bpf-maps\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654185 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-etc-cni-netd\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654213 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-lib-modules\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654245 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-host-proc-sys-kernel\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654310 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-clustermesh-secrets\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654337 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-hubble-tls\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654366 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-cilium-config-path\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654413 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-cilium-ipsec-secrets\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654444 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-host-proc-sys-net\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.654753 kubelet[2067]: I1213 02:16:01.654472 2067 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2b54d06-ca83-48b2-a9b2-4585f9b0645f-cilium-cgroup\") pod \"cilium-dsftl\" (UID: \"a2b54d06-ca83-48b2-a9b2-4585f9b0645f\") " pod="kube-system/cilium-dsftl" Dec 13 02:16:01.877727 env[1210]: time="2024-12-13T02:16:01.877656237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dsftl,Uid:a2b54d06-ca83-48b2-a9b2-4585f9b0645f,Namespace:kube-system,Attempt:0,}" Dec 13 02:16:01.904803 env[1210]: time="2024-12-13T02:16:01.904479748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:16:01.904803 env[1210]: time="2024-12-13T02:16:01.904537098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:16:01.904803 env[1210]: time="2024-12-13T02:16:01.904557386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:16:01.905221 env[1210]: time="2024-12-13T02:16:01.904868945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d pid=3941 runtime=io.containerd.runc.v2 Dec 13 02:16:01.926683 systemd[1]: Started cri-containerd-937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d.scope. 
Dec 13 02:16:01.962325 env[1210]: time="2024-12-13T02:16:01.962239719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dsftl,Uid:a2b54d06-ca83-48b2-a9b2-4585f9b0645f,Namespace:kube-system,Attempt:0,} returns sandbox id \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\"" Dec 13 02:16:01.969792 env[1210]: time="2024-12-13T02:16:01.969740451Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:16:01.983442 env[1210]: time="2024-12-13T02:16:01.983392470Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a\"" Dec 13 02:16:01.985787 env[1210]: time="2024-12-13T02:16:01.985683975Z" level=info msg="StartContainer for \"718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a\"" Dec 13 02:16:02.010377 systemd[1]: Started cri-containerd-718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a.scope. Dec 13 02:16:02.049356 env[1210]: time="2024-12-13T02:16:02.049298548Z" level=info msg="StartContainer for \"718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a\" returns successfully" Dec 13 02:16:02.066601 systemd[1]: cri-containerd-718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a.scope: Deactivated successfully. Dec 13 02:16:02.072766 kubelet[2067]: I1213 02:16:02.072255 2067 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="309902e9-9877-4c82-bad7-d7ba99153d21" path="/var/lib/kubelet/pods/309902e9-9877-4c82-bad7-d7ba99153d21/volumes" Dec 13 02:16:02.105540 env[1210]: time="2024-12-13T02:16:02.105471702Z" level=info msg="shim disconnected" id=718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a Dec 13 02:16:02.105540 env[1210]: time="2024-12-13T02:16:02.105542025Z" level=warning msg="cleaning up after shim disconnected" id=718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a namespace=k8s.io Dec 13 02:16:02.105959 env[1210]: time="2024-12-13T02:16:02.105556676Z" level=info msg="cleaning up dead shim" Dec 13 02:16:02.125874 env[1210]: time="2024-12-13T02:16:02.125802440Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4024 runtime=io.containerd.runc.v2\n" Dec 13 02:16:02.518029 env[1210]: time="2024-12-13T02:16:02.517078331Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:16:02.534128 env[1210]: time="2024-12-13T02:16:02.534058542Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f\"" Dec 13 02:16:02.534953 env[1210]: time="2024-12-13T02:16:02.534802230Z" level=info msg="StartContainer for \"bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f\"" Dec 13 02:16:02.581741 systemd[1]: Started cri-containerd-bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f.scope. 
Dec 13 02:16:02.624147 env[1210]: time="2024-12-13T02:16:02.624082594Z" level=info msg="StartContainer for \"bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f\" returns successfully" Dec 13 02:16:02.634291 systemd[1]: cri-containerd-bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f.scope: Deactivated successfully. Dec 13 02:16:02.662578 env[1210]: time="2024-12-13T02:16:02.662511850Z" level=info msg="shim disconnected" id=bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f Dec 13 02:16:02.662578 env[1210]: time="2024-12-13T02:16:02.662581619Z" level=warning msg="cleaning up after shim disconnected" id=bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f namespace=k8s.io Dec 13 02:16:02.663076 env[1210]: time="2024-12-13T02:16:02.662595758Z" level=info msg="cleaning up dead shim" Dec 13 02:16:02.675079 env[1210]: time="2024-12-13T02:16:02.675004232Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4087 runtime=io.containerd.runc.v2\n" Dec 13 02:16:03.080913 kubelet[2067]: W1213 02:16:03.079293 2067 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod309902e9_9877_4c82_bad7_d7ba99153d21.slice/cri-containerd-1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73.scope WatchSource:0}: container "1703646f5623f6ff58498d21f7b262ce42f2175472dd9ec43202acd48c39ea73" in namespace "k8s.io": not found Dec 13 02:16:03.523707 env[1210]: time="2024-12-13T02:16:03.523260959Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:16:03.551907 env[1210]: time="2024-12-13T02:16:03.551834793Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319\"" Dec 13 02:16:03.552972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2242398820.mount: Deactivated successfully. Dec 13 02:16:03.553378 env[1210]: time="2024-12-13T02:16:03.553340042Z" level=info msg="StartContainer for \"6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319\"" Dec 13 02:16:03.607370 systemd[1]: Started cri-containerd-6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319.scope. Dec 13 02:16:03.651909 systemd[1]: cri-containerd-6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319.scope: Deactivated successfully. 
Dec 13 02:16:03.653986 env[1210]: time="2024-12-13T02:16:03.653943292Z" level=info msg="StartContainer for \"6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319\" returns successfully" Dec 13 02:16:03.691353 env[1210]: time="2024-12-13T02:16:03.691286155Z" level=info msg="shim disconnected" id=6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319 Dec 13 02:16:03.691353 env[1210]: time="2024-12-13T02:16:03.691352359Z" level=warning msg="cleaning up after shim disconnected" id=6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319 namespace=k8s.io Dec 13 02:16:03.691857 env[1210]: time="2024-12-13T02:16:03.691367492Z" level=info msg="cleaning up dead shim" Dec 13 02:16:03.703816 env[1210]: time="2024-12-13T02:16:03.703747346Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4148 runtime=io.containerd.runc.v2\n" Dec 13 02:16:03.771574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319-rootfs.mount: Deactivated successfully. Dec 13 02:16:04.527089 env[1210]: time="2024-12-13T02:16:04.526996105Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:16:04.553666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1207431849.mount: Deactivated successfully. Dec 13 02:16:04.570983 env[1210]: time="2024-12-13T02:16:04.570914965Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085\"" Dec 13 02:16:04.572118 env[1210]: time="2024-12-13T02:16:04.572072453Z" level=info msg="StartContainer for \"383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085\"" Dec 13 02:16:04.619416 systemd[1]: Started cri-containerd-383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085.scope. Dec 13 02:16:04.657393 systemd[1]: cri-containerd-383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085.scope: Deactivated successfully. 
Dec 13 02:16:04.659205 env[1210]: time="2024-12-13T02:16:04.659071538Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2b54d06_ca83_48b2_a9b2_4585f9b0645f.slice/cri-containerd-383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085.scope/memory.events\": no such file or directory" Dec 13 02:16:04.662081 env[1210]: time="2024-12-13T02:16:04.662006284Z" level=info msg="StartContainer for \"383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085\" returns successfully" Dec 13 02:16:04.693227 env[1210]: time="2024-12-13T02:16:04.693154816Z" level=info msg="shim disconnected" id=383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085 Dec 13 02:16:04.693227 env[1210]: time="2024-12-13T02:16:04.693215657Z" level=warning msg="cleaning up after shim disconnected" id=383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085 namespace=k8s.io Dec 13 02:16:04.693227 env[1210]: time="2024-12-13T02:16:04.693231135Z" level=info msg="cleaning up dead shim" Dec 13 02:16:04.705239 env[1210]: time="2024-12-13T02:16:04.705153099Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:16:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4206 runtime=io.containerd.runc.v2\n" Dec 13 02:16:04.772025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085-rootfs.mount: Deactivated successfully. Dec 13 02:16:05.534245 env[1210]: time="2024-12-13T02:16:05.534189216Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:16:05.561857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419258994.mount: Deactivated successfully. Dec 13 02:16:05.570938 env[1210]: time="2024-12-13T02:16:05.570850452Z" level=info msg="CreateContainer within sandbox \"937bf9575496c888401857db074d043fd0f3fd920750217c872d68529fbf944d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9\"" Dec 13 02:16:05.572502 env[1210]: time="2024-12-13T02:16:05.572458745Z" level=info msg="StartContainer for \"21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9\"" Dec 13 02:16:05.610893 systemd[1]: Started cri-containerd-21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9.scope. 
Dec 13 02:16:05.663323 env[1210]: time="2024-12-13T02:16:05.663186493Z" level=info msg="StartContainer for \"21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9\" returns successfully" Dec 13 02:16:06.055312 env[1210]: time="2024-12-13T02:16:06.055252662Z" level=info msg="StopPodSandbox for \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\"" Dec 13 02:16:06.055544 env[1210]: time="2024-12-13T02:16:06.055396327Z" level=info msg="TearDown network for sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" successfully" Dec 13 02:16:06.055544 env[1210]: time="2024-12-13T02:16:06.055445691Z" level=info msg="StopPodSandbox for \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" returns successfully" Dec 13 02:16:06.056233 env[1210]: time="2024-12-13T02:16:06.056184906Z" level=info msg="RemovePodSandbox for \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\"" Dec 13 02:16:06.056392 env[1210]: time="2024-12-13T02:16:06.056238465Z" level=info msg="Forcibly stopping sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\"" Dec 13 02:16:06.056392 env[1210]: time="2024-12-13T02:16:06.056351220Z" level=info msg="TearDown network for sandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" successfully" Dec 13 02:16:06.062242 env[1210]: time="2024-12-13T02:16:06.062194457Z" level=info msg="RemovePodSandbox \"3f4e23180c0e1512fb2b307c400974be9ce96071a6558826d655b49f0761b6a3\" returns successfully" Dec 13 02:16:06.062972 env[1210]: time="2024-12-13T02:16:06.062926837Z" level=info msg="StopPodSandbox for \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\"" Dec 13 02:16:06.063126 env[1210]: time="2024-12-13T02:16:06.063064366Z" level=info msg="TearDown network for sandbox \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\" successfully" Dec 13 02:16:06.063204 env[1210]: time="2024-12-13T02:16:06.063127227Z" level=info msg="StopPodSandbox for \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\" returns successfully" Dec 13 02:16:06.063644 env[1210]: time="2024-12-13T02:16:06.063599830Z" level=info msg="RemovePodSandbox for \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\"" Dec 13 02:16:06.063764 env[1210]: time="2024-12-13T02:16:06.063651900Z" level=info msg="Forcibly stopping sandbox \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\"" Dec 13 02:16:06.063829 env[1210]: time="2024-12-13T02:16:06.063771264Z" level=info msg="TearDown network for sandbox \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\" successfully" Dec 13 02:16:06.074047 env[1210]: time="2024-12-13T02:16:06.073943353Z" level=info msg="RemovePodSandbox \"b810a4865f41950a170c4497bd14aa695313b595f2ce325f95a6d0c3ad77092a\" returns successfully" Dec 13 02:16:06.074536 env[1210]: time="2024-12-13T02:16:06.074493353Z" level=info msg="StopPodSandbox for \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\"" Dec 13 02:16:06.074746 env[1210]: time="2024-12-13T02:16:06.074662136Z" level=info msg="TearDown network for sandbox \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\" successfully" Dec 13 02:16:06.074856 env[1210]: time="2024-12-13T02:16:06.074740988Z" level=info msg="StopPodSandbox for \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\" returns successfully" Dec 13 02:16:06.075183 env[1210]: time="2024-12-13T02:16:06.075141983Z" level=info msg="RemovePodSandbox for 
\"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\"" Dec 13 02:16:06.075300 env[1210]: time="2024-12-13T02:16:06.075193753Z" level=info msg="Forcibly stopping sandbox \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\"" Dec 13 02:16:06.075365 env[1210]: time="2024-12-13T02:16:06.075304798Z" level=info msg="TearDown network for sandbox \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\" successfully" Dec 13 02:16:06.080391 env[1210]: time="2024-12-13T02:16:06.080342564Z" level=info msg="RemovePodSandbox \"c433da650e764ec2adfed919e9f2d127fd570d2536bad9c08d28c5e2396f1353\" returns successfully" Dec 13 02:16:06.150682 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 02:16:06.193109 kubelet[2067]: W1213 02:16:06.193059 2067 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2b54d06_ca83_48b2_a9b2_4585f9b0645f.slice/cri-containerd-718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a.scope WatchSource:0}: task 718b48a88522b852e183106d5759509c51c16aceb8faadfc2d7c839c02a74d3a not found: not found Dec 13 02:16:06.943730 systemd[1]: run-containerd-runc-k8s.io-21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9-runc.XHmMtf.mount: Deactivated successfully. Dec 13 02:16:09.128419 systemd[1]: run-containerd-runc-k8s.io-21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9-runc.QXGITO.mount: Deactivated successfully. Dec 13 02:16:09.305656 kubelet[2067]: W1213 02:16:09.303733 2067 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2b54d06_ca83_48b2_a9b2_4585f9b0645f.slice/cri-containerd-bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f.scope WatchSource:0}: task bc0e71568945a44834624ecb89484bad1aad4a6c00b699e710678f52bc21af1f not found: not found Dec 13 02:16:09.383988 systemd-networkd[1023]: lxc_health: Link UP Dec 13 02:16:09.399659 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:16:09.404963 systemd-networkd[1023]: lxc_health: Gained carrier Dec 13 02:16:09.918276 kubelet[2067]: I1213 02:16:09.918199 2067 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dsftl" podStartSLOduration=8.918175965 podStartE2EDuration="8.918175965s" podCreationTimestamp="2024-12-13 02:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:16:06.5610821 +0000 UTC m=+120.683108113" watchObservedRunningTime="2024-12-13 02:16:09.918175965 +0000 UTC m=+124.040201954" Dec 13 02:16:10.792813 systemd-networkd[1023]: lxc_health: Gained IPv6LL Dec 13 02:16:11.335974 systemd[1]: run-containerd-runc-k8s.io-21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9-runc.Svi1jj.mount: Deactivated successfully. 
Dec 13 02:16:12.439597 kubelet[2067]: W1213 02:16:12.439523 2067 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2b54d06_ca83_48b2_a9b2_4585f9b0645f.slice/cri-containerd-6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319.scope WatchSource:0}: task 6455db1c70edbf2574a13075d0d536e156d64fa8bc1fc367fc382faf8276e319 not found: not found Dec 13 02:16:13.671885 systemd[1]: run-containerd-runc-k8s.io-21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9-runc.NSKxl6.mount: Deactivated successfully. Dec 13 02:16:15.573489 kubelet[2067]: W1213 02:16:15.573419 2067 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2b54d06_ca83_48b2_a9b2_4585f9b0645f.slice/cri-containerd-383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085.scope WatchSource:0}: task 383d1ac9ec4ee8fa0c3f86564da7e2fca011f88203c80b3110dd593e4e89a085 not found: not found Dec 13 02:16:15.873439 systemd[1]: run-containerd-runc-k8s.io-21e78638854ce7df497da20304658dc3df3c96b6f6b35bd69fb629b4e31ca9d9-runc.UY1mFF.mount: Deactivated successfully. Dec 13 02:16:16.042477 sshd[3885]: pam_unix(sshd:session): session closed for user core Dec 13 02:16:16.047339 systemd[1]: sshd@24-10.128.0.53:22-139.178.68.195:60550.service: Deactivated successfully. Dec 13 02:16:16.048590 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:16:16.049573 systemd-logind[1219]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:16:16.051427 systemd-logind[1219]: Removed session 25.