Dec 13 02:16:53.260169 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:16:53.260197 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:16:53.260211 kernel: BIOS-provided physical RAM map:
Dec 13 02:16:53.260221 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:16:53.260229 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:16:53.260238 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:16:53.260251 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 02:16:53.260261 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 02:16:53.260270 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 02:16:53.260279 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:16:53.260288 kernel: NX (Execute Disable) protection: active
Dec 13 02:16:53.260297 kernel: SMBIOS 2.7 present.
Dec 13 02:16:53.260305 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 02:16:53.260315 kernel: Hypervisor detected: KVM
Dec 13 02:16:53.260330 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:16:53.260339 kernel: kvm-clock: cpu 0, msr 6119b001, primary cpu clock
Dec 13 02:16:53.260350 kernel: kvm-clock: using sched offset of 7976334112 cycles
Dec 13 02:16:53.260362 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:16:53.260372 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 02:16:53.264117 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:16:53.264144 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:16:53.264156 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 02:16:53.264167 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:16:53.264177 kernel: Using GB pages for direct mapping
Dec 13 02:16:53.264188 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:16:53.264199 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 02:16:53.264211 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 02:16:53.264222 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 02:16:53.264232 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 02:16:53.264245 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 02:16:53.264256 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:16:53.264267 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 02:16:53.264277 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 02:16:53.264288 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 02:16:53.264298 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 02:16:53.264309 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 02:16:53.264319 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:16:53.264334 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 02:16:53.264345 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 02:16:53.264356 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 02:16:53.264371 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 02:16:53.264392 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 02:16:53.264405 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 02:16:53.264417 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 02:16:53.264432 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 02:16:53.264444 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 02:16:53.264455 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 02:16:53.264466 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:16:53.264478 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:16:53.264489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 02:16:53.264502 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 02:16:53.264514 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 02:16:53.264528 kernel: Zone ranges:
Dec 13 02:16:53.264540 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:16:53.264552 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 02:16:53.264563 kernel: Normal empty
Dec 13 02:16:53.264575 kernel: Movable zone start for each node
Dec 13 02:16:53.264586 kernel: Early memory node ranges
Dec 13 02:16:53.264597 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:16:53.264608 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 02:16:53.264620 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 02:16:53.264635 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:16:53.264646 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:16:53.264657 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 02:16:53.264669 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 02:16:53.264680 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:16:53.264692 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 02:16:53.264703 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:16:53.264715 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:16:53.264726 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:16:53.264740 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:16:53.264751 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:16:53.264762 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:16:53.264774 kernel: TSC deadline timer available
Dec 13 02:16:53.264785 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:16:53.264796 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 02:16:53.264807 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:16:53.264820 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:16:53.264831 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:16:53.264845 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:16:53.264857 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:16:53.264868 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:16:53.264879 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 02:16:53.264891 kernel: kvm-guest: PV spinlocks enabled
Dec 13 02:16:53.264902 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:16:53.264914 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 02:16:53.264924 kernel: Policy zone: DMA32
Dec 13 02:16:53.264937 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:16:53.264952 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:16:53.264963 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:16:53.264975 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:16:53.264986 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:16:53.264997 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123080K reserved, 0K cma-reserved)
Dec 13 02:16:53.265080 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:16:53.265092 kernel: Kernel/User page tables isolation: enabled
Dec 13 02:16:53.265104 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:16:53.265119 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:16:53.265130 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:16:53.265142 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:16:53.265154 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:16:53.265166 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:16:53.265178 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:16:53.265190 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:16:53.265202 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:16:53.265213 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:16:53.265227 kernel: random: crng init done
Dec 13 02:16:53.265239 kernel: Console: colour VGA+ 80x25
Dec 13 02:16:53.265250 kernel: printk: console [ttyS0] enabled
Dec 13 02:16:53.265262 kernel: ACPI: Core revision 20210730
Dec 13 02:16:53.265273 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 02:16:53.265286 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:16:53.265298 kernel: x2apic enabled
Dec 13 02:16:53.265310 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:16:53.265320 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 02:16:53.265335 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 02:16:53.265346 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:16:53.265357 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:16:53.265369 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:16:53.275444 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:16:53.275469 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:16:53.275481 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:16:53.275494 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 02:16:53.275508 kernel: RETBleed: Vulnerable
Dec 13 02:16:53.275520 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:16:53.275533 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:16:53.275545 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:16:53.275556 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:16:53.275567 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:16:53.275583 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:16:53.275595 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:16:53.275608 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 02:16:53.275620 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 02:16:53.275632 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 02:16:53.275645 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 02:16:53.275659 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 02:16:53.275671 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 02:16:53.275682 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:16:53.275693 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 02:16:53.275705 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 02:16:53.275716 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 02:16:53.275729 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 02:16:53.275740 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 02:16:53.275751 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 02:16:53.275764 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 02:16:53.275776 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:16:53.275790 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:16:53.275801 kernel: LSM: Security Framework initializing
Dec 13 02:16:53.275814 kernel: SELinux: Initializing.
Dec 13 02:16:53.275825 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:16:53.275836 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:16:53.275849 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 02:16:53.275861 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 02:16:53.276101 kernel: signal: max sigframe size: 3632
Dec 13 02:16:53.276118 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:16:53.276131 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:16:53.276148 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:16:53.276160 kernel: x86: Booting SMP configuration:
Dec 13 02:16:53.276172 kernel: .... node #0, CPUs: #1
Dec 13 02:16:53.276223 kernel: kvm-clock: cpu 1, msr 6119b041, secondary cpu clock
Dec 13 02:16:53.276239 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 02:16:53.276252 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 02:16:53.276265 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:16:53.276278 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:16:53.276290 kernel: smpboot: Max logical packages: 1
Dec 13 02:16:53.276305 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 02:16:53.276317 kernel: devtmpfs: initialized
Dec 13 02:16:53.276330 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:16:53.276342 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:16:53.276354 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:16:53.276366 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:16:53.276378 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:16:53.276402 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:16:53.276414 kernel: audit: type=2000 audit(1734056211.715:1): state=initialized audit_enabled=0 res=1
Dec 13 02:16:53.276429 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:16:53.276441 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:16:53.276453 kernel: cpuidle: using governor menu
Dec 13 02:16:53.276465 kernel: ACPI: bus type PCI registered
Dec 13 02:16:53.276476 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:16:53.276488 kernel: dca service started, version 1.12.1
Dec 13 02:16:53.276501 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:16:53.276513 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:16:53.276525 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:16:53.276540 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:16:53.276553 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:16:53.276565 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:16:53.276576 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:16:53.276589 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:16:53.276601 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:16:53.276613 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:16:53.276625 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:16:53.276636 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 02:16:53.276650 kernel: ACPI: Interpreter enabled
Dec 13 02:16:53.276662 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:16:53.276673 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:16:53.276685 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:16:53.276697 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 02:16:53.276709 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:16:53.276934 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:16:53.277055 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 02:16:53.277074 kernel: acpiphp: Slot [3] registered
Dec 13 02:16:53.277087 kernel: acpiphp: Slot [4] registered
Dec 13 02:16:53.277099 kernel: acpiphp: Slot [5] registered
Dec 13 02:16:53.277111 kernel: acpiphp: Slot [6] registered
Dec 13 02:16:53.277122 kernel: acpiphp: Slot [7] registered
Dec 13 02:16:53.277134 kernel: acpiphp: Slot [8] registered
Dec 13 02:16:53.277145 kernel: acpiphp: Slot [9] registered
Dec 13 02:16:53.277157 kernel: acpiphp: Slot [10] registered
Dec 13 02:16:53.277169 kernel: acpiphp: Slot [11] registered
Dec 13 02:16:53.277183 kernel: acpiphp: Slot [12] registered
Dec 13 02:16:53.277196 kernel: acpiphp: Slot [13] registered
Dec 13 02:16:53.277208 kernel: acpiphp: Slot [14] registered
Dec 13 02:16:53.277219 kernel: acpiphp: Slot [15] registered
Dec 13 02:16:53.277231 kernel: acpiphp: Slot [16] registered
Dec 13 02:16:53.277243 kernel: acpiphp: Slot [17] registered
Dec 13 02:16:53.277256 kernel: acpiphp: Slot [18] registered
Dec 13 02:16:53.277267 kernel: acpiphp: Slot [19] registered
Dec 13 02:16:53.277278 kernel: acpiphp: Slot [20] registered
Dec 13 02:16:53.277293 kernel: acpiphp: Slot [21] registered
Dec 13 02:16:53.277305 kernel: acpiphp: Slot [22] registered
Dec 13 02:16:53.277317 kernel: acpiphp: Slot [23] registered
Dec 13 02:16:53.277328 kernel: acpiphp: Slot [24] registered
Dec 13 02:16:53.277341 kernel: acpiphp: Slot [25] registered
Dec 13 02:16:53.277354 kernel: acpiphp: Slot [26] registered
Dec 13 02:16:53.277366 kernel: acpiphp: Slot [27] registered
Dec 13 02:16:53.277377 kernel: acpiphp: Slot [28] registered
Dec 13 02:16:53.277397 kernel: acpiphp: Slot [29] registered
Dec 13 02:16:53.277410 kernel: acpiphp: Slot [30] registered
Dec 13 02:16:53.277424 kernel: acpiphp: Slot [31] registered
Dec 13 02:16:53.277435 kernel: PCI host bridge to bus 0000:00
Dec 13 02:16:53.277553 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:16:53.277658 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:16:53.277760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:16:53.277856 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:16:53.277955 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:16:53.278198 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:16:53.278338 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:16:53.278480 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 02:16:53.278593 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 02:16:53.278706 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 02:16:53.278817 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 02:16:53.278928 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 02:16:53.279044 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 02:16:53.279155 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 02:16:53.279346 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 02:16:53.279471 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 02:16:53.279597 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 02:16:53.279814 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 02:16:53.279924 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 02:16:53.280041 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:16:53.280162 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 02:16:53.280330 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 02:16:53.290013 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 02:16:53.290169 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 02:16:53.290187 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:16:53.290208 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:16:53.290220 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:16:53.290233 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:16:53.290245 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:16:53.290257 kernel: iommu: Default domain type: Translated
Dec 13 02:16:53.290270 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:16:53.290392 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 02:16:53.290508 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:16:53.290618 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 02:16:53.290636 kernel: vgaarb: loaded
Dec 13 02:16:53.290649 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:16:53.290661 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 02:16:53.290674 kernel: PTP clock support registered
Dec 13 02:16:53.290686 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:16:53.290698 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:16:53.290709 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:16:53.290722 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 02:16:53.290736 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 02:16:53.290748 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 02:16:53.290760 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:16:53.290773 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:16:53.290784 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:16:53.290797 kernel: pnp: PnP ACPI init
Dec 13 02:16:53.290809 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:16:53.290822 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:16:53.290835 kernel: NET: Registered PF_INET protocol family
Dec 13 02:16:53.290849 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:16:53.290862 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:16:53.290873 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:16:53.290886 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:16:53.290897 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 02:16:53.290910 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:16:53.290923 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:16:53.290935 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:16:53.290946 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:16:53.290960 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:16:53.291066 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:16:53.291168 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:16:53.291328 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:16:53.291439 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:16:53.291556 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:16:53.291669 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 02:16:53.291690 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:16:53.291702 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:16:53.291714 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 02:16:53.291726 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:16:53.291739 kernel: Initialise system trusted keyrings
Dec 13 02:16:53.291752 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:16:53.291764 kernel: Key type asymmetric registered
Dec 13 02:16:53.291776 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:16:53.291787 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:16:53.291802 kernel: io scheduler mq-deadline registered
Dec 13 02:16:53.291815 kernel: io scheduler kyber registered
Dec 13 02:16:53.291827 kernel: io scheduler bfq registered
Dec 13 02:16:53.291839 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:16:53.291850 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:16:53.291864 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:16:53.291877 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:16:53.291889 kernel: i8042: Warning: Keylock active
Dec 13 02:16:53.291901 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:16:53.291916 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:16:53.292034 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 02:16:53.292143 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 02:16:53.292326 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:16:52 UTC (1734056212)
Dec 13 02:16:53.292444 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 02:16:53.292459 kernel: intel_pstate: CPU model not supported
Dec 13 02:16:53.292471 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:16:53.292484 kernel: Segment Routing with IPv6
Dec 13 02:16:53.292499 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:16:53.292511 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:16:53.292525 kernel: Key type dns_resolver registered
Dec 13 02:16:53.292536 kernel: IPI shorthand broadcast: enabled
Dec 13 02:16:53.292549 kernel: sched_clock: Marking stable (491385876, 312525900)->(970746015, -166834239)
Dec 13 02:16:53.292561 kernel: registered taskstats version 1
Dec 13 02:16:53.292573 kernel: Loading compiled-in X.509 certificates
Dec 13 02:16:53.292586 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:16:53.292598 kernel: Key type .fscrypt registered
Dec 13 02:16:53.292612 kernel: Key type fscrypt-provisioning registered
Dec 13 02:16:53.292624 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:16:53.292636 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:16:53.292647 kernel: ima: No architecture policies found
Dec 13 02:16:53.292660 kernel: clk: Disabling unused clocks
Dec 13 02:16:53.292672 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:16:53.292683 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:16:53.292696 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:16:53.292708 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:16:53.292723 kernel: Run /init as init process
Dec 13 02:16:53.292734 kernel: with arguments:
Dec 13 02:16:53.292746 kernel: /init
Dec 13 02:16:53.292758 kernel: with environment:
Dec 13 02:16:53.292770 kernel: HOME=/
Dec 13 02:16:53.292781 kernel: TERM=linux
Dec 13 02:16:53.292793 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:16:53.292809 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:16:53.292827 systemd[1]: Detected virtualization amazon.
Dec 13 02:16:53.292841 systemd[1]: Detected architecture x86-64.
Dec 13 02:16:53.292853 systemd[1]: Running in initrd.
Dec 13 02:16:53.292865 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:16:53.292891 systemd[1]: Hostname set to .
Dec 13 02:16:53.292907 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:16:53.292922 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:16:53.292935 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:16:53.292948 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:16:53.292961 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:16:53.292974 systemd[1]: Reached target paths.target.
Dec 13 02:16:53.292987 systemd[1]: Reached target slices.target.
Dec 13 02:16:53.292999 systemd[1]: Reached target swap.target.
Dec 13 02:16:53.293012 systemd[1]: Reached target timers.target.
Dec 13 02:16:53.293029 systemd[1]: Listening on iscsid.socket.
Dec 13 02:16:53.293042 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:16:53.293055 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:16:53.293070 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:16:53.293084 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:16:53.293097 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:16:53.293110 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:16:53.293123 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:16:53.293139 systemd[1]: Reached target sockets.target.
Dec 13 02:16:53.293152 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:16:53.293168 systemd[1]: Finished network-cleanup.service.
Dec 13 02:16:53.293181 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:16:53.293194 systemd[1]: Starting systemd-journald.service...
Dec 13 02:16:53.293209 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:16:53.293222 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:16:53.293234 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:16:53.293247 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:16:53.293262 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:16:53.293282 systemd-journald[185]: Journal started
Dec 13 02:16:53.293347 systemd-journald[185]: Runtime Journal (/run/log/journal/ec276b4bee41db9e2c02f015a7a84a9a) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:16:53.260424 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:16:53.395450 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:16:53.395488 kernel: Bridge firewalling registered
Dec 13 02:16:53.395508 systemd[1]: Started systemd-journald.service.
Dec 13 02:16:53.395530 kernel: audit: type=1130 audit(1734056213.365:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:53.395555 kernel: audit: type=1130 audit(1734056213.370:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:53.395583 kernel: SCSI subsystem initialized
Dec 13 02:16:53.395602 kernel: audit: type=1130 audit(1734056213.376:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:53.395620 kernel: audit: type=1130 audit(1734056213.382:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:53.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:53.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:53.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:53.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:16:53.306134 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 02:16:53.306146 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:16:53.306192 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:16:53.314955 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 02:16:53.346180 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:16:53.372632 systemd[1]: Started systemd-resolved.service.
Dec 13 02:16:53.377716 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:16:53.383852 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:16:53.390265 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:16:53.392358 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:16:53.436115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:16:53.446213 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:16:53.446284 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:16:53.446304 kernel: audit: type=1130 audit(1734056213.432:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.452217 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:16:53.466534 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:16:53.466572 kernel: audit: type=1130 audit(1734056213.453:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.455768 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:16:53.468911 systemd-modules-load[186]: Inserted module 'dm_multipath' Dec 13 02:16:53.471452 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:16:53.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.478699 systemd[1]: Starting systemd-sysctl.service... 
Dec 13 02:16:53.489279 kernel: audit: type=1130 audit(1734056213.474:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.489314 dracut-cmdline[201]: dracut-dracut-053 Dec 13 02:16:53.492006 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:16:53.503301 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:16:53.510281 kernel: audit: type=1130 audit(1734056213.503:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.571409 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:16:53.604413 kernel: iscsi: registered transport (tcp) Dec 13 02:16:53.664544 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:16:53.664663 kernel: QLogic iSCSI HBA Driver Dec 13 02:16:53.721330 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:16:53.729416 kernel: audit: type=1130 audit(1734056213.721:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:53.727902 systemd[1]: Starting dracut-pre-udev.service... Dec 13 02:16:53.805430 kernel: raid6: avx512x4 gen() 10486 MB/s Dec 13 02:16:53.823447 kernel: raid6: avx512x4 xor() 4659 MB/s Dec 13 02:16:53.840414 kernel: raid6: avx512x2 gen() 12657 MB/s Dec 13 02:16:53.858419 kernel: raid6: avx512x2 xor() 18459 MB/s Dec 13 02:16:53.876415 kernel: raid6: avx512x1 gen() 11351 MB/s Dec 13 02:16:53.897031 kernel: raid6: avx512x1 xor() 5641 MB/s Dec 13 02:16:53.914420 kernel: raid6: avx2x4 gen() 5989 MB/s Dec 13 02:16:53.931961 kernel: raid6: avx2x4 xor() 4366 MB/s Dec 13 02:16:53.948413 kernel: raid6: avx2x2 gen() 8585 MB/s Dec 13 02:16:53.980420 kernel: raid6: avx2x2 xor() 14395 MB/s Dec 13 02:16:53.998442 kernel: raid6: avx2x1 gen() 5414 MB/s Dec 13 02:16:54.015410 kernel: raid6: avx2x1 xor() 9733 MB/s Dec 13 02:16:54.036421 kernel: raid6: sse2x4 gen() 7072 MB/s Dec 13 02:16:54.054416 kernel: raid6: sse2x4 xor() 2639 MB/s Dec 13 02:16:54.071409 kernel: raid6: sse2x2 gen() 8415 MB/s Dec 13 02:16:54.088416 kernel: raid6: sse2x2 xor() 4155 MB/s Dec 13 02:16:54.105509 kernel: raid6: sse2x1 gen() 7580 MB/s Dec 13 02:16:54.123359 kernel: raid6: sse2x1 xor() 4215 MB/s Dec 13 02:16:54.123443 kernel: raid6: using algorithm avx512x2 gen() 12657 MB/s Dec 13 02:16:54.123461 kernel: raid6: .... xor() 18459 MB/s, rmw enabled Dec 13 02:16:54.124324 kernel: raid6: using avx512x2 recovery algorithm Dec 13 02:16:54.141410 kernel: xor: automatically using best checksumming function avx Dec 13 02:16:54.283413 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:16:54.295493 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:16:54.299016 systemd[1]: Starting systemd-udevd.service... 
Dec 13 02:16:54.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:54.297000 audit: BPF prog-id=7 op=LOAD Dec 13 02:16:54.297000 audit: BPF prog-id=8 op=LOAD Dec 13 02:16:54.316930 systemd-udevd[385]: Using default interface naming scheme 'v252'. Dec 13 02:16:54.323186 systemd[1]: Started systemd-udevd.service. Dec 13 02:16:54.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:54.325908 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:16:54.350462 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Dec 13 02:16:54.403005 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:16:54.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:54.405951 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:16:54.498755 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:16:54.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:54.590573 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 02:16:54.594412 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 02:16:54.594596 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Dec 13 02:16:54.594736 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:4b:81:ac:29:91 Dec 13 02:16:54.594874 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:16:54.631510 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:16:54.631586 kernel: AES CTR mode by8 optimization enabled Dec 13 02:16:54.609894 (udev-worker)[431]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:16:54.687020 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 02:16:54.687397 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 02:16:54.715445 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 02:16:54.724412 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:16:54.724486 kernel: GPT:9289727 != 16777215 Dec 13 02:16:54.724503 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:16:54.724519 kernel: GPT:9289727 != 16777215 Dec 13 02:16:54.724534 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:16:54.724549 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:54.823594 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (435) Dec 13 02:16:54.859142 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:16:54.919173 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:16:54.954154 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:16:54.974468 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:16:54.979667 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:16:54.991956 systemd[1]: Starting disk-uuid.service... Dec 13 02:16:55.000347 disk-uuid[594]: Primary Header is updated. Dec 13 02:16:55.000347 disk-uuid[594]: Secondary Entries is updated. Dec 13 02:16:55.000347 disk-uuid[594]: Secondary Header is updated. 
Dec 13 02:16:55.009415 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:55.016059 kernel: GPT:disk_guids don't match. Dec 13 02:16:55.016301 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:16:55.016327 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:55.027422 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:56.023475 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:16:56.024028 disk-uuid[595]: The operation has completed successfully. Dec 13 02:16:56.223154 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:16:56.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.223605 systemd[1]: Finished disk-uuid.service. Dec 13 02:16:56.234701 systemd[1]: Starting verity-setup.service... Dec 13 02:16:56.277400 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:16:56.391686 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:16:56.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.394550 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:16:56.395676 systemd[1]: Finished verity-setup.service. Dec 13 02:16:56.482402 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:16:56.483184 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:16:56.484394 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:16:56.486650 systemd[1]: Starting ignition-setup.service... 
Dec 13 02:16:56.488692 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:16:56.511001 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:16:56.511069 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:16:56.511092 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:16:56.538554 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:16:56.553629 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:16:56.576059 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:16:56.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.580000 audit: BPF prog-id=9 op=LOAD Dec 13 02:16:56.584425 systemd[1]: Starting systemd-networkd.service... Dec 13 02:16:56.600660 systemd[1]: Finished ignition-setup.service. Dec 13 02:16:56.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.604865 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:16:56.625239 systemd-networkd[1104]: lo: Link UP Dec 13 02:16:56.625251 systemd-networkd[1104]: lo: Gained carrier Dec 13 02:16:56.626843 systemd-networkd[1104]: Enumeration completed Dec 13 02:16:56.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.627053 systemd[1]: Started systemd-networkd.service. Dec 13 02:16:56.627687 systemd-networkd[1104]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:16:56.628743 systemd[1]: Reached target network.target. 
Dec 13 02:16:56.635425 systemd[1]: Starting iscsiuio.service... Dec 13 02:16:56.636211 systemd-networkd[1104]: eth0: Link UP Dec 13 02:16:56.636215 systemd-networkd[1104]: eth0: Gained carrier Dec 13 02:16:56.649677 systemd[1]: Started iscsiuio.service. Dec 13 02:16:56.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.654637 systemd[1]: Starting iscsid.service... Dec 13 02:16:56.664961 iscsid[1111]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:16:56.664961 iscsid[1111]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 02:16:56.664961 iscsid[1111]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:16:56.664961 iscsid[1111]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:16:56.664961 iscsid[1111]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:16:56.664961 iscsid[1111]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:16:56.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.669644 systemd[1]: Started iscsid.service. Dec 13 02:16:56.670521 systemd-networkd[1104]: eth0: DHCPv4 address 172.31.19.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:16:56.681003 systemd[1]: Starting dracut-initqueue.service... 
Dec 13 02:16:56.704845 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:16:56.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:56.705104 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:16:56.707967 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:16:56.709274 systemd[1]: Reached target remote-fs.target. Dec 13 02:16:56.712610 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:16:56.729759 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:16:56.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.463711 ignition[1106]: Ignition 2.14.0 Dec 13 02:16:57.463724 ignition[1106]: Stage: fetch-offline Dec 13 02:16:57.463861 ignition[1106]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:57.463902 ignition[1106]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:57.476247 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:57.477641 ignition[1106]: Ignition finished successfully Dec 13 02:16:57.479518 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:16:57.481644 systemd[1]: Starting ignition-fetch.service... Dec 13 02:16:57.489443 kernel: kauditd_printk_skb: 17 callbacks suppressed Dec 13 02:16:57.489484 kernel: audit: type=1130 audit(1734056217.479:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.498220 ignition[1130]: Ignition 2.14.0 Dec 13 02:16:57.498233 ignition[1130]: Stage: fetch Dec 13 02:16:57.498612 ignition[1130]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:57.498648 ignition[1130]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:57.509132 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:57.510472 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:57.548304 ignition[1130]: INFO : PUT result: OK Dec 13 02:16:57.552297 ignition[1130]: DEBUG : parsed url from cmdline: "" Dec 13 02:16:57.552297 ignition[1130]: INFO : no config URL provided Dec 13 02:16:57.552297 ignition[1130]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:16:57.552297 ignition[1130]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 02:16:57.557550 ignition[1130]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:57.557550 ignition[1130]: INFO : PUT result: OK Dec 13 02:16:57.557550 ignition[1130]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 02:16:57.562693 ignition[1130]: INFO : GET result: OK Dec 13 02:16:57.562693 ignition[1130]: DEBUG : parsing config with SHA512: e41f7cf1fb458bd0d0046313d4084efb9116867d95bb50101f2d9089b0176181382451cfd50cb4b83b8a7124404ce55907f729a1bf55ca585d97212e6eaabefd Dec 13 02:16:57.572764 unknown[1130]: fetched base config from "system" Dec 13 02:16:57.572779 unknown[1130]: fetched base config from "system" Dec 13 02:16:57.573719 ignition[1130]: fetch: fetch complete Dec 13 02:16:57.572787 unknown[1130]: fetched user config from "aws" Dec 13 02:16:57.573726 ignition[1130]: fetch: fetch passed Dec 13 02:16:57.573782 ignition[1130]: Ignition finished successfully Dec 13 02:16:57.579170 systemd[1]: Finished ignition-fetch.service. Dec 13 02:16:57.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.582422 systemd[1]: Starting ignition-kargs.service... Dec 13 02:16:57.588594 kernel: audit: type=1130 audit(1734056217.579:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.599997 ignition[1136]: Ignition 2.14.0 Dec 13 02:16:57.600014 ignition[1136]: Stage: kargs Dec 13 02:16:57.600226 ignition[1136]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:57.600319 ignition[1136]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:57.616013 ignition[1136]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:57.621051 ignition[1136]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:57.623059 ignition[1136]: INFO : PUT result: OK Dec 13 02:16:57.627560 ignition[1136]: kargs: kargs passed Dec 13 02:16:57.627814 ignition[1136]: Ignition finished successfully Dec 13 02:16:57.630750 systemd[1]: Finished ignition-kargs.service. Dec 13 02:16:57.635487 kernel: audit: type=1130 audit(1734056217.629:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.631852 systemd[1]: Starting ignition-disks.service... Dec 13 02:16:57.645339 ignition[1142]: Ignition 2.14.0 Dec 13 02:16:57.645353 ignition[1142]: Stage: disks Dec 13 02:16:57.645589 ignition[1142]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:57.645633 ignition[1142]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:57.663337 ignition[1142]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:57.665339 ignition[1142]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:57.667613 ignition[1142]: INFO : PUT result: OK Dec 13 02:16:57.671455 ignition[1142]: disks: disks passed Dec 13 02:16:57.671527 ignition[1142]: Ignition finished successfully Dec 13 02:16:57.673487 systemd[1]: Finished ignition-disks.service. Dec 13 02:16:57.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.675709 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:16:57.680633 kernel: audit: type=1130 audit(1734056217.674:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.683943 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:16:57.685953 systemd[1]: Reached target local-fs.target. Dec 13 02:16:57.686836 systemd[1]: Reached target sysinit.target. Dec 13 02:16:57.688757 systemd[1]: Reached target basic.target. 
Dec 13 02:16:57.697145 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:16:57.741715 systemd-fsck[1150]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:16:57.745818 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:16:57.769926 kernel: audit: type=1130 audit(1734056217.746:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:57.757162 systemd[1]: Mounting sysroot.mount... Dec 13 02:16:57.796541 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:16:57.797526 systemd[1]: Mounted sysroot.mount. Dec 13 02:16:57.797726 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:16:57.814200 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:16:57.827312 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:16:57.837141 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:16:57.837298 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:16:57.853337 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:16:57.877848 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:16:57.881320 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 02:16:57.893406 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1167) Dec 13 02:16:57.895935 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:16:57.895988 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:16:57.896007 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:16:57.898342 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:16:57.906410 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:16:57.909336 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:16:57.917612 initrd-setup-root[1198]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:16:57.925703 initrd-setup-root[1206]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:16:57.931348 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:16:58.196072 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:16:58.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:58.198851 systemd[1]: Starting ignition-mount.service... Dec 13 02:16:58.201527 kernel: audit: type=1130 audit(1734056218.196:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:58.204279 systemd[1]: Starting sysroot-boot.service... Dec 13 02:16:58.212431 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:16:58.212564 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:16:58.240668 systemd[1]: Finished sysroot-boot.service. 
Dec 13 02:16:58.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:58.246210 ignition[1233]: INFO : Ignition 2.14.0 Dec 13 02:16:58.246210 ignition[1233]: INFO : Stage: mount Dec 13 02:16:58.248357 kernel: audit: type=1130 audit(1734056218.241:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:58.248404 ignition[1233]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:58.248404 ignition[1233]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:58.257111 ignition[1233]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:58.258469 ignition[1233]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:58.259863 ignition[1233]: INFO : PUT result: OK Dec 13 02:16:58.264236 ignition[1233]: INFO : mount: mount passed Dec 13 02:16:58.265191 ignition[1233]: INFO : Ignition finished successfully Dec 13 02:16:58.267143 systemd[1]: Finished ignition-mount.service. Dec 13 02:16:58.274551 kernel: audit: type=1130 audit(1734056218.266:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:58.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:16:58.269271 systemd[1]: Starting ignition-files.service... Dec 13 02:16:58.281334 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 02:16:58.296410 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1242) Dec 13 02:16:58.298949 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:16:58.299007 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:16:58.299025 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:16:58.306405 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:16:58.309802 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:16:58.322611 ignition[1261]: INFO : Ignition 2.14.0 Dec 13 02:16:58.322611 ignition[1261]: INFO : Stage: files Dec 13 02:16:58.324844 ignition[1261]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:16:58.324844 ignition[1261]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:16:58.340014 ignition[1261]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:16:58.341554 ignition[1261]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:16:58.343857 ignition[1261]: INFO : PUT result: OK Dec 13 02:16:58.349161 ignition[1261]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:16:58.354574 ignition[1261]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:16:58.354574 ignition[1261]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:16:58.373567 ignition[1261]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:16:58.375546 ignition[1261]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:16:58.378668 unknown[1261]: wrote ssh authorized keys file for user: core Dec 13 02:16:58.380056 ignition[1261]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:16:58.393357 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:16:58.395408 ignition[1261]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:16:58.493367 ignition[1261]: INFO : GET result: OK Dec 13 02:16:58.610972 systemd-networkd[1104]: eth0: Gained IPv6LL Dec 13 02:16:58.666546 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:16:58.670783 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:16:58.677018 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:16:58.677018 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:16:58.677018 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:16:58.691158 ignition[1261]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3620740239" Dec 13 02:16:58.694443 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1263) Dec 13 02:16:58.694471 ignition[1261]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3620740239": device or resource busy Dec 13 02:16:58.694471 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3620740239", trying btrfs: device or resource busy Dec 13 02:16:58.694471 ignition[1261]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3620740239" Dec 13 02:16:58.694471 ignition[1261]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3620740239" Dec 13 02:16:58.701753 ignition[1261]: INFO : op(3): [started] unmounting "/mnt/oem3620740239" Dec 13 02:16:58.704510 ignition[1261]: INFO : op(3): [finished] unmounting "/mnt/oem3620740239" Dec 13 02:16:58.704510 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:16:58.710288 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:16:58.710288 ignition[1261]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 02:16:59.215054 ignition[1261]: INFO : GET result: OK Dec 13 02:16:59.353535 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: 
createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:16:59.355938 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:16:59.355938 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:16:59.396454 ignition[1261]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1643950104" Dec 13 02:16:59.405651 ignition[1261]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1643950104": device or resource busy Dec 13 02:16:59.405651 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1643950104", trying btrfs: device or resource busy Dec 13 02:16:59.405651 ignition[1261]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1643950104" Dec 13 02:16:59.405651 ignition[1261]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1643950104" Dec 13 02:16:59.405651 ignition[1261]: INFO : op(6): [started] unmounting "/mnt/oem1643950104" Dec 13 02:16:59.405651 ignition[1261]: INFO : op(6): [finished] unmounting "/mnt/oem1643950104" Dec 13 02:16:59.405651 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:16:59.405651 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file 
"/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:16:59.405651 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:16:59.406910 systemd[1]: mnt-oem1643950104.mount: Deactivated successfully. Dec 13 02:16:59.435140 ignition[1261]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3329746313" Dec 13 02:16:59.435140 ignition[1261]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3329746313": device or resource busy Dec 13 02:16:59.435140 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3329746313", trying btrfs: device or resource busy Dec 13 02:16:59.435140 ignition[1261]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3329746313" Dec 13 02:16:59.458117 ignition[1261]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3329746313" Dec 13 02:16:59.460007 ignition[1261]: INFO : op(9): [started] unmounting "/mnt/oem3329746313" Dec 13 02:16:59.474914 systemd[1]: mnt-oem3329746313.mount: Deactivated successfully. 
Dec 13 02:16:59.479003 ignition[1261]: INFO : op(9): [finished] unmounting "/mnt/oem3329746313" Dec 13 02:16:59.480431 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:16:59.480431 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:16:59.480431 ignition[1261]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Dec 13 02:16:59.769235 ignition[1261]: INFO : GET result: OK Dec 13 02:17:00.463779 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Dec 13 02:17:00.466924 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:17:00.466924 ignition[1261]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:17:00.474286 ignition[1261]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem470615974" Dec 13 02:17:00.482954 ignition[1261]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem470615974": device or resource busy Dec 13 02:17:00.482954 ignition[1261]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem470615974", trying btrfs: device or resource busy Dec 13 02:17:00.482954 ignition[1261]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem470615974" Dec 13 02:17:00.499466 ignition[1261]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem470615974" Dec 13 02:17:00.499466 ignition[1261]: INFO : op(c): [started] unmounting "/mnt/oem470615974" Dec 13 02:17:00.512928 ignition[1261]: INFO : op(c): [finished] unmounting "/mnt/oem470615974" Dec 13 
02:17:00.512928 ignition[1261]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:17:00.512928 ignition[1261]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:17:00.512928 ignition[1261]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:17:00.507910 systemd[1]: mnt-oem470615974.mount: Deactivated successfully. Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(13): [started] processing unit "nvidia.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(13): [finished] processing unit "nvidia.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(16): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:17:00.522883 
ignition[1261]: INFO : files: op(16): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(17): [started] setting preset to enabled for "nvidia.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(17): [finished] setting preset to enabled for "nvidia.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:17:00.522883 ignition[1261]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:17:00.560207 ignition[1261]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:17:00.560207 ignition[1261]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:17:00.560207 ignition[1261]: INFO : files: files passed Dec 13 02:17:00.560207 ignition[1261]: INFO : Ignition finished successfully Dec 13 02:17:00.570665 kernel: audit: type=1130 audit(1734056220.562:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.560982 systemd[1]: Finished ignition-files.service. Dec 13 02:17:00.571449 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Dec 13 02:17:00.575320 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:17:00.576399 systemd[1]: Starting ignition-quench.service... Dec 13 02:17:00.587240 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:17:00.587370 systemd[1]: Finished ignition-quench.service. Dec 13 02:17:00.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.595459 kernel: audit: type=1130 audit(1734056220.589:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.605169 initrd-setup-root-after-ignition[1286]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:17:00.608459 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:17:00.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.613175 systemd[1]: Reached target ignition-complete.target. Dec 13 02:17:00.617216 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:17:00.636662 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:17:00.636786 systemd[1]: Finished initrd-parse-etc.service. 
Dec 13 02:17:00.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.638761 systemd[1]: Reached target initrd-fs.target. Dec 13 02:17:00.640368 systemd[1]: Reached target initrd.target. Dec 13 02:17:00.643970 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:17:00.646254 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:17:00.666431 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:17:00.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.671330 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:17:00.690948 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:17:00.692884 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:17:00.695339 systemd[1]: Stopped target timers.target. Dec 13 02:17:00.697699 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:17:00.697828 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:17:00.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.700895 systemd[1]: Stopped target initrd.target. Dec 13 02:17:00.702793 systemd[1]: Stopped target basic.target. Dec 13 02:17:00.703738 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:17:00.706740 systemd[1]: Stopped target ignition-diskful.target. 
Dec 13 02:17:00.723274 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:17:00.725357 systemd[1]: Stopped target remote-fs.target. Dec 13 02:17:00.728265 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:17:00.730988 systemd[1]: Stopped target sysinit.target. Dec 13 02:17:00.733267 systemd[1]: Stopped target local-fs.target. Dec 13 02:17:00.735234 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:17:00.737364 systemd[1]: Stopped target swap.target. Dec 13 02:17:00.739664 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:17:00.741068 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:17:00.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.743750 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:17:00.746722 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:17:00.748478 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:17:00.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.751245 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:17:00.753804 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:17:00.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.756639 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:17:00.758356 systemd[1]: Stopped ignition-files.service. 
Dec 13 02:17:00.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.763588 systemd[1]: Stopping ignition-mount.service... Dec 13 02:17:00.807218 ignition[1299]: INFO : Ignition 2.14.0 Dec 13 02:17:00.807218 ignition[1299]: INFO : Stage: umount Dec 13 02:17:00.807218 ignition[1299]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:17:00.807218 ignition[1299]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:17:00.807218 ignition[1299]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:17:00.807218 ignition[1299]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:17:00.807218 ignition[1299]: INFO : PUT result: OK Dec 13 02:17:00.820544 iscsid[1111]: iscsid shutting down. Dec 13 02:17:00.806300 systemd[1]: Stopping iscsid.service... Dec 13 02:17:00.823523 ignition[1299]: INFO : umount: umount passed Dec 13 02:17:00.823523 ignition[1299]: INFO : Ignition finished successfully Dec 13 02:17:00.823535 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:17:00.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.824036 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:17:00.858984 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:17:00.875150 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:17:00.875843 systemd[1]: Stopped systemd-udev-trigger.service. 
Dec 13 02:17:00.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.885455 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:17:00.890924 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:17:00.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.901005 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:17:00.901269 systemd[1]: Stopped iscsid.service. Dec 13 02:17:00.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.905107 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:17:00.905334 systemd[1]: Stopped ignition-mount.service. Dec 13 02:17:00.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.910031 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:17:00.910282 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:17:00.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.915183 systemd[1]: ignition-disks.service: Deactivated successfully. 
Dec 13 02:17:00.915352 systemd[1]: Stopped ignition-disks.service. Dec 13 02:17:00.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.919612 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:17:00.919744 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:17:00.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.927639 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:17:00.930999 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:17:00.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.946049 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:17:00.946156 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:17:00.946746 systemd[1]: Stopped target paths.target. Dec 13 02:17:00.946886 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:17:00.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.950473 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:17:00.952849 systemd[1]: Stopped target slices.target. Dec 13 02:17:00.955216 systemd[1]: Stopped target sockets.target. 
Dec 13 02:17:00.955356 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:17:00.955456 systemd[1]: Closed iscsid.socket. Dec 13 02:17:00.960187 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:17:00.960328 systemd[1]: Stopped ignition-setup.service. Dec 13 02:17:00.961453 systemd[1]: Stopping iscsiuio.service... Dec 13 02:17:00.984586 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:17:00.985549 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:17:00.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.985786 systemd[1]: Stopped iscsiuio.service. Dec 13 02:17:00.995113 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:17:01.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:00.998540 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:17:01.010990 systemd[1]: Stopped target network.target. Dec 13 02:17:01.018288 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:17:01.018635 systemd[1]: Closed iscsiuio.socket. Dec 13 02:17:01.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.023100 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:17:01.023222 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:17:01.027792 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:17:01.034581 systemd-networkd[1104]: eth0: DHCPv6 lease lost Dec 13 02:17:01.037295 systemd[1]: Stopping systemd-resolved.service... 
Dec 13 02:17:01.041661 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:17:01.041771 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:17:01.048548 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:17:01.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.047000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:17:01.048723 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:17:01.054077 systemd[1]: Stopping network-cleanup.service... Dec 13 02:17:01.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.057500 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:17:01.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.057588 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:17:01.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.059948 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:17:01.060001 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:17:01.063114 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:17:01.063183 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:17:01.070478 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:17:01.076305 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Dec 13 02:17:01.078227 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:17:01.078439 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:17:01.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.105785 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:17:01.106265 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:17:01.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.122000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:17:01.124355 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:17:01.127247 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:17:01.128817 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:17:01.128864 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:17:01.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.131574 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:17:01.132672 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:17:01.137927 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:17:01.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.137993 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:17:01.151730 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 02:17:01.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.151809 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:17:01.160070 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:17:01.179742 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:17:01.185049 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:17:01.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.224575 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:17:01.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.224755 systemd[1]: Stopped network-cleanup.service. Dec 13 02:17:01.239039 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:17:01.239238 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:17:01.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:01.255550 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:17:01.270566 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:17:01.375519 systemd[1]: Switching root. 
Dec 13 02:17:01.433331 systemd-journald[185]: Journal stopped
Dec 13 02:17:09.847865 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 02:17:09.847961 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 02:17:09.847983 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 02:17:09.848006 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 02:17:09.848024 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 02:17:09.848041 kernel: SELinux: policy capability open_perms=1
Dec 13 02:17:09.848059 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 02:17:09.848076 kernel: SELinux: policy capability always_check_network=0
Dec 13 02:17:09.848094 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 02:17:09.848111 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 02:17:09.848134 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 02:17:09.848151 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 02:17:09.848172 kernel: kauditd_printk_skb: 40 callbacks suppressed
Dec 13 02:17:09.848185 kernel: audit: type=1403 audit(1734056222.929:78): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:17:09.848198 systemd[1]: Successfully loaded SELinux policy in 168.556ms.
Dec 13 02:17:09.848216 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.995ms.
Dec 13 02:17:09.848229 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:17:09.848242 systemd[1]: Detected virtualization amazon.
Dec 13 02:17:09.848254 systemd[1]: Detected architecture x86-64.
Dec 13 02:17:09.848266 systemd[1]: Detected first boot.
Dec 13 02:17:09.848285 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:17:09.848530 kernel: audit: type=1400 audit(1734056223.210:79): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:17:09.848562 kernel: audit: type=1400 audit(1734056223.210:80): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:17:09.848581 kernel: audit: type=1334 audit(1734056223.218:81): prog-id=10 op=LOAD
Dec 13 02:17:09.848665 kernel: audit: type=1334 audit(1734056223.218:82): prog-id=10 op=UNLOAD
Dec 13 02:17:09.848691 kernel: audit: type=1334 audit(1734056223.228:83): prog-id=11 op=LOAD
Dec 13 02:17:09.848715 kernel: audit: type=1334 audit(1734056223.228:84): prog-id=11 op=UNLOAD
Dec 13 02:17:09.848735 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 02:17:09.848755 kernel: audit: type=1400 audit(1734056223.655:85): avc: denied { associate } for pid=1332 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:17:09.848776 kernel: audit: type=1300 audit(1734056223.655:85): arch=c000003e syscall=188 success=yes exit=0 a0=c000024302 a1=c00002a3d8 a2=c000028840 a3=32 items=0 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:17:09.848796 kernel: audit: type=1327 audit(1734056223.655:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:17:09.848817 systemd[1]: Populated /etc with preset unit settings.
Dec 13 02:17:09.848839 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:17:09.848865 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:17:09.848887 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:17:09.848911 kernel: kauditd_printk_skb: 6 callbacks suppressed
Dec 13 02:17:09.848930 kernel: audit: type=1334 audit(1734056229.537:87): prog-id=12 op=LOAD
Dec 13 02:17:09.848949 kernel: audit: type=1334 audit(1734056229.537:88): prog-id=3 op=UNLOAD
Dec 13 02:17:09.848968 kernel: audit: type=1334 audit(1734056229.538:89): prog-id=13 op=LOAD
Dec 13 02:17:09.848987 kernel: audit: type=1334 audit(1734056229.539:90): prog-id=14 op=LOAD
Dec 13 02:17:09.849057 kernel: audit: type=1334 audit(1734056229.539:91): prog-id=4 op=UNLOAD
Dec 13 02:17:09.849083 kernel: audit: type=1334 audit(1734056229.539:92): prog-id=5 op=UNLOAD
Dec 13 02:17:09.849101 kernel: audit: type=1334 audit(1734056229.540:93): prog-id=15 op=LOAD
Dec 13 02:17:09.849119 kernel: audit: type=1334 audit(1734056229.540:94): prog-id=12 op=UNLOAD
Dec 13 02:17:09.849135 kernel: audit: type=1334 audit(1734056229.541:95): prog-id=16 op=LOAD
Dec 13 02:17:09.849152 kernel: audit: type=1334 audit(1734056229.544:96): prog-id=17 op=LOAD
Dec 13 02:17:09.849168 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 02:17:09.849185 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 02:17:09.849203 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:17:09.849227 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 02:17:09.849247 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 02:17:09.849268 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 02:17:09.849288 systemd[1]: Created slice system-getty.slice.
Dec 13 02:17:09.849308 systemd[1]: Created slice system-modprobe.slice.
Dec 13 02:17:09.849328 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 02:17:09.849347 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 02:17:09.849366 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 02:17:09.849402 systemd[1]: Created slice user.slice.
Dec 13 02:17:09.849421 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:17:09.849440 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 02:17:09.849459 systemd[1]: Set up automount boot.automount.
Dec 13 02:17:09.849479 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 02:17:09.849499 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 02:17:09.849520 systemd[1]: Stopped target initrd-fs.target.
Dec 13 02:17:09.849540 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 02:17:09.849560 systemd[1]: Reached target integritysetup.target.
Dec 13 02:17:09.849585 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:17:09.849607 systemd[1]: Reached target remote-fs.target.
Dec 13 02:17:09.849627 systemd[1]: Reached target slices.target.
Dec 13 02:17:09.849647 systemd[1]: Reached target swap.target.
Dec 13 02:17:09.849667 systemd[1]: Reached target torcx.target.
Dec 13 02:17:09.849687 systemd[1]: Reached target veritysetup.target.
Dec 13 02:17:09.849709 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 02:17:09.849729 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 02:17:09.849749 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:17:09.849769 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:17:09.849792 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:17:09.849813 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 02:17:09.849833 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 02:17:09.849854 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 02:17:09.849886 systemd[1]: Mounting media.mount...
Dec 13 02:17:09.849912 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:09.849933 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 02:17:09.849954 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 02:17:09.849975 systemd[1]: Mounting tmp.mount...
Dec 13 02:17:09.849994 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 02:17:09.850015 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:17:09.850037 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:17:09.850059 systemd[1]: Starting modprobe@configfs.service...
Dec 13 02:17:09.850088 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:17:09.850111 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:17:09.850131 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:17:09.850153 systemd[1]: Starting modprobe@fuse.service...
Dec 13 02:17:09.850173 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:17:09.850195 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 02:17:09.850215 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 02:17:09.850237 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 02:17:09.850258 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 02:17:09.850279 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 02:17:09.850302 systemd[1]: Stopped systemd-journald.service.
Dec 13 02:17:09.850322 systemd[1]: Starting systemd-journald.service...
Dec 13 02:17:09.850343 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:17:09.850363 systemd[1]: Starting systemd-network-generator.service...
Dec 13 02:17:09.850535 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 02:17:09.850558 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:17:09.850577 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 02:17:09.850597 systemd[1]: Stopped verity-setup.service.
Dec 13 02:17:09.850616 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:09.850640 kernel: loop: module loaded
Dec 13 02:17:09.850661 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 02:17:09.850681 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 02:17:09.850699 systemd[1]: Mounted media.mount.
Dec 13 02:17:09.850718 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 02:17:09.850740 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 02:17:09.850762 systemd[1]: Mounted tmp.mount.
Dec 13 02:17:09.850780 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:17:09.850797 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 02:17:09.850819 systemd[1]: Finished modprobe@configfs.service.
Dec 13 02:17:09.850841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:17:09.850862 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:17:09.850884 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:17:09.850907 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:17:09.850931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:17:09.850951 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:17:09.850972 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:17:09.850995 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:17:09.851023 systemd-journald[1404]: Journal started
Dec 13 02:17:09.851170 systemd-journald[1404]: Runtime Journal (/run/log/journal/ec276b4bee41db9e2c02f015a7a84a9a) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:17:02.929000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 02:17:03.210000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:17:03.210000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 02:17:03.218000 audit: BPF prog-id=10 op=LOAD
Dec 13 02:17:03.218000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 02:17:03.228000 audit: BPF prog-id=11 op=LOAD
Dec 13 02:17:03.228000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 02:17:03.655000 audit[1332]: AVC avc: denied { associate } for pid=1332 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 02:17:03.655000 audit[1332]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c000024302 a1=c00002a3d8 a2=c000028840 a3=32 items=0 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:17:03.655000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:17:03.657000 audit[1332]: AVC avc: denied { associate } for pid=1332 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 02:17:03.657000 audit[1332]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0000243d9 a2=1ed a3=0 items=2 ppid=1315 pid=1332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:17:03.657000 audit: CWD cwd="/"
Dec 13 02:17:03.657000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:17:03.657000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 02:17:03.657000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 02:17:09.537000 audit: BPF prog-id=12 op=LOAD
Dec 13 02:17:09.537000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 02:17:09.538000 audit: BPF prog-id=13 op=LOAD
Dec 13 02:17:09.539000 audit: BPF prog-id=14 op=LOAD
Dec 13 02:17:09.539000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 02:17:09.539000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 02:17:09.540000 audit: BPF prog-id=15 op=LOAD
Dec 13 02:17:09.540000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 02:17:09.873146 systemd[1]: Started systemd-journald.service.
Dec 13 02:17:09.541000 audit: BPF prog-id=16 op=LOAD
Dec 13 02:17:09.544000 audit: BPF prog-id=17 op=LOAD
Dec 13 02:17:09.544000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 02:17:09.544000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 02:17:09.548000 audit: BPF prog-id=18 op=LOAD
Dec 13 02:17:09.549000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 02:17:09.550000 audit: BPF prog-id=19 op=LOAD
Dec 13 02:17:09.551000 audit: BPF prog-id=20 op=LOAD
Dec 13 02:17:09.551000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 02:17:09.551000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 02:17:09.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.560000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 02:17:09.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.763000 audit: BPF prog-id=21 op=LOAD
Dec 13 02:17:09.766000 audit: BPF prog-id=22 op=LOAD
Dec 13 02:17:09.766000 audit: BPF prog-id=23 op=LOAD
Dec 13 02:17:09.766000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 02:17:09.766000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 02:17:09.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.837000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 02:17:09.837000 audit[1404]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe8acf5b00 a2=4000 a3=7ffe8acf5b9c items=0 ppid=1 pid=1404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:17:09.837000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 02:17:09.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:03.627580 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:17:09.536740 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 02:17:09.907954 kernel: fuse: init (API version 7.34)
Dec 13 02:17:09.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.908758 systemd-journald[1404]: Time spent on flushing to /var/log/journal/ec276b4bee41db9e2c02f015a7a84a9a is 73.905ms for 1174 entries.
Dec 13 02:17:09.908758 systemd-journald[1404]: System Journal (/var/log/journal/ec276b4bee41db9e2c02f015a7a84a9a) is 8.0M, max 195.6M, 187.6M free.
Dec 13 02:17:09.989206 systemd-journald[1404]: Received client request to flush runtime journal.
Dec 13 02:17:09.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:09.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:03.628298 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:17:09.553201 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 02:17:03.628323 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:17:09.855436 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:17:03.628357 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 02:17:09.857346 systemd[1]: Finished systemd-network-generator.service.
Dec 13 02:17:03.628367 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 02:17:09.863126 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 02:17:03.628498 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 02:17:09.864892 systemd[1]: Reached target network-pre.target.
Dec 13 02:17:03.628517 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 02:17:09.867765 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 02:17:03.628716 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 02:17:09.868971 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 02:17:03.628758 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 02:17:09.875165 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 02:17:03.628770 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 02:17:09.881442 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 02:17:03.641535 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 02:17:09.882597 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:17:03.641762 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 02:17:09.884499 systemd[1]: Starting systemd-random-seed.service...
Dec 13 02:17:03.641801 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 02:17:09.885889 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:17:03.641828 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 02:17:09.887572 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:17:03.641884 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 02:17:09.892026 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 02:17:03.641907 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:03Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 02:17:09.895700 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 02:17:08.772731 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:08Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:17:09.895896 systemd[1]: Finished modprobe@fuse.service.
Dec 13 02:17:08.773102 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:08Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:17:09.899558 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 02:17:08.773218 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:08Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:17:09.906570 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 02:17:08.773421 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:08Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 02:17:09.923784 systemd[1]: Finished systemd-random-seed.service.
Dec 13 02:17:08.773472 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:08Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 02:17:09.925425 systemd[1]: Reached target first-boot-complete.target.
Dec 13 02:17:08.773530 /usr/lib/systemd/system-generators/torcx-generator[1332]: time="2024-12-13T02:17:08Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 02:17:09.954901 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:17:09.990498 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 02:17:10.025261 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:17:10.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:10.031308 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 02:17:10.054158 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 02:17:10.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:10.057466 systemd[1]: Starting systemd-sysusers.service...
Dec 13 02:17:10.063742 udevadm[1445]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 02:17:10.280033 systemd[1]: Finished systemd-sysusers.service.
Dec 13 02:17:10.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:11.078848 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 02:17:11.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:11.080000 audit: BPF prog-id=24 op=LOAD
Dec 13 02:17:11.080000 audit: BPF prog-id=25 op=LOAD
Dec 13 02:17:11.080000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 02:17:11.081000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 02:17:11.085460 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:17:11.127207 systemd-udevd[1449]: Using default interface naming scheme 'v252'.
Dec 13 02:17:11.214991 systemd[1]: Started systemd-udevd.service.
Dec 13 02:17:11.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:11.216000 audit: BPF prog-id=26 op=LOAD
Dec 13 02:17:11.219259 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:17:11.260000 audit: BPF prog-id=27 op=LOAD
Dec 13 02:17:11.261000 audit: BPF prog-id=28 op=LOAD
Dec 13 02:17:11.261000 audit: BPF prog-id=29 op=LOAD
Dec 13 02:17:11.264928 systemd[1]: Starting systemd-userdbd.service...
Dec 13 02:17:11.316287 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 02:17:11.322731 (udev-worker)[1465]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:17:11.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:11.335298 systemd[1]: Started systemd-userdbd.service. Dec 13 02:17:11.425411 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:17:11.456542 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:17:11.456771 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 02:17:11.465466 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:17:11.451000 audit[1461]: AVC avc: denied { confidentiality } for pid=1461 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:17:11.491080 systemd-networkd[1458]: lo: Link UP Dec 13 02:17:11.491091 systemd-networkd[1458]: lo: Gained carrier Dec 13 02:17:11.492161 systemd-networkd[1458]: Enumeration completed Dec 13 02:17:11.492292 systemd[1]: Started systemd-networkd.service. Dec 13 02:17:11.492302 systemd-networkd[1458]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:17:11.495473 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:17:11.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:17:11.502226 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:17:11.501298 systemd-networkd[1458]: eth0: Link UP Dec 13 02:17:11.501661 systemd-networkd[1458]: eth0: Gained carrier Dec 13 02:17:11.511562 systemd-networkd[1458]: eth0: DHCPv4 address 172.31.19.93/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:17:11.519408 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 02:17:11.451000 audit[1461]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=555fff7fa380 a1=337fc a2=7f3c0b1b8bc5 a3=5 items=110 ppid=1449 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:17:11.451000 audit: CWD cwd="/" Dec 13 02:17:11.451000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=1 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=2 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=3 name=(null) inode=14774 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=4 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=5 name=(null) 
inode=14775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=6 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=7 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=8 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=9 name=(null) inode=14777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=10 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=11 name=(null) inode=14778 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=12 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=13 name=(null) inode=14779 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=14 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=15 name=(null) inode=14780 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=16 name=(null) inode=14776 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=17 name=(null) inode=14781 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=18 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=19 name=(null) inode=14782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=20 name=(null) inode=14782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=21 name=(null) inode=14783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=22 name=(null) inode=14782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=23 name=(null) inode=14784 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=24 name=(null) inode=14782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=25 name=(null) inode=14785 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=26 name=(null) inode=14782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=27 name=(null) inode=14786 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=28 name=(null) inode=14782 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=29 name=(null) inode=14787 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=30 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=31 name=(null) inode=14788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=32 name=(null) inode=14788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=33 name=(null) inode=14789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=34 name=(null) inode=14788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=35 name=(null) inode=14790 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=36 name=(null) inode=14788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=37 name=(null) inode=14791 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=38 name=(null) inode=14788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=39 name=(null) inode=14792 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=40 name=(null) inode=14788 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=41 name=(null) inode=14793 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=42 name=(null) inode=14773 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=43 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=44 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=45 name=(null) inode=14795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=46 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=47 name=(null) inode=14796 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=48 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=49 name=(null) inode=14797 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=50 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:17:11.451000 audit: PATH item=51 name=(null) inode=14798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=52 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=53 name=(null) inode=14799 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=55 name=(null) inode=14800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=56 name=(null) inode=14800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=57 name=(null) inode=14801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=58 name=(null) inode=14800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=59 name=(null) inode=14802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=60 
name=(null) inode=14800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=61 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=62 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=63 name=(null) inode=14804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=64 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=65 name=(null) inode=14805 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=66 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=67 name=(null) inode=14806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=68 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=69 name=(null) inode=14807 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=70 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=71 name=(null) inode=14808 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=72 name=(null) inode=14800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=73 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=74 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=75 name=(null) inode=14810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=76 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=77 name=(null) inode=14811 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=78 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=79 name=(null) inode=14812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=80 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=81 name=(null) inode=14813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=82 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=83 name=(null) inode=14814 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=84 name=(null) inode=14800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=85 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=86 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=87 name=(null) inode=14816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=88 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=89 name=(null) inode=14817 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=90 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=91 name=(null) inode=14170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=92 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=93 name=(null) inode=14171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=94 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=95 name=(null) inode=14172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.547146 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:17:11.451000 audit: PATH item=96 name=(null) inode=14800 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=97 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=98 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=99 name=(null) inode=14174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=100 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=101 name=(null) inode=14175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=102 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=103 name=(null) inode=14176 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=104 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=105 name=(null) inode=14177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=106 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=107 name=(null) inode=14178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PATH item=109 name=(null) inode=14179 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:17:11.451000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:17:11.553420 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:17:11.600425 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1450) Dec 13 02:17:11.712100 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:17:11.775054 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:17:11.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:11.777570 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:17:11.818801 lvm[1563]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:17:11.851939 systemd[1]: Finished lvm2-activation-early.service. 
Dec 13 02:17:11.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:11.853559 systemd[1]: Reached target cryptsetup.target. Dec 13 02:17:11.856657 systemd[1]: Starting lvm2-activation.service... Dec 13 02:17:11.871442 lvm[1564]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:17:11.905749 systemd[1]: Finished lvm2-activation.service. Dec 13 02:17:11.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:11.910739 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:17:11.914257 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:17:11.914302 systemd[1]: Reached target local-fs.target. Dec 13 02:17:11.919770 systemd[1]: Reached target machines.target. Dec 13 02:17:11.924329 systemd[1]: Starting ldconfig.service... Dec 13 02:17:11.927186 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:17:11.927276 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:17:11.929758 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:17:11.948789 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:17:11.958524 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:17:11.962694 systemd[1]: Starting systemd-sysext.service... 
Dec 13 02:17:11.989744 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1566 (bootctl) Dec 13 02:17:11.994527 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:17:12.010363 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:17:12.020313 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:17:12.020568 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:17:12.042410 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 02:17:12.054169 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:17:12.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:12.221412 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:17:12.243445 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 02:17:12.261809 (sd-sysext)[1580]: Using extensions 'kubernetes'. Dec 13 02:17:12.262801 (sd-sysext)[1580]: Merged extensions into '/usr'. Dec 13 02:17:12.275342 systemd-fsck[1577]: fsck.fat 4.2 (2021-01-31) Dec 13 02:17:12.275342 systemd-fsck[1577]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 02:17:12.279077 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:17:12.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:17:12.295698 systemd[1]: Mounting boot.mount... Dec 13 02:17:12.313995 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:17:12.317161 systemd[1]: Mounting usr-share-oem.mount... 
Dec 13 02:17:12.319807 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:17:12.323894 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:17:12.327668 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:17:12.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.331914 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:17:12.332935 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:17:12.333141 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:17:12.333345 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:12.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.340050 systemd[1]: Mounted boot.mount.
Dec 13 02:17:12.343051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:17:12.343242 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:17:12.345288 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:17:12.345477 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:17:12.347133 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:17:12.349046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:17:12.349216 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:17:12.350751 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:17:12.355685 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 02:17:12.361110 systemd[1]: Finished systemd-sysext.service.
Dec 13 02:17:12.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.366072 systemd[1]: Starting ensure-sysext.service...
Dec 13 02:17:12.377402 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 02:17:12.382471 systemd[1]: Reloading.
Dec 13 02:17:12.457303 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 02:17:12.481316 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 02:17:12.501249 /usr/lib/systemd/system-generators/torcx-generator[1621]: time="2024-12-13T02:17:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:17:12.501760 /usr/lib/systemd/system-generators/torcx-generator[1621]: time="2024-12-13T02:17:12Z" level=info msg="torcx already run"
Dec 13 02:17:12.530737 systemd-tmpfiles[1599]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 02:17:12.708758 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:17:12.709148 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:17:12.757957 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:17:12.895514 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 02:17:12.897000 audit: BPF prog-id=30 op=LOAD
Dec 13 02:17:12.897000 audit: BPF prog-id=27 op=UNLOAD
Dec 13 02:17:12.897000 audit: BPF prog-id=31 op=LOAD
Dec 13 02:17:12.897000 audit: BPF prog-id=32 op=LOAD
Dec 13 02:17:12.897000 audit: BPF prog-id=28 op=UNLOAD
Dec 13 02:17:12.897000 audit: BPF prog-id=29 op=UNLOAD
Dec 13 02:17:12.898000 audit: BPF prog-id=33 op=LOAD
Dec 13 02:17:12.898000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 02:17:12.900000 audit: BPF prog-id=34 op=LOAD
Dec 13 02:17:12.900000 audit: BPF prog-id=35 op=LOAD
Dec 13 02:17:12.900000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 02:17:12.900000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 02:17:12.902000 audit: BPF prog-id=36 op=LOAD
Dec 13 02:17:12.902000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 02:17:12.902000 audit: BPF prog-id=37 op=LOAD
Dec 13 02:17:12.902000 audit: BPF prog-id=38 op=LOAD
Dec 13 02:17:12.902000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 02:17:12.902000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 02:17:12.912458 systemd[1]: Finished systemd-boot-update.service.
Dec 13 02:17:12.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.916412 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 02:17:12.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.920059 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 02:17:12.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:12.939178 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:12.941733 systemd[1]: Starting audit-rules.service...
Dec 13 02:17:12.950262 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 02:17:12.953180 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:17:12.956587 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:17:12.964190 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:17:12.968196 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:17:12.969787 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:17:12.970730 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:17:12.975696 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 02:17:12.977000 audit: BPF prog-id=39 op=LOAD
Dec 13 02:17:12.982719 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:17:12.984000 audit: BPF prog-id=40 op=LOAD
Dec 13 02:17:12.988069 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 02:17:12.995876 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 02:17:12.997921 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:13.003563 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 02:17:13.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.005668 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:17:13.005844 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:17:13.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.007606 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:17:13.007783 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:17:13.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.009684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:17:13.009853 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:17:13.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.011883 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:17:13.012027 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:17:13.012139 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:17:13.017703 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:13.018114 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:17:13.021128 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:17:13.025753 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:17:13.028647 systemd[1]: Starting modprobe@loop.service...
Dec 13 02:17:13.030600 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:17:13.030808 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:17:13.031000 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:17:13.031130 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:13.035048 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:17:13.035242 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:17:13.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.037236 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:17:13.037451 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:17:13.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.040000 audit[1687]: SYSTEM_BOOT pid=1687 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.039045 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:17:13.048651 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:13.049510 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 02:17:13.053617 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 02:17:13.060062 systemd[1]: Starting modprobe@drm.service...
Dec 13 02:17:13.062975 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 02:17:13.065947 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 02:17:13.066611 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:17:13.067110 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 02:17:13.067443 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 02:17:13.076498 systemd[1]: Finished ensure-sysext.service.
Dec 13 02:17:13.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.078540 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 02:17:13.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.092827 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 02:17:13.093266 systemd[1]: Finished modprobe@drm.service.
Dec 13 02:17:13.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.094772 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 02:17:13.095008 systemd[1]: Finished modprobe@loop.service.
Dec 13 02:17:13.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.103933 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 02:17:13.104101 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 02:17:13.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.105567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 02:17:13.105738 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 02:17:13.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.106898 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 02:17:13.106950 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 02:17:13.138897 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 02:17:13.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.187698 systemd[1]: Started systemd-timesyncd.service.
Dec 13 02:17:13.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:17:13.188831 systemd[1]: Reached target time-set.target.
Dec 13 02:17:13.193714 augenrules[1702]: No rules
Dec 13 02:17:13.192000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 02:17:13.192000 audit[1702]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffea794d790 a2=420 a3=0 items=0 ppid=1673 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 02:17:13.192000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 02:17:13.194433 systemd[1]: Finished audit-rules.service.
Dec 13 02:17:13.220173 systemd-resolved[1684]: Positive Trust Anchors:
Dec 13 02:17:13.220455 systemd-resolved[1684]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:17:13.220545 systemd-resolved[1684]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:17:13.274611 systemd-resolved[1684]: Defaulting to hostname 'linux'.
Dec 13 02:17:13.276830 systemd[1]: Started systemd-resolved.service.
Dec 13 02:17:13.277983 systemd[1]: Reached target network.target.
Dec 13 02:17:13.278934 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:17:13.394975 systemd-networkd[1458]: eth0: Gained IPv6LL
Dec 13 02:17:13.396282 systemd-timesyncd[1686]: Network configuration changed, trying to establish connection.
Dec 13 02:17:13.398044 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 02:17:13.401188 systemd[1]: Reached target network-online.target.
Dec 13 02:17:13.413012 ldconfig[1565]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 02:17:13.430659 systemd[1]: Finished ldconfig.service.
Dec 13 02:17:13.434027 systemd[1]: Starting systemd-update-done.service...
Dec 13 02:17:13.448677 systemd[1]: Finished systemd-update-done.service.
Dec 13 02:17:13.451156 systemd[1]: Reached target sysinit.target.
Dec 13 02:17:13.452779 systemd[1]: Started motdgen.path.
Dec 13 02:17:13.453822 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 02:17:13.455608 systemd[1]: Started logrotate.timer.
Dec 13 02:17:13.456587 systemd[1]: Started mdadm.timer.
Dec 13 02:17:13.459082 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 02:17:13.461764 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 02:17:13.461818 systemd[1]: Reached target paths.target.
Dec 13 02:17:13.463950 systemd[1]: Reached target timers.target.
Dec 13 02:17:13.466301 systemd[1]: Listening on dbus.socket.
Dec 13 02:17:13.468555 systemd[1]: Starting docker.socket...
Dec 13 02:17:13.475032 systemd[1]: Listening on sshd.socket.
Dec 13 02:17:13.476483 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:17:13.477283 systemd[1]: Listening on docker.socket.
Dec 13 02:17:13.479170 systemd[1]: Reached target sockets.target.
Dec 13 02:17:13.480148 systemd[1]: Reached target basic.target.
Dec 13 02:17:13.481059 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:17:13.481093 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 02:17:13.482732 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 02:17:13.486061 systemd[1]: Starting containerd.service...
Dec 13 02:17:13.488645 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 02:17:13.494618 systemd[1]: Starting dbus.service...
Dec 13 02:17:13.497515 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 02:17:13.502715 systemd[1]: Starting extend-filesystems.service...
Dec 13 02:17:13.505472 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 02:17:13.509144 systemd[1]: Starting kubelet.service...
Dec 13 02:17:13.511825 systemd[1]: Starting motdgen.service...
Dec 13 02:17:13.515037 systemd[1]: Started nvidia.service.
Dec 13 02:17:13.518905 systemd[1]: Starting prepare-helm.service...
Dec 13 02:17:13.531457 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 02:17:13.536710 systemd[1]: Starting sshd-keygen.service...
Dec 13 02:17:13.596370 jq[1715]: false
Dec 13 02:17:13.542924 systemd[1]: Starting systemd-logind.service...
Dec 13 02:17:13.544520 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 02:17:13.544609 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 02:17:13.545612 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 02:17:13.642671 jq[1725]: true
Dec 13 02:17:13.547066 systemd[1]: Starting update-engine.service...
Dec 13 02:17:13.552636 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 02:17:13.603157 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 02:17:13.603401 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 02:17:13.663079 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 02:17:13.663533 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 02:17:13.666967 tar[1727]: linux-amd64/helm
Dec 13 02:17:13.692976 jq[1729]: true
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found loop1
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found nvme0n1
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found nvme0n1p1
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found nvme0n1p2
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found nvme0n1p3
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found usr
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found nvme0n1p4
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found nvme0n1p6
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found nvme0n1p7
Dec 13 02:17:13.865003 extend-filesystems[1716]: Found nvme0n1p9
Dec 13 02:17:13.865003 extend-filesystems[1716]: Checking size of /dev/nvme0n1p9
Dec 13 02:17:13.877841 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 02:17:13.878083 systemd[1]: Finished motdgen.service.
Dec 13 02:17:13.932556 dbus-daemon[1714]: [system] SELinux support is enabled
Dec 13 02:17:13.945693 systemd[1]: Started dbus.service.
Dec 13 02:17:13.952793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 02:17:13.952830 systemd[1]: Reached target system-config.target.
Dec 13 02:17:13.954005 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 02:17:13.954035 systemd[1]: Reached target user-config.target.
Dec 13 02:17:13.976298 extend-filesystems[1716]: Resized partition /dev/nvme0n1p9
Dec 13 02:17:13.987751 dbus-daemon[1714]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1458 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 02:17:13.997956 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 02:17:14.009987 extend-filesystems[1780]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 02:17:14.018293 amazon-ssm-agent[1711]: 2024/12/13 02:17:14 Failed to load instance info from vault. RegistrationKey does not exist.
Dec 13 02:17:14.019055 bash[1779]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:17:14.020494 amazon-ssm-agent[1711]: Initializing new seelog logger
Dec 13 02:17:14.022323 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Dec 13 02:17:14.023655 amazon-ssm-agent[1711]: New Seelog Logger Creation Complete
Dec 13 02:17:14.024479 amazon-ssm-agent[1711]: 2024/12/13 02:17:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 02:17:14.024479 amazon-ssm-agent[1711]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Dec 13 02:17:14.025564 amazon-ssm-agent[1711]: 2024/12/13 02:17:14 processing appconfig overrides
Dec 13 02:17:14.037401 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 02:17:14.061249 update_engine[1724]: I1213 02:17:14.060686 1724 main.cc:92] Flatcar Update Engine starting
Dec 13 02:17:14.082363 systemd[1]: Started update-engine.service.
Dec 13 02:17:14.083675 update_engine[1724]: I1213 02:17:14.082723 1724 update_check_scheduler.cc:74] Next update check in 4m22s
Dec 13 02:17:14.086353 systemd[1]: Started locksmithd.service.
Dec 13 02:17:14.105364 env[1736]: time="2024-12-13T02:17:14.103541760Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 02:17:14.131078 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 02:17:14.190182 extend-filesystems[1780]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 02:17:14.190182 extend-filesystems[1780]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 02:17:14.190182 extend-filesystems[1780]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 02:17:14.195369 extend-filesystems[1716]: Resized filesystem in /dev/nvme0n1p9
Dec 13 02:17:14.191180 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 02:17:14.191507 systemd[1]: Finished extend-filesystems.service.
Dec 13 02:17:14.270312 systemd-logind[1723]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 02:17:14.270351 systemd-logind[1723]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 02:17:14.270374 systemd-logind[1723]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 02:17:14.286581 systemd-logind[1723]: New seat seat0.
Dec 13 02:17:14.294448 systemd[1]: Started systemd-logind.service.
Dec 13 02:17:14.306252 env[1736]: time="2024-12-13T02:17:14.306112524Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 02:17:14.306471 env[1736]: time="2024-12-13T02:17:14.306442646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:17:14.310784 env[1736]: time="2024-12-13T02:17:14.310724965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:17:14.310784 env[1736]: time="2024-12-13T02:17:14.310779867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:17:14.311468 env[1736]: time="2024-12-13T02:17:14.311426449Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:17:14.311468 env[1736]: time="2024-12-13T02:17:14.311463141Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 02:17:14.311674 env[1736]: time="2024-12-13T02:17:14.311483030Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 02:17:14.311674 env[1736]: time="2024-12-13T02:17:14.311496428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 02:17:14.311674 env[1736]: time="2024-12-13T02:17:14.311655526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:17:14.311972 env[1736]: time="2024-12-13T02:17:14.311945150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 02:17:14.312291 env[1736]: time="2024-12-13T02:17:14.312259174Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 02:17:14.312359 env[1736]: time="2024-12-13T02:17:14.312294205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 02:17:14.312439 env[1736]: time="2024-12-13T02:17:14.312365428Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 02:17:14.312439 env[1736]: time="2024-12-13T02:17:14.312396145Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 02:17:14.335313 env[1736]: time="2024-12-13T02:17:14.334840929Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 02:17:14.335550 env[1736]: time="2024-12-13T02:17:14.335327337Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 02:17:14.335550 env[1736]: time="2024-12-13T02:17:14.335449545Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 02:17:14.335735 env[1736]: time="2024-12-13T02:17:14.335640477Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.335735 env[1736]: time="2024-12-13T02:17:14.335669231Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.335919 env[1736]: time="2024-12-13T02:17:14.335736689Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.335919 env[1736]: time="2024-12-13T02:17:14.335758030Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.335919 env[1736]: time="2024-12-13T02:17:14.335818526Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.335919 env[1736]: time="2024-12-13T02:17:14.335840581Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.335919 env[1736]: time="2024-12-13T02:17:14.335860511Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.336399 env[1736]: time="2024-12-13T02:17:14.335922047Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.336399 env[1736]: time="2024-12-13T02:17:14.335941886Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 02:17:14.336588 env[1736]: time="2024-12-13T02:17:14.336420126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 02:17:14.336860 env[1736]: time="2024-12-13T02:17:14.336800322Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 02:17:14.337914 env[1736]: time="2024-12-13T02:17:14.337847702Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 02:17:14.338034 env[1736]: time="2024-12-13T02:17:14.337942526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 02:17:14.338034 env[1736]: time="2024-12-13T02:17:14.337967109Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 02:17:14.338419 env[1736]: time="2024-12-13T02:17:14.338203597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..."
type=io.containerd.grpc.v1 Dec 13 02:17:14.340448 env[1736]: time="2024-12-13T02:17:14.338405757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.340530 env[1736]: time="2024-12-13T02:17:14.340460848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.340530 env[1736]: time="2024-12-13T02:17:14.340503913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.340530 env[1736]: time="2024-12-13T02:17:14.340525228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.340644 env[1736]: time="2024-12-13T02:17:14.340544169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.340644 env[1736]: time="2024-12-13T02:17:14.340579640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.340644 env[1736]: time="2024-12-13T02:17:14.340600505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.340644 env[1736]: time="2024-12-13T02:17:14.340623958Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:17:14.342633 env[1736]: time="2024-12-13T02:17:14.342592349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.342717 env[1736]: time="2024-12-13T02:17:14.342654015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.342717 env[1736]: time="2024-12-13T02:17:14.342676048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 02:17:14.342717 env[1736]: time="2024-12-13T02:17:14.342694068Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:17:14.342951 env[1736]: time="2024-12-13T02:17:14.342717402Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:17:14.342951 env[1736]: time="2024-12-13T02:17:14.342736048Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:17:14.342951 env[1736]: time="2024-12-13T02:17:14.342761863Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:17:14.342951 env[1736]: time="2024-12-13T02:17:14.342810141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:17:14.343284 env[1736]: time="2024-12-13T02:17:14.343214063Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:17:14.345942 env[1736]: time="2024-12-13T02:17:14.343303852Z" level=info msg="Connect containerd service" Dec 13 02:17:14.345942 env[1736]: time="2024-12-13T02:17:14.343358466Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:17:14.369311 env[1736]: time="2024-12-13T02:17:14.369249388Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:17:14.370083 env[1736]: time="2024-12-13T02:17:14.370021023Z" level=info msg="Start subscribing containerd event" Dec 13 02:17:14.370955 env[1736]: time="2024-12-13T02:17:14.370924112Z" level=info msg="Start recovering state" Dec 13 02:17:14.371253 env[1736]: 
time="2024-12-13T02:17:14.371235755Z" level=info msg="Start event monitor" Dec 13 02:17:14.373544 env[1736]: time="2024-12-13T02:17:14.373514980Z" level=info msg="Start snapshots syncer" Dec 13 02:17:14.373674 env[1736]: time="2024-12-13T02:17:14.373659111Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:17:14.373772 env[1736]: time="2024-12-13T02:17:14.373756786Z" level=info msg="Start streaming server" Dec 13 02:17:14.374457 env[1736]: time="2024-12-13T02:17:14.374433875Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:17:14.375993 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 02:17:14.377764 env[1736]: time="2024-12-13T02:17:14.377733635Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:17:14.410040 systemd[1]: Started containerd.service. Dec 13 02:17:14.410656 env[1736]: time="2024-12-13T02:17:14.410607790Z" level=info msg="containerd successfully booted in 0.316881s" Dec 13 02:17:14.419086 dbus-daemon[1714]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:17:14.419266 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:17:14.422100 dbus-daemon[1714]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1781 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:17:14.427459 systemd[1]: Starting polkit.service... Dec 13 02:17:14.487169 polkitd[1820]: Started polkitd version 121 Dec 13 02:17:14.541584 polkitd[1820]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:17:14.547703 polkitd[1820]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:17:14.560610 polkitd[1820]: Finished loading, compiling and executing 2 rules Dec 13 02:17:14.561229 dbus-daemon[1714]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:17:14.561432 systemd[1]: Started polkit.service. 
Dec 13 02:17:14.563407 polkitd[1820]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 02:17:14.652471 systemd-hostnamed[1781]: Hostname set to (transient)
Dec 13 02:17:14.652598 systemd-resolved[1684]: System hostname changed to 'ip-172-31-19-93'.
Dec 13 02:17:14.867469 coreos-metadata[1713]: Dec 13 02:17:14.862 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 02:17:14.875582 coreos-metadata[1713]: Dec 13 02:17:14.875 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Dec 13 02:17:14.877320 coreos-metadata[1713]: Dec 13 02:17:14.877 INFO Fetch successful
Dec 13 02:17:14.877596 coreos-metadata[1713]: Dec 13 02:17:14.877 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 02:17:14.880834 coreos-metadata[1713]: Dec 13 02:17:14.880 INFO Fetch successful
Dec 13 02:17:14.888247 unknown[1713]: wrote ssh authorized keys file for user: core
Dec 13 02:17:14.951639 update-ssh-keys[1893]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 02:17:14.952128 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 02:17:15.068205 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Create new startup processor
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [LongRunningPluginsManager] registered plugins: {}
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Initializing bookkeeping folders
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO removing the completed state files
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Initializing bookkeeping folders for long running plugins
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Initializing healthcheck folders for long running plugins
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Initializing locations for inventory plugin
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Initializing default location for custom inventory
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Initializing default location for file inventory
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Initializing default location for role inventory
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Init the cloudwatchlogs publisher
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:runPowerShellScript
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:updateSsmAgent
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:configureDocker
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:runDockerAction
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:downloadContent
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:softwareInventory
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:refreshAssociation
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:configurePackage
Dec 13 02:17:15.068637 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform independent plugin aws:runDocument
Dec 13 02:17:15.069452 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Successfully loaded platform dependent plugin aws:runShellScript
Dec 13 02:17:15.069452 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Dec 13 02:17:15.069452 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO OS: linux, Arch: amd64
Dec 13 02:17:15.071596 amazon-ssm-agent[1711]: datastore file /var/lib/amazon/ssm/i-08fdc84439f149d31/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Dec 13 02:17:15.111024 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] Starting document processing engine...
Dec 13 02:17:15.206969 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Dec 13 02:17:15.306369 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Dec 13 02:17:15.400967 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] Starting message polling
Dec 13 02:17:15.496533 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] Starting send replies to MDS
Dec 13 02:17:15.599057 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [instanceID=i-08fdc84439f149d31] Starting association polling
Dec 13 02:17:15.694168 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Dec 13 02:17:15.789504 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] [Association] Launching response handler
Dec 13 02:17:15.820734 tar[1727]: linux-amd64/LICENSE
Dec 13 02:17:15.821216 tar[1727]: linux-amd64/README.md
Dec 13 02:17:15.827572 systemd[1]: Finished prepare-helm.service.
Dec 13 02:17:15.885045 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Dec 13 02:17:15.980617 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Dec 13 02:17:16.026464 locksmithd[1793]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 02:17:16.078403 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Dec 13 02:17:16.174449 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [OfflineService] Starting document processing engine...
Dec 13 02:17:16.243821 systemd[1]: Started kubelet.service.
Dec 13 02:17:16.270786 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 02:17:16.367257 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 02:17:16.464064 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [OfflineService] Starting message polling
Dec 13 02:17:16.561101 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [OfflineService] Starting send replies to MDS
Dec 13 02:17:16.658715 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 02:17:16.755915 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 02:17:16.853368 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 02:17:16.953672 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] Starting session document processing engine...
Dec 13 02:17:17.051551 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] [EngineProcessor] Starting
Dec 13 02:17:17.150682 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Dec 13 02:17:17.235473 kubelet[1919]: E1213 02:17:17.235304 1919 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:17:17.238200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:17:17.238374 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:17:17.238689 systemd[1]: kubelet.service: Consumed 1.305s CPU time.
Dec 13 02:17:17.248806 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-08fdc84439f149d31, requestId: f8552261-4aab-4d1f-b51e-d6e1d0296edd
Dec 13 02:17:17.347256 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 02:17:17.445916 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] listening reply.
Dec 13 02:17:17.544898 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 02:17:17.643880 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 02:17:17.743086 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 02:17:17.832988 sshd_keygen[1748]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 02:17:17.850326 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 02:17:17.877318 systemd[1]: Finished sshd-keygen.service.
Dec 13 02:17:17.880778 systemd[1]: Starting issuegen.service...
Dec 13 02:17:17.891193 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 02:17:17.891488 systemd[1]: Finished issuegen.service.
Dec 13 02:17:17.896227 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 02:17:17.911537 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 02:17:17.916040 systemd[1]: Started getty@tty1.service.
Dec 13 02:17:17.921264 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 02:17:17.924057 systemd[1]: Reached target getty.target.
Dec 13 02:17:17.925833 systemd[1]: Reached target multi-user.target.
Dec 13 02:17:17.930252 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 02:17:17.944261 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 02:17:17.944489 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 02:17:17.945932 systemd[1]: Startup finished in 914ms (kernel) + 9.813s (initrd) + 15.198s (userspace) = 25.926s.
Dec 13 02:17:17.952467 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-08fdc84439f149d31?role=subscribe&stream=input
Dec 13 02:17:18.049840 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-08fdc84439f149d31?role=subscribe&stream=input
Dec 13 02:17:18.149843 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 02:17:18.250029 amazon-ssm-agent[1711]: 2024-12-13 02:17:15 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 02:17:21.920475 systemd[1]: Created slice system-sshd.slice.
Dec 13 02:17:21.922275 systemd[1]: Started sshd@0-172.31.19.93:22-139.178.68.195:37178.service.
Dec 13 02:17:22.172562 sshd[1940]: Accepted publickey for core from 139.178.68.195 port 37178 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:17:22.174854 sshd[1940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:22.242764 systemd[1]: Created slice user-500.slice.
Dec 13 02:17:22.244251 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 02:17:22.250530 systemd-logind[1723]: New session 1 of user core.
Dec 13 02:17:22.258709 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 02:17:22.262790 systemd[1]: Starting user@500.service...
Dec 13 02:17:22.267194 (systemd)[1943]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:22.397735 systemd[1943]: Queued start job for default target default.target.
Dec 13 02:17:22.398477 systemd[1943]: Reached target paths.target.
Dec 13 02:17:22.398607 systemd[1943]: Reached target sockets.target.
Dec 13 02:17:22.398631 systemd[1943]: Reached target timers.target.
Dec 13 02:17:22.398648 systemd[1943]: Reached target basic.target.
Dec 13 02:17:22.398929 systemd[1]: Started user@500.service.
Dec 13 02:17:22.400227 systemd[1]: Started session-1.scope.
Dec 13 02:17:22.400958 systemd[1943]: Reached target default.target.
Dec 13 02:17:22.401163 systemd[1943]: Startup finished in 126ms.
Dec 13 02:17:22.553561 systemd[1]: Started sshd@1-172.31.19.93:22-139.178.68.195:37182.service.
Dec 13 02:17:22.709822 sshd[1952]: Accepted publickey for core from 139.178.68.195 port 37182 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:17:22.711440 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:22.715681 systemd-logind[1723]: New session 2 of user core.
Dec 13 02:17:22.716995 systemd[1]: Started session-2.scope.
Dec 13 02:17:22.843554 sshd[1952]: pam_unix(sshd:session): session closed for user core
Dec 13 02:17:22.847294 systemd[1]: sshd@1-172.31.19.93:22-139.178.68.195:37182.service: Deactivated successfully.
Dec 13 02:17:22.848408 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 02:17:22.849200 systemd-logind[1723]: Session 2 logged out. Waiting for processes to exit.
Dec 13 02:17:22.850917 systemd-logind[1723]: Removed session 2.
Dec 13 02:17:22.873174 systemd[1]: Started sshd@2-172.31.19.93:22-139.178.68.195:37186.service.
Dec 13 02:17:23.037095 sshd[1958]: Accepted publickey for core from 139.178.68.195 port 37186 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:17:23.039029 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:23.044446 systemd-logind[1723]: New session 3 of user core.
Dec 13 02:17:23.045360 systemd[1]: Started session-3.scope.
Dec 13 02:17:23.170201 sshd[1958]: pam_unix(sshd:session): session closed for user core
Dec 13 02:17:23.174835 systemd[1]: sshd@2-172.31.19.93:22-139.178.68.195:37186.service: Deactivated successfully.
Dec 13 02:17:23.175756 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 02:17:23.176506 systemd-logind[1723]: Session 3 logged out. Waiting for processes to exit.
Dec 13 02:17:23.177538 systemd-logind[1723]: Removed session 3.
Dec 13 02:17:23.196038 systemd[1]: Started sshd@3-172.31.19.93:22-139.178.68.195:37194.service.
Dec 13 02:17:23.362048 sshd[1964]: Accepted publickey for core from 139.178.68.195 port 37194 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:17:23.368839 sshd[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:23.385647 systemd[1]: Started session-4.scope.
Dec 13 02:17:23.386273 systemd-logind[1723]: New session 4 of user core.
Dec 13 02:17:23.516007 sshd[1964]: pam_unix(sshd:session): session closed for user core
Dec 13 02:17:23.522009 systemd[1]: sshd@3-172.31.19.93:22-139.178.68.195:37194.service: Deactivated successfully.
Dec 13 02:17:23.523284 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 02:17:23.524559 systemd-logind[1723]: Session 4 logged out. Waiting for processes to exit.
Dec 13 02:17:23.525659 systemd-logind[1723]: Removed session 4.
Dec 13 02:17:23.541816 systemd[1]: Started sshd@4-172.31.19.93:22-139.178.68.195:37204.service.
Dec 13 02:17:23.701304 sshd[1970]: Accepted publickey for core from 139.178.68.195 port 37204 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:17:23.703448 sshd[1970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:17:23.709508 systemd[1]: Started session-5.scope.
Dec 13 02:17:23.710424 systemd-logind[1723]: New session 5 of user core.
Dec 13 02:17:23.861229 sudo[1973]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 02:17:23.861568 sudo[1973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 02:17:23.904097 systemd[1]: Starting docker.service...
Dec 13 02:17:23.970350 env[1983]: time="2024-12-13T02:17:23.970290529Z" level=info msg="Starting up"
Dec 13 02:17:23.979151 env[1983]: time="2024-12-13T02:17:23.979053476Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 02:17:23.979151 env[1983]: time="2024-12-13T02:17:23.979136077Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 02:17:23.979366 env[1983]: time="2024-12-13T02:17:23.979163556Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
Dec 13 02:17:23.979366 env[1983]: time="2024-12-13T02:17:23.979179225Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 02:17:23.987930 env[1983]: time="2024-12-13T02:17:23.987883866Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 02:17:23.987930 env[1983]: time="2024-12-13T02:17:23.987910861Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 02:17:23.987930 env[1983]: time="2024-12-13T02:17:23.987933341Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
Dec 13 02:17:23.988168 env[1983]: time="2024-12-13T02:17:23.987944699Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 02:17:24.011891 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3789084188-merged.mount: Deactivated successfully.
Dec 13 02:17:24.285357 env[1983]: time="2024-12-13T02:17:24.285199693Z" level=info msg="Loading containers: start."
Dec 13 02:17:24.531560 kernel: Initializing XFRM netlink socket
Dec 13 02:17:24.589635 env[1983]: time="2024-12-13T02:17:24.589591646Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 02:17:24.590908 (udev-worker)[1993]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:17:24.591263 systemd-timesyncd[1686]: Network configuration changed, trying to establish connection.
Dec 13 02:17:24.747716 systemd-networkd[1458]: docker0: Link UP
Dec 13 02:17:24.755733 systemd-timesyncd[1686]: Contacted time server 162.159.200.123:123 (2.flatcar.pool.ntp.org).
Dec 13 02:17:24.755892 systemd-timesyncd[1686]: Initial clock synchronization to Fri 2024-12-13 02:17:24.803716 UTC.
Dec 13 02:17:24.770542 env[1983]: time="2024-12-13T02:17:24.770495951Z" level=info msg="Loading containers: done."
Dec 13 02:17:24.800070 env[1983]: time="2024-12-13T02:17:24.800015299Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 02:17:24.800458 env[1983]: time="2024-12-13T02:17:24.800322906Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 02:17:24.800542 env[1983]: time="2024-12-13T02:17:24.800507303Z" level=info msg="Daemon has completed initialization"
Dec 13 02:17:24.830453 systemd[1]: Started docker.service.
Dec 13 02:17:24.850054 env[1983]: time="2024-12-13T02:17:24.846241409Z" level=info msg="API listen on /run/docker.sock"
Dec 13 02:17:25.643066 amazon-ssm-agent[1711]: 2024-12-13 02:17:25 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Dec 13 02:17:26.481289 env[1736]: time="2024-12-13T02:17:26.481246752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\""
Dec 13 02:17:27.252866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043991620.mount: Deactivated successfully.
Dec 13 02:17:27.255183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 02:17:27.255430 systemd[1]: Stopped kubelet.service.
Dec 13 02:17:27.255488 systemd[1]: kubelet.service: Consumed 1.305s CPU time.
Dec 13 02:17:27.257457 systemd[1]: Starting kubelet.service...
Dec 13 02:17:27.645505 systemd[1]: Started kubelet.service.
Dec 13 02:17:27.808643 kubelet[2115]: E1213 02:17:27.808524 2115 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:17:27.815130 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:17:27.815307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:17:30.801369 env[1736]: time="2024-12-13T02:17:30.801305806Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:30.804620 env[1736]: time="2024-12-13T02:17:30.804570711Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:30.809430 env[1736]: time="2024-12-13T02:17:30.809354061Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:30.814899 env[1736]: time="2024-12-13T02:17:30.814815747Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:30.820794 env[1736]: time="2024-12-13T02:17:30.820686117Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\""
Dec 13 02:17:30.836337 env[1736]: time="2024-12-13T02:17:30.836289813Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\""
Dec 13 02:17:34.614006 env[1736]: time="2024-12-13T02:17:34.613946436Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:34.618353 env[1736]: time="2024-12-13T02:17:34.618302723Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:34.621569 env[1736]: time="2024-12-13T02:17:34.621524592Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:34.624613 env[1736]: time="2024-12-13T02:17:34.624567011Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:34.625844 env[1736]: time="2024-12-13T02:17:34.625796246Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\""
Dec 13 02:17:34.638715 env[1736]: time="2024-12-13T02:17:34.638665348Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\""
Dec 13 02:17:37.233292 env[1736]: time="2024-12-13T02:17:37.233230168Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:37.267824 env[1736]: time="2024-12-13T02:17:37.267763785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:37.291287 env[1736]: time="2024-12-13T02:17:37.291232716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:37.312943 env[1736]: time="2024-12-13T02:17:37.312878088Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:37.313690 env[1736]: time="2024-12-13T02:17:37.313653619Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\""
Dec 13 02:17:37.332026 env[1736]: time="2024-12-13T02:17:37.331986590Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 02:17:37.838605 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 02:17:37.838874 systemd[1]: Stopped kubelet.service.
Dec 13 02:17:37.840661 systemd[1]: Starting kubelet.service...
Dec 13 02:17:38.437909 systemd[1]: Started kubelet.service.
Dec 13 02:17:38.568696 kubelet[2141]: E1213 02:17:38.568648 2141 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:17:38.572591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:17:38.572931 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:17:39.010934 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875722227.mount: Deactivated successfully.
Dec 13 02:17:39.873629 env[1736]: time="2024-12-13T02:17:39.873578618Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:39.876738 env[1736]: time="2024-12-13T02:17:39.876676299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:39.879002 env[1736]: time="2024-12-13T02:17:39.878947781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:39.881165 env[1736]: time="2024-12-13T02:17:39.881110280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:39.881792 env[1736]: time="2024-12-13T02:17:39.881761882Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 02:17:39.898965 env[1736]: time="2024-12-13T02:17:39.898925073Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 02:17:40.480139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1381906861.mount: Deactivated successfully.
Dec 13 02:17:42.462208 env[1736]: time="2024-12-13T02:17:42.461981051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:42.467787 env[1736]: time="2024-12-13T02:17:42.467737615Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:42.476299 env[1736]: time="2024-12-13T02:17:42.476249646Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:42.482367 env[1736]: time="2024-12-13T02:17:42.482316620Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:42.483326 env[1736]: time="2024-12-13T02:17:42.483282198Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 02:17:42.497331 env[1736]: time="2024-12-13T02:17:42.497289886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Dec 13 02:17:43.102337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount26065474.mount: Deactivated successfully.
Dec 13 02:17:43.115400 env[1736]: time="2024-12-13T02:17:43.115338064Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:43.123861 env[1736]: time="2024-12-13T02:17:43.123807032Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:43.126786 env[1736]: time="2024-12-13T02:17:43.126644743Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:43.129209 env[1736]: time="2024-12-13T02:17:43.129171348Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:43.129730 env[1736]: time="2024-12-13T02:17:43.129696416Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Dec 13 02:17:43.156063 env[1736]: time="2024-12-13T02:17:43.156013098Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Dec 13 02:17:43.724412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425028019.mount: Deactivated successfully.
Dec 13 02:17:44.658542 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 02:17:47.351990 env[1736]: time="2024-12-13T02:17:47.351927730Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:47.355230 env[1736]: time="2024-12-13T02:17:47.355170849Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:47.357761 env[1736]: time="2024-12-13T02:17:47.357706351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:47.360584 env[1736]: time="2024-12-13T02:17:47.360529937Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:47.361432 env[1736]: time="2024-12-13T02:17:47.361377129Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Dec 13 02:17:48.588612 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Dec 13 02:17:48.588939 systemd[1]: Stopped kubelet.service.
Dec 13 02:17:48.594691 systemd[1]: Starting kubelet.service...
Dec 13 02:17:50.272903 systemd[1]: Started kubelet.service.
Dec 13 02:17:50.370937 kubelet[2231]: E1213 02:17:50.370796 2231 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 02:17:50.374047 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 02:17:50.374224 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 02:17:50.553369 systemd[1]: Stopped kubelet.service.
Dec 13 02:17:50.557125 systemd[1]: Starting kubelet.service...
Dec 13 02:17:50.588619 systemd[1]: Reloading.
Dec 13 02:17:50.761983 /usr/lib/systemd/system-generators/torcx-generator[2265]: time="2024-12-13T02:17:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 02:17:50.762023 /usr/lib/systemd/system-generators/torcx-generator[2265]: time="2024-12-13T02:17:50Z" level=info msg="torcx already run"
Dec 13 02:17:50.930183 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 02:17:50.930206 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 02:17:50.970040 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 02:17:51.204276 systemd[1]: Started kubelet.service.
Dec 13 02:17:51.211139 systemd[1]: Stopping kubelet.service...
Dec 13 02:17:51.212469 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 02:17:51.212696 systemd[1]: Stopped kubelet.service.
Dec 13 02:17:51.216261 systemd[1]: Starting kubelet.service...
Dec 13 02:17:51.639824 systemd[1]: Started kubelet.service.
Dec 13 02:17:51.706416 kubelet[2322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:17:51.706746 kubelet[2322]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 02:17:51.706797 kubelet[2322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 02:17:51.708306 kubelet[2322]: I1213 02:17:51.708262 2322 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 02:17:52.019487 kubelet[2322]: I1213 02:17:52.018996 2322 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Dec 13 02:17:52.019487 kubelet[2322]: I1213 02:17:52.019026 2322 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 02:17:52.019487 kubelet[2322]: I1213 02:17:52.019301 2322 server.go:927] "Client rotation is on, will bootstrap in background"
Dec 13 02:17:52.038072 kubelet[2322]: I1213 02:17:52.037742 2322 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 02:17:52.038925 kubelet[2322]: E1213 02:17:52.038901 2322 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.049516 kubelet[2322]: I1213 02:17:52.049487 2322 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 02:17:52.051850 kubelet[2322]: I1213 02:17:52.051797 2322 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 02:17:52.052074 kubelet[2322]: I1213 02:17:52.051846 2322 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 02:17:52.052211 kubelet[2322]: I1213 02:17:52.052093 2322 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 02:17:52.052211 kubelet[2322]: I1213 02:17:52.052108 2322 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 02:17:52.053268 kubelet[2322]: I1213 02:17:52.053244 2322 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:17:52.054410 kubelet[2322]: I1213 02:17:52.054373 2322 kubelet.go:400] "Attempting to sync node with API server"
Dec 13 02:17:52.054498 kubelet[2322]: I1213 02:17:52.054420 2322 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 02:17:52.054498 kubelet[2322]: I1213 02:17:52.054453 2322 kubelet.go:312] "Adding apiserver pod source"
Dec 13 02:17:52.054498 kubelet[2322]: I1213 02:17:52.054475 2322 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 02:17:52.067771 kubelet[2322]: W1213 02:17:52.067613 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-93&limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.067771 kubelet[2322]: E1213 02:17:52.067712 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-93&limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.067980 kubelet[2322]: W1213 02:17:52.067840 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.067980 kubelet[2322]: E1213 02:17:52.067882 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.068080 kubelet[2322]: I1213 02:17:52.067978 2322 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 02:17:52.070192 kubelet[2322]: I1213 02:17:52.070156 2322 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 02:17:52.070343 kubelet[2322]: W1213 02:17:52.070253 2322 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 02:17:52.071203 kubelet[2322]: I1213 02:17:52.071168 2322 server.go:1264] "Started kubelet"
Dec 13 02:17:52.091917 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 02:17:52.092073 kubelet[2322]: I1213 02:17:52.092049 2322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 02:17:52.093007 kubelet[2322]: E1213 02:17:52.092772 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.93:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.93:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-93.18109afdf556b6a3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-93,UID:ip-172-31-19-93,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-93,},FirstTimestamp:2024-12-13 02:17:52.071145123 +0000 UTC m=+0.425291944,LastTimestamp:2024-12-13 02:17:52.071145123 +0000 UTC m=+0.425291944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-93,}"
Dec 13 02:17:52.097887 kubelet[2322]: I1213 02:17:52.097822 2322 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 02:17:52.099076 kubelet[2322]: I1213 02:17:52.099043 2322 server.go:455] "Adding debug handlers to kubelet server"
Dec 13 02:17:52.104277 kubelet[2322]: I1213 02:17:52.100371 2322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 02:17:52.104277 kubelet[2322]: I1213 02:17:52.100651 2322 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 02:17:52.104277 kubelet[2322]: I1213 02:17:52.103867 2322 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 02:17:52.104842 kubelet[2322]: I1213 02:17:52.104823 2322 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Dec 13 02:17:52.105024 kubelet[2322]: I1213 02:17:52.105012 2322 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 02:17:52.106255 kubelet[2322]: W1213 02:17:52.106185 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.106456 kubelet[2322]: E1213 02:17:52.106440 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.106959 kubelet[2322]: I1213 02:17:52.106936 2322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 02:17:52.108177 kubelet[2322]: E1213 02:17:52.108147 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-93?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused" interval="200ms"
Dec 13 02:17:52.109358 kubelet[2322]: E1213 02:17:52.109339 2322 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 02:17:52.109719 kubelet[2322]: I1213 02:17:52.109704 2322 factory.go:221] Registration of the containerd container factory successfully
Dec 13 02:17:52.109875 kubelet[2322]: I1213 02:17:52.109831 2322 factory.go:221] Registration of the systemd container factory successfully
Dec 13 02:17:52.131184 kubelet[2322]: I1213 02:17:52.131130 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 02:17:52.136126 kubelet[2322]: I1213 02:17:52.135599 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 02:17:52.136126 kubelet[2322]: I1213 02:17:52.135643 2322 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 02:17:52.136126 kubelet[2322]: I1213 02:17:52.135687 2322 kubelet.go:2337] "Starting kubelet main sync loop"
Dec 13 02:17:52.136126 kubelet[2322]: E1213 02:17:52.135755 2322 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 02:17:52.147955 kubelet[2322]: W1213 02:17:52.147897 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.148464 kubelet[2322]: E1213 02:17:52.148438 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:52.148912 kubelet[2322]: I1213 02:17:52.148885 2322 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 02:17:52.148912 kubelet[2322]: I1213 02:17:52.148903 2322 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 02:17:52.149032 kubelet[2322]: I1213 02:17:52.148924 2322 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 02:17:52.152543 kubelet[2322]: I1213 02:17:52.152495 2322 policy_none.go:49] "None policy: Start"
Dec 13 02:17:52.153351 kubelet[2322]: I1213 02:17:52.153329 2322 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 02:17:52.153462 kubelet[2322]: I1213 02:17:52.153359 2322 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 02:17:52.160336 systemd[1]: Created slice kubepods.slice.
Dec 13 02:17:52.165504 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 02:17:52.169983 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 02:17:52.180260 kubelet[2322]: I1213 02:17:52.180234 2322 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 02:17:52.180757 kubelet[2322]: I1213 02:17:52.180706 2322 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 02:17:52.181144 kubelet[2322]: I1213 02:17:52.181132 2322 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:17:52.186962 kubelet[2322]: E1213 02:17:52.186935 2322 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-93\" not found"
Dec 13 02:17:52.206041 kubelet[2322]: I1213 02:17:52.206009 2322 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-93"
Dec 13 02:17:52.206577 kubelet[2322]: E1213 02:17:52.206540 2322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.93:6443/api/v1/nodes\": dial tcp 172.31.19.93:6443: connect: connection refused" node="ip-172-31-19-93"
Dec 13 02:17:52.237071 kubelet[2322]: I1213 02:17:52.237000 2322 topology_manager.go:215] "Topology Admit Handler" podUID="06c0a6de2a6653825dcf54a1549e494e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-93"
Dec 13 02:17:52.238774 kubelet[2322]: I1213 02:17:52.238743 2322 topology_manager.go:215] "Topology Admit Handler" podUID="ea7a7b17988a2fccbf769776b8bf7e6e" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-93"
Dec 13 02:17:52.241122 kubelet[2322]: I1213 02:17:52.241077 2322 topology_manager.go:215] "Topology Admit Handler" podUID="fb3deee6dba12a9ad6c24e01c94658aa" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-93"
Dec 13 02:17:52.251820 systemd[1]: Created slice kubepods-burstable-pod06c0a6de2a6653825dcf54a1549e494e.slice.
Dec 13 02:17:52.263604 systemd[1]: Created slice kubepods-burstable-podea7a7b17988a2fccbf769776b8bf7e6e.slice.
Dec 13 02:17:52.267862 systemd[1]: Created slice kubepods-burstable-podfb3deee6dba12a9ad6c24e01c94658aa.slice.
Dec 13 02:17:52.309778 kubelet[2322]: E1213 02:17:52.309647 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-93?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused" interval="400ms"
Dec 13 02:17:52.405984 kubelet[2322]: I1213 02:17:52.405932 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06c0a6de2a6653825dcf54a1549e494e-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-93\" (UID: \"06c0a6de2a6653825dcf54a1549e494e\") " pod="kube-system/kube-apiserver-ip-172-31-19-93"
Dec 13 02:17:52.405984 kubelet[2322]: I1213 02:17:52.405982 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06c0a6de2a6653825dcf54a1549e494e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-93\" (UID: \"06c0a6de2a6653825dcf54a1549e494e\") " pod="kube-system/kube-apiserver-ip-172-31-19-93"
Dec 13 02:17:52.406256 kubelet[2322]: I1213 02:17:52.406011 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb3deee6dba12a9ad6c24e01c94658aa-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-93\" (UID: \"fb3deee6dba12a9ad6c24e01c94658aa\") " pod="kube-system/kube-scheduler-ip-172-31-19-93"
Dec 13 02:17:52.406256 kubelet[2322]: I1213 02:17:52.406035 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06c0a6de2a6653825dcf54a1549e494e-ca-certs\") pod \"kube-apiserver-ip-172-31-19-93\" (UID: \"06c0a6de2a6653825dcf54a1549e494e\") " pod="kube-system/kube-apiserver-ip-172-31-19-93"
Dec 13 02:17:52.406256 kubelet[2322]: I1213 02:17:52.406068 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:17:52.406256 kubelet[2322]: I1213 02:17:52.406090 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:17:52.406256 kubelet[2322]: I1213 02:17:52.406111 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:17:52.406504 kubelet[2322]: I1213 02:17:52.406135 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:17:52.406504 kubelet[2322]: I1213 02:17:52.406158 2322 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:17:52.408791 kubelet[2322]: I1213 02:17:52.408753 2322 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-93"
Dec 13 02:17:52.409657 kubelet[2322]: E1213 02:17:52.409595 2322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.93:6443/api/v1/nodes\": dial tcp 172.31.19.93:6443: connect: connection refused" node="ip-172-31-19-93"
Dec 13 02:17:52.561608 env[1736]: time="2024-12-13T02:17:52.561488319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-93,Uid:06c0a6de2a6653825dcf54a1549e494e,Namespace:kube-system,Attempt:0,}"
Dec 13 02:17:52.567103 env[1736]: time="2024-12-13T02:17:52.567062314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-93,Uid:ea7a7b17988a2fccbf769776b8bf7e6e,Namespace:kube-system,Attempt:0,}"
Dec 13 02:17:52.572869 env[1736]: time="2024-12-13T02:17:52.572603299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-93,Uid:fb3deee6dba12a9ad6c24e01c94658aa,Namespace:kube-system,Attempt:0,}"
Dec 13 02:17:52.710940 kubelet[2322]: E1213 02:17:52.710890 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-93?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused" interval="800ms"
Dec 13 02:17:52.812412 kubelet[2322]: I1213 02:17:52.812036 2322 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-93"
Dec 13 02:17:52.813556 kubelet[2322]: E1213 02:17:52.813474 2322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.93:6443/api/v1/nodes\": dial tcp 172.31.19.93:6443: connect: connection refused" node="ip-172-31-19-93"
Dec 13 02:17:53.083041 kubelet[2322]: W1213 02:17:53.082919 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:53.083041 kubelet[2322]: E1213 02:17:53.082971 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused
Dec 13 02:17:53.118444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602450258.mount: Deactivated successfully.
Dec 13 02:17:53.169045 env[1736]: time="2024-12-13T02:17:53.168991236Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:53.170959 env[1736]: time="2024-12-13T02:17:53.170911919Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:53.175154 env[1736]: time="2024-12-13T02:17:53.175101761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:53.177737 env[1736]: time="2024-12-13T02:17:53.177535971Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:53.179261 env[1736]: time="2024-12-13T02:17:53.179218175Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:17:53.183180
env[1736]: time="2024-12-13T02:17:53.182570207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:53.190134 env[1736]: time="2024-12-13T02:17:53.189042160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:53.192105 kubelet[2322]: W1213 02:17:53.191645 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-93&limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:53.192105 kubelet[2322]: E1213 02:17:53.191729 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-93&limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:53.193572 env[1736]: time="2024-12-13T02:17:53.193503136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:53.202996 env[1736]: time="2024-12-13T02:17:53.202940665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:53.209826 env[1736]: time="2024-12-13T02:17:53.209778572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:53.213096 env[1736]: 
time="2024-12-13T02:17:53.213047152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:53.213850 env[1736]: time="2024-12-13T02:17:53.213817911Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:17:53.245551 env[1736]: time="2024-12-13T02:17:53.245377211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:53.245551 env[1736]: time="2024-12-13T02:17:53.245513963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:53.245978 env[1736]: time="2024-12-13T02:17:53.245530536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:53.246714 env[1736]: time="2024-12-13T02:17:53.245978563Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d10321986bb51756ab79c23e892d93cfcf10631f4fbfe41b4dac4236bce5ffe8 pid=2361 runtime=io.containerd.runc.v2 Dec 13 02:17:53.295196 systemd[1]: Started cri-containerd-d10321986bb51756ab79c23e892d93cfcf10631f4fbfe41b4dac4236bce5ffe8.scope. Dec 13 02:17:53.336422 env[1736]: time="2024-12-13T02:17:53.328694589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:53.336422 env[1736]: time="2024-12-13T02:17:53.328748706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:53.336422 env[1736]: time="2024-12-13T02:17:53.328766427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:53.336422 env[1736]: time="2024-12-13T02:17:53.328971867Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/51382992cef408bb533709ef9007ff8d27576d6a8d7aed9940fb9e4d13e7b8cb pid=2386 runtime=io.containerd.runc.v2 Dec 13 02:17:53.336749 env[1736]: time="2024-12-13T02:17:53.336673537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:17:53.336806 env[1736]: time="2024-12-13T02:17:53.336756608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:17:53.336857 env[1736]: time="2024-12-13T02:17:53.336822631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:17:53.337059 env[1736]: time="2024-12-13T02:17:53.337016748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f50b548d548e09609a94b94edc6dbafb64525e87127da6d3b0ea8d2953fbcf43 pid=2399 runtime=io.containerd.runc.v2 Dec 13 02:17:53.350848 kubelet[2322]: W1213 02:17:53.350722 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:53.350848 kubelet[2322]: E1213 02:17:53.350799 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:53.361492 systemd[1]: Started cri-containerd-51382992cef408bb533709ef9007ff8d27576d6a8d7aed9940fb9e4d13e7b8cb.scope. Dec 13 02:17:53.373278 systemd[1]: Started cri-containerd-f50b548d548e09609a94b94edc6dbafb64525e87127da6d3b0ea8d2953fbcf43.scope. 
Dec 13 02:17:53.429625 env[1736]: time="2024-12-13T02:17:53.429574979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-93,Uid:06c0a6de2a6653825dcf54a1549e494e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d10321986bb51756ab79c23e892d93cfcf10631f4fbfe41b4dac4236bce5ffe8\"" Dec 13 02:17:53.440961 env[1736]: time="2024-12-13T02:17:53.440913792Z" level=info msg="CreateContainer within sandbox \"d10321986bb51756ab79c23e892d93cfcf10631f4fbfe41b4dac4236bce5ffe8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:17:53.472213 env[1736]: time="2024-12-13T02:17:53.472136280Z" level=info msg="CreateContainer within sandbox \"d10321986bb51756ab79c23e892d93cfcf10631f4fbfe41b4dac4236bce5ffe8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2534b4aec981dcddbeaf3bba8b02e0f3dd83844859b9a41efd8b761dbf907d56\"" Dec 13 02:17:53.473946 env[1736]: time="2024-12-13T02:17:53.473897748Z" level=info msg="StartContainer for \"2534b4aec981dcddbeaf3bba8b02e0f3dd83844859b9a41efd8b761dbf907d56\"" Dec 13 02:17:53.475020 env[1736]: time="2024-12-13T02:17:53.474977184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-93,Uid:fb3deee6dba12a9ad6c24e01c94658aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"f50b548d548e09609a94b94edc6dbafb64525e87127da6d3b0ea8d2953fbcf43\"" Dec 13 02:17:53.478133 env[1736]: time="2024-12-13T02:17:53.477977938Z" level=info msg="CreateContainer within sandbox \"f50b548d548e09609a94b94edc6dbafb64525e87127da6d3b0ea8d2953fbcf43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:17:53.490290 env[1736]: time="2024-12-13T02:17:53.490129310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-93,Uid:ea7a7b17988a2fccbf769776b8bf7e6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"51382992cef408bb533709ef9007ff8d27576d6a8d7aed9940fb9e4d13e7b8cb\"" Dec 
13 02:17:53.495406 env[1736]: time="2024-12-13T02:17:53.495338821Z" level=info msg="CreateContainer within sandbox \"51382992cef408bb533709ef9007ff8d27576d6a8d7aed9940fb9e4d13e7b8cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:17:53.504844 env[1736]: time="2024-12-13T02:17:53.504796500Z" level=info msg="CreateContainer within sandbox \"f50b548d548e09609a94b94edc6dbafb64525e87127da6d3b0ea8d2953fbcf43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6\"" Dec 13 02:17:53.505702 env[1736]: time="2024-12-13T02:17:53.505652568Z" level=info msg="StartContainer for \"82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6\"" Dec 13 02:17:53.509725 systemd[1]: Started cri-containerd-2534b4aec981dcddbeaf3bba8b02e0f3dd83844859b9a41efd8b761dbf907d56.scope. Dec 13 02:17:53.511876 kubelet[2322]: E1213 02:17:53.511645 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-93?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused" interval="1.6s" Dec 13 02:17:53.532658 env[1736]: time="2024-12-13T02:17:53.532604600Z" level=info msg="CreateContainer within sandbox \"51382992cef408bb533709ef9007ff8d27576d6a8d7aed9940fb9e4d13e7b8cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907\"" Dec 13 02:17:53.533653 env[1736]: time="2024-12-13T02:17:53.533616202Z" level=info msg="StartContainer for \"3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907\"" Dec 13 02:17:53.558501 systemd[1]: Started cri-containerd-82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6.scope. 
Dec 13 02:17:53.623726 kubelet[2322]: I1213 02:17:53.623683 2322 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-93" Dec 13 02:17:53.624299 kubelet[2322]: E1213 02:17:53.624258 2322 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.93:6443/api/v1/nodes\": dial tcp 172.31.19.93:6443: connect: connection refused" node="ip-172-31-19-93" Dec 13 02:17:53.631460 kubelet[2322]: W1213 02:17:53.631339 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:53.631646 kubelet[2322]: E1213 02:17:53.631474 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.93:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:53.649765 systemd[1]: Started cri-containerd-3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907.scope. 
Dec 13 02:17:53.703730 env[1736]: time="2024-12-13T02:17:53.703677019Z" level=info msg="StartContainer for \"2534b4aec981dcddbeaf3bba8b02e0f3dd83844859b9a41efd8b761dbf907d56\" returns successfully" Dec 13 02:17:53.769846 env[1736]: time="2024-12-13T02:17:53.769726582Z" level=info msg="StartContainer for \"3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907\" returns successfully" Dec 13 02:17:53.782601 env[1736]: time="2024-12-13T02:17:53.782547273Z" level=info msg="StartContainer for \"82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6\" returns successfully" Dec 13 02:17:54.082686 kubelet[2322]: E1213 02:17:54.082574 2322 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:55.048916 kubelet[2322]: W1213 02:17:55.048832 2322 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:55.048916 kubelet[2322]: E1213 02:17:55.048925 2322 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.93:6443: connect: connection refused Dec 13 02:17:55.112876 kubelet[2322]: E1213 02:17:55.112803 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-93?timeout=10s\": dial tcp 172.31.19.93:6443: connect: connection refused" interval="3.2s" Dec 13 02:17:55.226467 
kubelet[2322]: I1213 02:17:55.226441 2322 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-93" Dec 13 02:17:55.668092 amazon-ssm-agent[1711]: 2024-12-13 02:17:55 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 02:17:57.448587 kubelet[2322]: I1213 02:17:57.448548 2322 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-93" Dec 13 02:17:58.069192 kubelet[2322]: I1213 02:17:58.069149 2322 apiserver.go:52] "Watching apiserver" Dec 13 02:17:58.107147 kubelet[2322]: I1213 02:17:58.105740 2322 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 02:17:59.025485 update_engine[1724]: I1213 02:17:59.025442 1724 update_attempter.cc:509] Updating boot flags... Dec 13 02:17:59.606520 systemd[1]: Reloading. Dec 13 02:17:59.743451 /usr/lib/systemd/system-generators/torcx-generator[2795]: time="2024-12-13T02:17:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:17:59.743600 /usr/lib/systemd/system-generators/torcx-generator[2795]: time="2024-12-13T02:17:59Z" level=info msg="torcx already run" Dec 13 02:17:59.977281 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:17:59.977305 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:18:00.037025 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 02:18:00.411151 systemd[1]: Stopping kubelet.service... Dec 13 02:18:00.431022 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:18:00.431361 systemd[1]: Stopped kubelet.service. Dec 13 02:18:00.434880 systemd[1]: Starting kubelet.service... Dec 13 02:18:02.879648 systemd[1]: Started kubelet.service. Dec 13 02:18:03.137857 kubelet[2852]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:18:03.137857 kubelet[2852]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:18:03.137857 kubelet[2852]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 02:18:03.138317 kubelet[2852]: I1213 02:18:03.138131 2852 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:18:03.143899 sudo[2864]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:18:03.144715 sudo[2864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:18:03.148067 kubelet[2852]: I1213 02:18:03.148027 2852 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 02:18:03.148067 kubelet[2852]: I1213 02:18:03.148066 2852 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:18:03.148465 kubelet[2852]: I1213 02:18:03.148442 2852 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 02:18:03.150323 kubelet[2852]: I1213 02:18:03.150281 2852 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:18:03.154511 kubelet[2852]: I1213 02:18:03.154377 2852 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:18:03.166191 kubelet[2852]: I1213 02:18:03.166154 2852 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:18:03.167068 kubelet[2852]: I1213 02:18:03.167022 2852 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:18:03.167329 kubelet[2852]: I1213 02:18:03.167071 2852 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-93","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:18:03.167531 kubelet[2852]: I1213 02:18:03.167350 2852 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 
02:18:03.167531 kubelet[2852]: I1213 02:18:03.167366 2852 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:18:03.167531 kubelet[2852]: I1213 02:18:03.167434 2852 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:18:03.169497 kubelet[2852]: I1213 02:18:03.169462 2852 kubelet.go:400] "Attempting to sync node with API server" Dec 13 02:18:03.173768 kubelet[2852]: I1213 02:18:03.173737 2852 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:18:03.174017 kubelet[2852]: I1213 02:18:03.174006 2852 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:18:03.174110 kubelet[2852]: I1213 02:18:03.174102 2852 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:18:03.175590 kubelet[2852]: I1213 02:18:03.175564 2852 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:18:03.176122 kubelet[2852]: I1213 02:18:03.176088 2852 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:18:03.176800 kubelet[2852]: I1213 02:18:03.176778 2852 server.go:1264] "Started kubelet" Dec 13 02:18:03.221574 kubelet[2852]: I1213 02:18:03.221539 2852 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:18:03.228182 kubelet[2852]: I1213 02:18:03.228128 2852 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:18:03.231720 kubelet[2852]: I1213 02:18:03.231358 2852 server.go:455] "Adding debug handlers to kubelet server" Dec 13 02:18:03.241602 kubelet[2852]: I1213 02:18:03.241556 2852 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:18:03.247820 kubelet[2852]: I1213 02:18:03.247669 2852 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 
Dec 13 02:18:03.248252 kubelet[2852]: I1213 02:18:03.248162 2852 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:18:03.254793 kubelet[2852]: I1213 02:18:03.254765 2852 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:18:03.259318 kubelet[2852]: I1213 02:18:03.259088 2852 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 02:18:03.260401 kubelet[2852]: I1213 02:18:03.260368 2852 reconciler.go:26] "Reconciler: start to sync state" Dec 13 02:18:03.269849 kubelet[2852]: I1213 02:18:03.269807 2852 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:18:03.269849 kubelet[2852]: I1213 02:18:03.269840 2852 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:18:03.304143 kubelet[2852]: I1213 02:18:03.304044 2852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:18:03.305874 kubelet[2852]: I1213 02:18:03.305711 2852 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:18:03.305874 kubelet[2852]: I1213 02:18:03.305766 2852 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:18:03.305874 kubelet[2852]: I1213 02:18:03.305794 2852 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 02:18:03.305874 kubelet[2852]: E1213 02:18:03.305845 2852 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:18:03.419980 kubelet[2852]: E1213 02:18:03.417446 2852 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:18:03.471959 kubelet[2852]: E1213 02:18:03.469991 2852 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Dec 13 02:18:03.471959 kubelet[2852]: E1213 02:18:03.470441 2852 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:18:03.510318 kubelet[2852]: I1213 02:18:03.510286 2852 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-93" Dec 13 02:18:03.586760 kubelet[2852]: I1213 02:18:03.586582 2852 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-93" Dec 13 02:18:03.586760 kubelet[2852]: I1213 02:18:03.586730 2852 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-93" Dec 13 02:18:03.617762 kubelet[2852]: E1213 02:18:03.617731 2852 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:18:03.672525 kubelet[2852]: I1213 02:18:03.672145 2852 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:18:03.672525 kubelet[2852]: I1213 02:18:03.672171 2852 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:18:03.672525 kubelet[2852]: I1213 02:18:03.672196 2852 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:18:03.673472 kubelet[2852]: I1213 02:18:03.673448 2852 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:18:03.673632 kubelet[2852]: I1213 02:18:03.673598 2852 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:18:03.673721 kubelet[2852]: I1213 02:18:03.673714 2852 policy_none.go:49] "None policy: Start" Dec 13 02:18:03.676783 kubelet[2852]: I1213 02:18:03.675298 2852 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:18:03.676783 kubelet[2852]: I1213 02:18:03.675329 2852 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:18:03.676783 kubelet[2852]: I1213 02:18:03.675824 2852 state_mem.go:75] "Updated machine memory state" Dec 13 02:18:03.710911 kubelet[2852]: I1213 02:18:03.708208 2852 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:18:03.710911 
kubelet[2852]: I1213 02:18:03.708492 2852 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 02:18:03.710911 kubelet[2852]: I1213 02:18:03.708949 2852 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 02:18:04.019060 kubelet[2852]: I1213 02:18:04.018901 2852 topology_manager.go:215] "Topology Admit Handler" podUID="06c0a6de2a6653825dcf54a1549e494e" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-93"
Dec 13 02:18:04.019292 kubelet[2852]: I1213 02:18:04.019066 2852 topology_manager.go:215] "Topology Admit Handler" podUID="ea7a7b17988a2fccbf769776b8bf7e6e" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-93"
Dec 13 02:18:04.019292 kubelet[2852]: I1213 02:18:04.019144 2852 topology_manager.go:215] "Topology Admit Handler" podUID="fb3deee6dba12a9ad6c24e01c94658aa" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-93"
Dec 13 02:18:04.062802 kubelet[2852]: I1213 02:18:04.062766 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb3deee6dba12a9ad6c24e01c94658aa-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-93\" (UID: \"fb3deee6dba12a9ad6c24e01c94658aa\") " pod="kube-system/kube-scheduler-ip-172-31-19-93"
Dec 13 02:18:04.063059 kubelet[2852]: I1213 02:18:04.063026 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06c0a6de2a6653825dcf54a1549e494e-ca-certs\") pod \"kube-apiserver-ip-172-31-19-93\" (UID: \"06c0a6de2a6653825dcf54a1549e494e\") " pod="kube-system/kube-apiserver-ip-172-31-19-93"
Dec 13 02:18:04.063212 kubelet[2852]: I1213 02:18:04.063084 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:18:04.063212 kubelet[2852]: I1213 02:18:04.063181 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:18:04.063320 kubelet[2852]: I1213 02:18:04.063207 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:18:04.063320 kubelet[2852]: I1213 02:18:04.063250 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:18:04.063320 kubelet[2852]: I1213 02:18:04.063278 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea7a7b17988a2fccbf769776b8bf7e6e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-93\" (UID: \"ea7a7b17988a2fccbf769776b8bf7e6e\") " pod="kube-system/kube-controller-manager-ip-172-31-19-93"
Dec 13 02:18:04.063486 kubelet[2852]: I1213 02:18:04.063320 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06c0a6de2a6653825dcf54a1549e494e-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-93\" (UID: \"06c0a6de2a6653825dcf54a1549e494e\") " pod="kube-system/kube-apiserver-ip-172-31-19-93"
Dec 13 02:18:04.063486 kubelet[2852]: I1213 02:18:04.063350 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06c0a6de2a6653825dcf54a1549e494e-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-93\" (UID: \"06c0a6de2a6653825dcf54a1549e494e\") " pod="kube-system/kube-apiserver-ip-172-31-19-93"
Dec 13 02:18:04.190937 kubelet[2852]: I1213 02:18:04.190852 2852 apiserver.go:52] "Watching apiserver"
Dec 13 02:18:04.260898 kubelet[2852]: I1213 02:18:04.260865 2852 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Dec 13 02:18:04.512094 kubelet[2852]: I1213 02:18:04.512009 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-93" podStartSLOduration=0.511985146 podStartE2EDuration="511.985146ms" podCreationTimestamp="2024-12-13 02:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:18:04.444056894 +0000 UTC m=+1.502841211" watchObservedRunningTime="2024-12-13 02:18:04.511985146 +0000 UTC m=+1.570769461"
Dec 13 02:18:04.610196 kubelet[2852]: I1213 02:18:04.610117 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-93" podStartSLOduration=0.61009555 podStartE2EDuration="610.09555ms" podCreationTimestamp="2024-12-13 02:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:18:04.514618745 +0000 UTC m=+1.573403063" watchObservedRunningTime="2024-12-13 02:18:04.61009555 +0000 UTC m=+1.668879861"
Dec 13 02:18:04.693276 kubelet[2852]: I1213 02:18:04.693209 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-93" podStartSLOduration=0.693173618 podStartE2EDuration="693.173618ms" podCreationTimestamp="2024-12-13 02:18:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:18:04.613365243 +0000 UTC m=+1.672149562" watchObservedRunningTime="2024-12-13 02:18:04.693173618 +0000 UTC m=+1.751957937"
Dec 13 02:18:04.795006 sudo[2864]: pam_unix(sudo:session): session closed for user root
Dec 13 02:18:07.664587 sudo[1973]: pam_unix(sudo:session): session closed for user root
Dec 13 02:18:07.687599 sshd[1970]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:07.694741 systemd[1]: sshd@4-172.31.19.93:22-139.178.68.195:37204.service: Deactivated successfully.
Dec 13 02:18:07.695962 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 02:18:07.696161 systemd[1]: session-5.scope: Consumed 4.863s CPU time.
Dec 13 02:18:07.696848 systemd-logind[1723]: Session 5 logged out. Waiting for processes to exit.
Dec 13 02:18:07.697964 systemd-logind[1723]: Removed session 5.
Dec 13 02:18:14.787553 kubelet[2852]: I1213 02:18:14.787523 2852 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 02:18:14.788806 env[1736]: time="2024-12-13T02:18:14.788646731Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 02:18:14.789355 kubelet[2852]: I1213 02:18:14.789338 2852 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 02:18:15.667762 kubelet[2852]: I1213 02:18:15.667723 2852 topology_manager.go:215] "Topology Admit Handler" podUID="306d3245-e0b7-4b18-8891-1917f47e9c5e" podNamespace="kube-system" podName="kube-proxy-b5wkc"
Dec 13 02:18:15.681186 systemd[1]: Created slice kubepods-besteffort-pod306d3245_e0b7_4b18_8891_1917f47e9c5e.slice.
Dec 13 02:18:15.698968 kubelet[2852]: I1213 02:18:15.698933 2852 topology_manager.go:215] "Topology Admit Handler" podUID="1439dffd-8e60-4462-91a6-f3a229a5140f" podNamespace="kube-system" podName="cilium-mg7zt"
Dec 13 02:18:15.707921 systemd[1]: Created slice kubepods-burstable-pod1439dffd_8e60_4462_91a6_f3a229a5140f.slice.
Dec 13 02:18:15.754965 kubelet[2852]: I1213 02:18:15.754927 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/306d3245-e0b7-4b18-8891-1917f47e9c5e-kube-proxy\") pod \"kube-proxy-b5wkc\" (UID: \"306d3245-e0b7-4b18-8891-1917f47e9c5e\") " pod="kube-system/kube-proxy-b5wkc"
Dec 13 02:18:15.755153 kubelet[2852]: I1213 02:18:15.754973 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/306d3245-e0b7-4b18-8891-1917f47e9c5e-xtables-lock\") pod \"kube-proxy-b5wkc\" (UID: \"306d3245-e0b7-4b18-8891-1917f47e9c5e\") " pod="kube-system/kube-proxy-b5wkc"
Dec 13 02:18:15.755153 kubelet[2852]: I1213 02:18:15.755005 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/306d3245-e0b7-4b18-8891-1917f47e9c5e-lib-modules\") pod \"kube-proxy-b5wkc\" (UID: \"306d3245-e0b7-4b18-8891-1917f47e9c5e\") " pod="kube-system/kube-proxy-b5wkc"
Dec 13 02:18:15.755153 kubelet[2852]: I1213 02:18:15.755026 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh6rl\" (UniqueName: \"kubernetes.io/projected/306d3245-e0b7-4b18-8891-1917f47e9c5e-kube-api-access-xh6rl\") pod \"kube-proxy-b5wkc\" (UID: \"306d3245-e0b7-4b18-8891-1917f47e9c5e\") " pod="kube-system/kube-proxy-b5wkc"
Dec 13 02:18:15.856030 kubelet[2852]: I1213 02:18:15.855992 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cni-path\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.856580 kubelet[2852]: I1213 02:18:15.856552 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1439dffd-8e60-4462-91a6-f3a229a5140f-hubble-tls\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.856702 kubelet[2852]: I1213 02:18:15.856687 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-cgroup\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.856792 kubelet[2852]: I1213 02:18:15.856779 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-etc-cni-netd\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.856903 kubelet[2852]: I1213 02:18:15.856888 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-run\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857086 kubelet[2852]: I1213 02:18:15.857065 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-hostproc\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857206 kubelet[2852]: I1213 02:18:15.857187 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-host-proc-sys-kernel\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857306 kubelet[2852]: I1213 02:18:15.857292 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1439dffd-8e60-4462-91a6-f3a229a5140f-clustermesh-secrets\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857409 kubelet[2852]: I1213 02:18:15.857396 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-config-path\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857515 kubelet[2852]: I1213 02:18:15.857499 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-bpf-maps\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857617 kubelet[2852]: I1213 02:18:15.857593 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-xtables-lock\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857722 kubelet[2852]: I1213 02:18:15.857708 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-host-proc-sys-net\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857821 kubelet[2852]: I1213 02:18:15.857806 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5r8c\" (UniqueName: \"kubernetes.io/projected/1439dffd-8e60-4462-91a6-f3a229a5140f-kube-api-access-w5r8c\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.857913 kubelet[2852]: I1213 02:18:15.857895 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-lib-modules\") pod \"cilium-mg7zt\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " pod="kube-system/cilium-mg7zt"
Dec 13 02:18:15.913020 kubelet[2852]: I1213 02:18:15.912976 2852 topology_manager.go:215] "Topology Admit Handler" podUID="19797b68-934c-429d-a7f8-c7f19ea2c0d1" podNamespace="kube-system" podName="cilium-operator-599987898-vwzhr"
Dec 13 02:18:15.920490 systemd[1]: Created slice kubepods-besteffort-pod19797b68_934c_429d_a7f8_c7f19ea2c0d1.slice.
Dec 13 02:18:16.025613 env[1736]: time="2024-12-13T02:18:16.025549982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5wkc,Uid:306d3245-e0b7-4b18-8891-1917f47e9c5e,Namespace:kube-system,Attempt:0,}"
Dec 13 02:18:16.089425 kubelet[2852]: I1213 02:18:16.084594 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19797b68-934c-429d-a7f8-c7f19ea2c0d1-cilium-config-path\") pod \"cilium-operator-599987898-vwzhr\" (UID: \"19797b68-934c-429d-a7f8-c7f19ea2c0d1\") " pod="kube-system/cilium-operator-599987898-vwzhr"
Dec 13 02:18:16.089425 kubelet[2852]: I1213 02:18:16.084647 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s64p\" (UniqueName: \"kubernetes.io/projected/19797b68-934c-429d-a7f8-c7f19ea2c0d1-kube-api-access-9s64p\") pod \"cilium-operator-599987898-vwzhr\" (UID: \"19797b68-934c-429d-a7f8-c7f19ea2c0d1\") " pod="kube-system/cilium-operator-599987898-vwzhr"
Dec 13 02:18:16.089634 env[1736]: time="2024-12-13T02:18:16.088293047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:18:16.089634 env[1736]: time="2024-12-13T02:18:16.088345596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:18:16.089634 env[1736]: time="2024-12-13T02:18:16.088360202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:18:16.089634 env[1736]: time="2024-12-13T02:18:16.088643945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0fb814a8cded3ee037096cb375ae387d6764baa8687cba1a2ce7aa1e995f984a pid=2934 runtime=io.containerd.runc.v2
Dec 13 02:18:16.107077 systemd[1]: Started cri-containerd-0fb814a8cded3ee037096cb375ae387d6764baa8687cba1a2ce7aa1e995f984a.scope.
Dec 13 02:18:16.163310 env[1736]: time="2024-12-13T02:18:16.163245099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b5wkc,Uid:306d3245-e0b7-4b18-8891-1917f47e9c5e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fb814a8cded3ee037096cb375ae387d6764baa8687cba1a2ce7aa1e995f984a\""
Dec 13 02:18:16.173272 env[1736]: time="2024-12-13T02:18:16.173096380Z" level=info msg="CreateContainer within sandbox \"0fb814a8cded3ee037096cb375ae387d6764baa8687cba1a2ce7aa1e995f984a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 02:18:16.206127 env[1736]: time="2024-12-13T02:18:16.206067052Z" level=info msg="CreateContainer within sandbox \"0fb814a8cded3ee037096cb375ae387d6764baa8687cba1a2ce7aa1e995f984a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ed343e40194d24fd917c7f8bd736877d926d492e0d89a2bf7fb7c4636e87599a\""
Dec 13 02:18:16.207568 env[1736]: time="2024-12-13T02:18:16.207514419Z" level=info msg="StartContainer for \"ed343e40194d24fd917c7f8bd736877d926d492e0d89a2bf7fb7c4636e87599a\""
Dec 13 02:18:16.225599 env[1736]: time="2024-12-13T02:18:16.225549008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vwzhr,Uid:19797b68-934c-429d-a7f8-c7f19ea2c0d1,Namespace:kube-system,Attempt:0,}"
Dec 13 02:18:16.239887 systemd[1]: Started cri-containerd-ed343e40194d24fd917c7f8bd736877d926d492e0d89a2bf7fb7c4636e87599a.scope.
Dec 13 02:18:16.256620 env[1736]: time="2024-12-13T02:18:16.256528714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:18:16.256797 env[1736]: time="2024-12-13T02:18:16.256645455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:18:16.256797 env[1736]: time="2024-12-13T02:18:16.256676502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:18:16.257298 env[1736]: time="2024-12-13T02:18:16.257239807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba pid=2990 runtime=io.containerd.runc.v2
Dec 13 02:18:16.279509 systemd[1]: Started cri-containerd-e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba.scope.
Dec 13 02:18:16.313163 env[1736]: time="2024-12-13T02:18:16.312712621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mg7zt,Uid:1439dffd-8e60-4462-91a6-f3a229a5140f,Namespace:kube-system,Attempt:0,}"
Dec 13 02:18:16.348773 env[1736]: time="2024-12-13T02:18:16.348560810Z" level=info msg="StartContainer for \"ed343e40194d24fd917c7f8bd736877d926d492e0d89a2bf7fb7c4636e87599a\" returns successfully"
Dec 13 02:18:16.376482 env[1736]: time="2024-12-13T02:18:16.376339204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:18:16.376670 env[1736]: time="2024-12-13T02:18:16.376514579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:18:16.376670 env[1736]: time="2024-12-13T02:18:16.376550220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:18:16.377058 env[1736]: time="2024-12-13T02:18:16.376994223Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044 pid=3039 runtime=io.containerd.runc.v2
Dec 13 02:18:16.397140 systemd[1]: Started cri-containerd-0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044.scope.
Dec 13 02:18:16.424654 env[1736]: time="2024-12-13T02:18:16.424530590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-vwzhr,Uid:19797b68-934c-429d-a7f8-c7f19ea2c0d1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\""
Dec 13 02:18:16.434890 env[1736]: time="2024-12-13T02:18:16.434761028Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 02:18:16.457437 env[1736]: time="2024-12-13T02:18:16.457369243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mg7zt,Uid:1439dffd-8e60-4462-91a6-f3a229a5140f,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\""
Dec 13 02:18:20.287379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1644641351.mount: Deactivated successfully.
Dec 13 02:18:21.508311 env[1736]: time="2024-12-13T02:18:21.508257638Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:21.511556 env[1736]: time="2024-12-13T02:18:21.511510700Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:21.513922 env[1736]: time="2024-12-13T02:18:21.513865823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:21.514750 env[1736]: time="2024-12-13T02:18:21.514646157Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 02:18:21.516720 env[1736]: time="2024-12-13T02:18:21.516682192Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 02:18:21.521356 env[1736]: time="2024-12-13T02:18:21.520440048Z" level=info msg="CreateContainer within sandbox \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 02:18:21.551691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290513227.mount: Deactivated successfully.
Dec 13 02:18:21.557091 env[1736]: time="2024-12-13T02:18:21.557002124Z" level=info msg="CreateContainer within sandbox \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\""
Dec 13 02:18:21.560011 env[1736]: time="2024-12-13T02:18:21.559970306Z" level=info msg="StartContainer for \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\""
Dec 13 02:18:21.601607 systemd[1]: Started cri-containerd-f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146.scope.
Dec 13 02:18:21.649539 env[1736]: time="2024-12-13T02:18:21.649484212Z" level=info msg="StartContainer for \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\" returns successfully"
Dec 13 02:18:21.687575 kubelet[2852]: I1213 02:18:21.687495 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b5wkc" podStartSLOduration=6.687469466 podStartE2EDuration="6.687469466s" podCreationTimestamp="2024-12-13 02:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:18:16.67051112 +0000 UTC m=+13.729295438" watchObservedRunningTime="2024-12-13 02:18:21.687469466 +0000 UTC m=+18.746253787"
Dec 13 02:18:22.548567 systemd[1]: run-containerd-runc-k8s.io-f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146-runc.5LSJLd.mount: Deactivated successfully.
Dec 13 02:18:23.328232 kubelet[2852]: I1213 02:18:23.328171 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-vwzhr" podStartSLOduration=3.2399760300000002 podStartE2EDuration="8.32814818s" podCreationTimestamp="2024-12-13 02:18:15 +0000 UTC" firstStartedPulling="2024-12-13 02:18:16.428247373 +0000 UTC m=+13.487031678" lastFinishedPulling="2024-12-13 02:18:21.516419533 +0000 UTC m=+18.575203828" observedRunningTime="2024-12-13 02:18:21.689113336 +0000 UTC m=+18.747897650" watchObservedRunningTime="2024-12-13 02:18:23.32814818 +0000 UTC m=+20.386932499"
Dec 13 02:18:29.802066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3080284853.mount: Deactivated successfully.
Dec 13 02:18:34.216596 env[1736]: time="2024-12-13T02:18:34.216539428Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:34.220548 env[1736]: time="2024-12-13T02:18:34.220502010Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:34.223519 env[1736]: time="2024-12-13T02:18:34.223468995Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 02:18:34.224215 env[1736]: time="2024-12-13T02:18:34.224174031Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 02:18:34.228630 env[1736]: time="2024-12-13T02:18:34.228502812Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:18:34.251974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2485360188.mount: Deactivated successfully.
Dec 13 02:18:34.262006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount363179725.mount: Deactivated successfully.
Dec 13 02:18:34.266630 env[1736]: time="2024-12-13T02:18:34.266579564Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\""
Dec 13 02:18:34.269655 env[1736]: time="2024-12-13T02:18:34.268526312Z" level=info msg="StartContainer for \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\""
Dec 13 02:18:34.303158 systemd[1]: Started cri-containerd-cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf.scope.
Dec 13 02:18:34.352963 env[1736]: time="2024-12-13T02:18:34.352006922Z" level=info msg="StartContainer for \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\" returns successfully"
Dec 13 02:18:34.370682 systemd[1]: cri-containerd-cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf.scope: Deactivated successfully.
Dec 13 02:18:34.663600 env[1736]: time="2024-12-13T02:18:34.663541476Z" level=info msg="shim disconnected" id=cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf
Dec 13 02:18:34.663600 env[1736]: time="2024-12-13T02:18:34.663596869Z" level=warning msg="cleaning up after shim disconnected" id=cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf namespace=k8s.io
Dec 13 02:18:34.664153 env[1736]: time="2024-12-13T02:18:34.663610233Z" level=info msg="cleaning up dead shim"
Dec 13 02:18:34.676705 env[1736]: time="2024-12-13T02:18:34.676641944Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:18:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3295 runtime=io.containerd.runc.v2\n"
Dec 13 02:18:34.734783 env[1736]: time="2024-12-13T02:18:34.734710102Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:18:34.756395 env[1736]: time="2024-12-13T02:18:34.756327986Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\""
Dec 13 02:18:34.757154 env[1736]: time="2024-12-13T02:18:34.757115814Z" level=info msg="StartContainer for \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\""
Dec 13 02:18:34.785347 systemd[1]: Started cri-containerd-76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3.scope.
Dec 13 02:18:34.838247 env[1736]: time="2024-12-13T02:18:34.838173736Z" level=info msg="StartContainer for \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\" returns successfully"
Dec 13 02:18:34.858309 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 02:18:34.859190 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 02:18:34.859893 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 02:18:34.862326 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:18:34.866882 systemd[1]: cri-containerd-76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3.scope: Deactivated successfully.
Dec 13 02:18:34.879812 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:18:34.916485 env[1736]: time="2024-12-13T02:18:34.916356423Z" level=info msg="shim disconnected" id=76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3
Dec 13 02:18:34.916485 env[1736]: time="2024-12-13T02:18:34.916449243Z" level=warning msg="cleaning up after shim disconnected" id=76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3 namespace=k8s.io
Dec 13 02:18:34.917425 env[1736]: time="2024-12-13T02:18:34.916465676Z" level=info msg="cleaning up dead shim"
Dec 13 02:18:34.955493 env[1736]: time="2024-12-13T02:18:34.955444415Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:18:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3361 runtime=io.containerd.runc.v2\n"
Dec 13 02:18:35.248057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf-rootfs.mount: Deactivated successfully.
Dec 13 02:18:35.748623 env[1736]: time="2024-12-13T02:18:35.748568256Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:18:35.797947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40706213.mount: Deactivated successfully.
Dec 13 02:18:35.812793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007276840.mount: Deactivated successfully.
Dec 13 02:18:35.816519 env[1736]: time="2024-12-13T02:18:35.816470187Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\""
Dec 13 02:18:35.817735 env[1736]: time="2024-12-13T02:18:35.817703057Z" level=info msg="StartContainer for \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\""
Dec 13 02:18:35.851901 systemd[1]: Started cri-containerd-be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8.scope.
Dec 13 02:18:35.892311 env[1736]: time="2024-12-13T02:18:35.892251821Z" level=info msg="StartContainer for \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\" returns successfully"
Dec 13 02:18:35.901474 systemd[1]: cri-containerd-be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8.scope: Deactivated successfully.
Dec 13 02:18:35.957227 env[1736]: time="2024-12-13T02:18:35.957177440Z" level=info msg="shim disconnected" id=be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8
Dec 13 02:18:35.957227 env[1736]: time="2024-12-13T02:18:35.957223897Z" level=warning msg="cleaning up after shim disconnected" id=be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8 namespace=k8s.io
Dec 13 02:18:35.957227 env[1736]: time="2024-12-13T02:18:35.957236877Z" level=info msg="cleaning up dead shim"
Dec 13 02:18:35.967572 env[1736]: time="2024-12-13T02:18:35.967526500Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:18:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3416 runtime=io.containerd.runc.v2\n"
Dec 13 02:18:36.749948 env[1736]: time="2024-12-13T02:18:36.749864684Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:18:36.775811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756626466.mount: Deactivated successfully.
Dec 13 02:18:36.786938 env[1736]: time="2024-12-13T02:18:36.786886870Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\""
Dec 13 02:18:36.787981 env[1736]: time="2024-12-13T02:18:36.787946144Z" level=info msg="StartContainer for \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\""
Dec 13 02:18:36.826106 systemd[1]: Started cri-containerd-74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e.scope.
Dec 13 02:18:36.867509 systemd[1]: cri-containerd-74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e.scope: Deactivated successfully.
Dec 13 02:18:36.872548 env[1736]: time="2024-12-13T02:18:36.872464581Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1439dffd_8e60_4462_91a6_f3a229a5140f.slice/cri-containerd-74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e.scope/memory.events\": no such file or directory"
Dec 13 02:18:36.873966 env[1736]: time="2024-12-13T02:18:36.873925126Z" level=info msg="StartContainer for \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\" returns successfully"
Dec 13 02:18:36.910905 env[1736]: time="2024-12-13T02:18:36.910853623Z" level=info msg="shim disconnected" id=74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e
Dec 13 02:18:36.910905 env[1736]: time="2024-12-13T02:18:36.910901794Z" level=warning msg="cleaning up after shim disconnected" id=74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e namespace=k8s.io
Dec 13 02:18:36.911308 env[1736]: time="2024-12-13T02:18:36.910914108Z" level=info msg="cleaning up dead shim"
Dec 13 02:18:36.921577 env[1736]: time="2024-12-13T02:18:36.921531232Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:18:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3472 runtime=io.containerd.runc.v2\n"
Dec 13 02:18:37.788190 env[1736]: time="2024-12-13T02:18:37.788121231Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:18:37.849146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount38204437.mount: Deactivated successfully.
Dec 13 02:18:37.866133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872363059.mount: Deactivated successfully.
Dec 13 02:18:37.869807 env[1736]: time="2024-12-13T02:18:37.869760869Z" level=info msg="CreateContainer within sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\""
Dec 13 02:18:37.873539 env[1736]: time="2024-12-13T02:18:37.870629820Z" level=info msg="StartContainer for \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\""
Dec 13 02:18:37.898902 systemd[1]: Started cri-containerd-7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f.scope.
Dec 13 02:18:37.943748 env[1736]: time="2024-12-13T02:18:37.943697913Z" level=info msg="StartContainer for \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\" returns successfully"
Dec 13 02:18:38.092797 kubelet[2852]: I1213 02:18:38.091522 2852 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 02:18:38.125990 kubelet[2852]: I1213 02:18:38.125917 2852 topology_manager.go:215] "Topology Admit Handler" podUID="70ca5381-00dd-4f5d-b784-a651d4cf84d0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fwd42"
Dec 13 02:18:38.127242 kubelet[2852]: I1213 02:18:38.127197 2852 topology_manager.go:215] "Topology Admit Handler" podUID="dc697eaf-28aa-4a18-bfe5-0bfed48442cd" podNamespace="kube-system" podName="coredns-7db6d8ff4d-jw94b"
Dec 13 02:18:38.146200 systemd[1]: Created slice kubepods-burstable-pod70ca5381_00dd_4f5d_b784_a651d4cf84d0.slice.
Dec 13 02:18:38.150172 systemd[1]: Created slice kubepods-burstable-poddc697eaf_28aa_4a18_bfe5_0bfed48442cd.slice.
Dec 13 02:18:38.154106 kubelet[2852]: I1213 02:18:38.154077 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70ca5381-00dd-4f5d-b784-a651d4cf84d0-config-volume\") pod \"coredns-7db6d8ff4d-fwd42\" (UID: \"70ca5381-00dd-4f5d-b784-a651d4cf84d0\") " pod="kube-system/coredns-7db6d8ff4d-fwd42"
Dec 13 02:18:38.154307 kubelet[2852]: I1213 02:18:38.154271 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc697eaf-28aa-4a18-bfe5-0bfed48442cd-config-volume\") pod \"coredns-7db6d8ff4d-jw94b\" (UID: \"dc697eaf-28aa-4a18-bfe5-0bfed48442cd\") " pod="kube-system/coredns-7db6d8ff4d-jw94b"
Dec 13 02:18:38.154408 kubelet[2852]: I1213 02:18:38.154312 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d7r4\" (UniqueName: \"kubernetes.io/projected/dc697eaf-28aa-4a18-bfe5-0bfed48442cd-kube-api-access-2d7r4\") pod \"coredns-7db6d8ff4d-jw94b\" (UID: \"dc697eaf-28aa-4a18-bfe5-0bfed48442cd\") " pod="kube-system/coredns-7db6d8ff4d-jw94b"
Dec 13 02:18:38.154408 kubelet[2852]: I1213 02:18:38.154342 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2tzj\" (UniqueName: \"kubernetes.io/projected/70ca5381-00dd-4f5d-b784-a651d4cf84d0-kube-api-access-g2tzj\") pod \"coredns-7db6d8ff4d-fwd42\" (UID: \"70ca5381-00dd-4f5d-b784-a651d4cf84d0\") " pod="kube-system/coredns-7db6d8ff4d-fwd42"
Dec 13 02:18:38.456333 env[1736]: time="2024-12-13T02:18:38.456212917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jw94b,Uid:dc697eaf-28aa-4a18-bfe5-0bfed48442cd,Namespace:kube-system,Attempt:0,}"
Dec 13 02:18:38.460394 env[1736]: time="2024-12-13T02:18:38.460340616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fwd42,Uid:70ca5381-00dd-4f5d-b784-a651d4cf84d0,Namespace:kube-system,Attempt:0,}"
Dec 13 02:18:40.482335 systemd-networkd[1458]: cilium_host: Link UP
Dec 13 02:18:40.514725 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 02:18:40.482502 systemd-networkd[1458]: cilium_net: Link UP
Dec 13 02:18:40.482508 systemd-networkd[1458]: cilium_net: Gained carrier
Dec 13 02:18:40.482691 systemd-networkd[1458]: cilium_host: Gained carrier
Dec 13 02:18:40.484851 (udev-worker)[3593]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:18:40.486624 (udev-worker)[3631]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:18:40.516203 systemd-networkd[1458]: cilium_host: Gained IPv6LL
Dec 13 02:18:40.691886 (udev-worker)[3659]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:18:40.699254 systemd-networkd[1458]: cilium_vxlan: Link UP
Dec 13 02:18:40.699264 systemd-networkd[1458]: cilium_vxlan: Gained carrier
Dec 13 02:18:41.120547 systemd-networkd[1458]: cilium_net: Gained IPv6LL
Dec 13 02:18:41.142405 kernel: NET: Registered PF_ALG protocol family
Dec 13 02:18:42.231157 systemd-networkd[1458]: lxc_health: Link UP
Dec 13 02:18:42.231965 (udev-worker)[3658]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:18:42.265912 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:18:42.265610 systemd-networkd[1458]: lxc_health: Gained carrier
Dec 13 02:18:42.354849 kubelet[2852]: I1213 02:18:42.352824 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mg7zt" podStartSLOduration=9.586476158 podStartE2EDuration="27.352788984s" podCreationTimestamp="2024-12-13 02:18:15 +0000 UTC" firstStartedPulling="2024-12-13 02:18:16.459501057 +0000 UTC m=+13.518285360" lastFinishedPulling="2024-12-13 02:18:34.225813877 +0000 UTC m=+31.284598186" observedRunningTime="2024-12-13 02:18:38.823226217 +0000 UTC m=+35.882010535" watchObservedRunningTime="2024-12-13 02:18:42.352788984 +0000 UTC m=+39.411573302"
Dec 13 02:18:42.355797 systemd-networkd[1458]: cilium_vxlan: Gained IPv6LL
Dec 13 02:18:43.131912 (udev-worker)[3960]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:18:43.138577 systemd-networkd[1458]: lxc975a04616377: Link UP
Dec 13 02:18:43.148184 kernel: eth0: renamed from tmp6526b
Dec 13 02:18:43.152933 systemd-networkd[1458]: lxc975a04616377: Gained carrier
Dec 13 02:18:43.153598 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc975a04616377: link becomes ready
Dec 13 02:18:43.180360 systemd-networkd[1458]: lxcbecbc05994d6: Link UP
Dec 13 02:18:43.195410 kernel: eth0: renamed from tmp515e9
Dec 13 02:18:43.203126 systemd-networkd[1458]: lxcbecbc05994d6: Gained carrier
Dec 13 02:18:43.203621 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbecbc05994d6: link becomes ready
Dec 13 02:18:44.239515 systemd-networkd[1458]: lxc_health: Gained IPv6LL
Dec 13 02:18:44.850739 systemd-networkd[1458]: lxc975a04616377: Gained IPv6LL
Dec 13 02:18:44.979022 systemd-networkd[1458]: lxcbecbc05994d6: Gained IPv6LL
Dec 13 02:18:48.678276 env[1736]: time="2024-12-13T02:18:48.678178849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:18:48.678780 env[1736]: time="2024-12-13T02:18:48.678296713Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:18:48.678780 env[1736]: time="2024-12-13T02:18:48.678329512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:18:48.678780 env[1736]: time="2024-12-13T02:18:48.678569900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6526bb78307e1668075855dfe483b38bf9a473e43b22140295800bd82848f798 pid=4010 runtime=io.containerd.runc.v2
Dec 13 02:18:48.695409 env[1736]: time="2024-12-13T02:18:48.693744108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:18:48.695409 env[1736]: time="2024-12-13T02:18:48.693890077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:18:48.695409 env[1736]: time="2024-12-13T02:18:48.693953389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:18:48.695409 env[1736]: time="2024-12-13T02:18:48.694575043Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/515e9d2fada66f05ef2f64603e6dc6bbbd35b189da5a301589fe5f6c9a606d8e pid=4020 runtime=io.containerd.runc.v2
Dec 13 02:18:48.744035 systemd[1]: run-containerd-runc-k8s.io-515e9d2fada66f05ef2f64603e6dc6bbbd35b189da5a301589fe5f6c9a606d8e-runc.JkgK71.mount: Deactivated successfully.
Dec 13 02:18:48.769035 systemd[1]: Started cri-containerd-515e9d2fada66f05ef2f64603e6dc6bbbd35b189da5a301589fe5f6c9a606d8e.scope.
Dec 13 02:18:48.772602 systemd[1]: Started cri-containerd-6526bb78307e1668075855dfe483b38bf9a473e43b22140295800bd82848f798.scope.
Dec 13 02:18:48.896879 env[1736]: time="2024-12-13T02:18:48.896823016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-jw94b,Uid:dc697eaf-28aa-4a18-bfe5-0bfed48442cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"515e9d2fada66f05ef2f64603e6dc6bbbd35b189da5a301589fe5f6c9a606d8e\""
Dec 13 02:18:48.905165 env[1736]: time="2024-12-13T02:18:48.904457069Z" level=info msg="CreateContainer within sandbox \"515e9d2fada66f05ef2f64603e6dc6bbbd35b189da5a301589fe5f6c9a606d8e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 02:18:48.936697 env[1736]: time="2024-12-13T02:18:48.936588140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fwd42,Uid:70ca5381-00dd-4f5d-b784-a651d4cf84d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"6526bb78307e1668075855dfe483b38bf9a473e43b22140295800bd82848f798\""
Dec 13 02:18:48.942039 env[1736]: time="2024-12-13T02:18:48.941992199Z" level=info msg="CreateContainer within sandbox \"6526bb78307e1668075855dfe483b38bf9a473e43b22140295800bd82848f798\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 02:18:48.944369 env[1736]: time="2024-12-13T02:18:48.944300181Z" level=info msg="CreateContainer within sandbox \"515e9d2fada66f05ef2f64603e6dc6bbbd35b189da5a301589fe5f6c9a606d8e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"edcc871ce095af092b11eed26f0f6b3ca16cb7f1a003e93016bf8cb85323b908\""
Dec 13 02:18:48.946528 env[1736]: time="2024-12-13T02:18:48.946490869Z" level=info msg="StartContainer for \"edcc871ce095af092b11eed26f0f6b3ca16cb7f1a003e93016bf8cb85323b908\""
Dec 13 02:18:48.991947 env[1736]: time="2024-12-13T02:18:48.991805275Z" level=info msg="CreateContainer within sandbox \"6526bb78307e1668075855dfe483b38bf9a473e43b22140295800bd82848f798\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b921f2e5544942f0d7d665083f0547fe51024ec5d0512d94b1e46d704218267\""
Dec 13 02:18:48.999407 env[1736]: time="2024-12-13T02:18:48.995355095Z" level=info msg="StartContainer for \"5b921f2e5544942f0d7d665083f0547fe51024ec5d0512d94b1e46d704218267\""
Dec 13 02:18:49.006746 systemd[1]: Started cri-containerd-edcc871ce095af092b11eed26f0f6b3ca16cb7f1a003e93016bf8cb85323b908.scope.
Dec 13 02:18:49.054310 systemd[1]: Started cri-containerd-5b921f2e5544942f0d7d665083f0547fe51024ec5d0512d94b1e46d704218267.scope.
Dec 13 02:18:49.116801 env[1736]: time="2024-12-13T02:18:49.116739312Z" level=info msg="StartContainer for \"edcc871ce095af092b11eed26f0f6b3ca16cb7f1a003e93016bf8cb85323b908\" returns successfully"
Dec 13 02:18:49.125150 env[1736]: time="2024-12-13T02:18:49.125095089Z" level=info msg="StartContainer for \"5b921f2e5544942f0d7d665083f0547fe51024ec5d0512d94b1e46d704218267\" returns successfully"
Dec 13 02:18:49.834954 systemd[1]: Started sshd@5-172.31.19.93:22-139.178.68.195:48960.service.
Dec 13 02:18:49.864867 kubelet[2852]: I1213 02:18:49.864473 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fwd42" podStartSLOduration=34.864447425 podStartE2EDuration="34.864447425s" podCreationTimestamp="2024-12-13 02:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:18:49.842760487 +0000 UTC m=+46.901544827" watchObservedRunningTime="2024-12-13 02:18:49.864447425 +0000 UTC m=+46.923231743"
Dec 13 02:18:49.864867 kubelet[2852]: I1213 02:18:49.864588 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jw94b" podStartSLOduration=34.864580695 podStartE2EDuration="34.864580695s" podCreationTimestamp="2024-12-13 02:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:18:49.863148198 +0000 UTC m=+46.921932513" watchObservedRunningTime="2024-12-13 02:18:49.864580695 +0000 UTC m=+46.923365014"
Dec 13 02:18:50.043094 sshd[4158]: Accepted publickey for core from 139.178.68.195 port 48960 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:50.045649 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:50.055518 systemd-logind[1723]: New session 6 of user core.
Dec 13 02:18:50.055582 systemd[1]: Started session-6.scope.
Dec 13 02:18:50.558311 sshd[4158]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:50.572753 systemd[1]: sshd@5-172.31.19.93:22-139.178.68.195:48960.service: Deactivated successfully.
Dec 13 02:18:50.576758 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 02:18:50.582344 systemd-logind[1723]: Session 6 logged out. Waiting for processes to exit.
Dec 13 02:18:50.583735 systemd-logind[1723]: Removed session 6.
Dec 13 02:18:55.580610 systemd[1]: Started sshd@6-172.31.19.93:22-139.178.68.195:48970.service.
Dec 13 02:18:55.768316 sshd[4177]: Accepted publickey for core from 139.178.68.195 port 48970 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:18:55.770019 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:18:55.777113 systemd[1]: Started session-7.scope.
Dec 13 02:18:55.777754 systemd-logind[1723]: New session 7 of user core.
Dec 13 02:18:56.048197 sshd[4177]: pam_unix(sshd:session): session closed for user core
Dec 13 02:18:56.052352 systemd[1]: sshd@6-172.31.19.93:22-139.178.68.195:48970.service: Deactivated successfully.
Dec 13 02:18:56.053360 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:18:56.054486 systemd-logind[1723]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:18:56.056149 systemd-logind[1723]: Removed session 7.
Dec 13 02:19:01.079590 systemd[1]: Started sshd@7-172.31.19.93:22-139.178.68.195:60226.service.
Dec 13 02:19:01.272298 sshd[4189]: Accepted publickey for core from 139.178.68.195 port 60226 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:01.274104 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:01.287553 systemd-logind[1723]: New session 8 of user core.
Dec 13 02:19:01.288047 systemd[1]: Started session-8.scope.
Dec 13 02:19:01.716004 sshd[4189]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:01.725276 systemd-logind[1723]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:19:01.725662 systemd[1]: sshd@7-172.31.19.93:22-139.178.68.195:60226.service: Deactivated successfully.
Dec 13 02:19:01.726950 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:19:01.734574 systemd-logind[1723]: Removed session 8.
Dec 13 02:19:06.742321 systemd[1]: Started sshd@8-172.31.19.93:22-139.178.68.195:36832.service.
Dec 13 02:19:06.906689 sshd[4203]: Accepted publickey for core from 139.178.68.195 port 36832 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:06.908505 sshd[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:06.917857 systemd[1]: Started session-9.scope.
Dec 13 02:19:06.919652 systemd-logind[1723]: New session 9 of user core.
Dec 13 02:19:07.130373 sshd[4203]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:07.134085 systemd[1]: sshd@8-172.31.19.93:22-139.178.68.195:36832.service: Deactivated successfully.
Dec 13 02:19:07.135136 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:19:07.135974 systemd-logind[1723]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:19:07.136922 systemd-logind[1723]: Removed session 9.
Dec 13 02:19:07.158511 systemd[1]: Started sshd@9-172.31.19.93:22-139.178.68.195:36836.service.
Dec 13 02:19:07.322576 sshd[4215]: Accepted publickey for core from 139.178.68.195 port 36836 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:07.324083 sshd[4215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:07.329459 systemd[1]: Started session-10.scope.
Dec 13 02:19:07.330236 systemd-logind[1723]: New session 10 of user core.
Dec 13 02:19:07.666730 sshd[4215]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:07.672134 systemd-logind[1723]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:19:07.673957 systemd[1]: sshd@9-172.31.19.93:22-139.178.68.195:36836.service: Deactivated successfully.
Dec 13 02:19:07.675348 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:19:07.677075 systemd-logind[1723]: Removed session 10.
Dec 13 02:19:07.695868 systemd[1]: Started sshd@10-172.31.19.93:22-139.178.68.195:36838.service.
Dec 13 02:19:07.891071 sshd[4225]: Accepted publickey for core from 139.178.68.195 port 36838 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:07.893075 sshd[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:07.902823 systemd-logind[1723]: New session 11 of user core.
Dec 13 02:19:07.904342 systemd[1]: Started session-11.scope.
Dec 13 02:19:08.166959 sshd[4225]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:08.170976 systemd[1]: sshd@10-172.31.19.93:22-139.178.68.195:36838.service: Deactivated successfully.
Dec 13 02:19:08.172250 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 02:19:08.173071 systemd-logind[1723]: Session 11 logged out. Waiting for processes to exit.
Dec 13 02:19:08.174340 systemd-logind[1723]: Removed session 11.
Dec 13 02:19:13.205439 systemd[1]: Started sshd@11-172.31.19.93:22-139.178.68.195:36852.service.
Dec 13 02:19:13.365497 sshd[4238]: Accepted publickey for core from 139.178.68.195 port 36852 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:13.367100 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:13.372339 systemd-logind[1723]: New session 12 of user core.
Dec 13 02:19:13.372960 systemd[1]: Started session-12.scope.
Dec 13 02:19:13.599195 sshd[4238]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:13.603094 systemd[1]: sshd@11-172.31.19.93:22-139.178.68.195:36852.service: Deactivated successfully.
Dec 13 02:19:13.604065 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:19:13.604847 systemd-logind[1723]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:19:13.607437 systemd-logind[1723]: Removed session 12.
Dec 13 02:19:18.626431 systemd[1]: Started sshd@12-172.31.19.93:22-139.178.68.195:51582.service.
Dec 13 02:19:18.787838 sshd[4252]: Accepted publickey for core from 139.178.68.195 port 51582 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:18.789616 sshd[4252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:18.800358 systemd-logind[1723]: New session 13 of user core.
Dec 13 02:19:18.800644 systemd[1]: Started session-13.scope.
Dec 13 02:19:19.026509 sshd[4252]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:19.030366 systemd[1]: sshd@12-172.31.19.93:22-139.178.68.195:51582.service: Deactivated successfully.
Dec 13 02:19:19.031315 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:19:19.032159 systemd-logind[1723]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:19:19.033240 systemd-logind[1723]: Removed session 13.
Dec 13 02:19:24.055504 systemd[1]: Started sshd@13-172.31.19.93:22-139.178.68.195:51586.service.
Dec 13 02:19:24.246805 sshd[4264]: Accepted publickey for core from 139.178.68.195 port 51586 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:24.248973 sshd[4264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:24.255317 systemd[1]: Started session-14.scope.
Dec 13 02:19:24.255839 systemd-logind[1723]: New session 14 of user core.
Dec 13 02:19:24.490068 sshd[4264]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:24.493993 systemd-logind[1723]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:19:24.494245 systemd[1]: sshd@13-172.31.19.93:22-139.178.68.195:51586.service: Deactivated successfully.
Dec 13 02:19:24.495329 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:19:24.496478 systemd-logind[1723]: Removed session 14.
Dec 13 02:19:24.517276 systemd[1]: Started sshd@14-172.31.19.93:22-139.178.68.195:51596.service.
Dec 13 02:19:24.676732 sshd[4276]: Accepted publickey for core from 139.178.68.195 port 51596 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:24.678513 sshd[4276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:24.694231 systemd-logind[1723]: New session 15 of user core.
Dec 13 02:19:24.694911 systemd[1]: Started session-15.scope.
Dec 13 02:19:25.444044 sshd[4276]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:25.458376 systemd[1]: sshd@14-172.31.19.93:22-139.178.68.195:51596.service: Deactivated successfully.
Dec 13 02:19:25.459593 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:19:25.461004 systemd-logind[1723]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:19:25.462115 systemd-logind[1723]: Removed session 15.
Dec 13 02:19:25.470947 systemd[1]: Started sshd@15-172.31.19.93:22-139.178.68.195:51598.service.
Dec 13 02:19:25.648876 sshd[4286]: Accepted publickey for core from 139.178.68.195 port 51598 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:25.650483 sshd[4286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:25.656478 systemd-logind[1723]: New session 16 of user core.
Dec 13 02:19:25.656664 systemd[1]: Started session-16.scope.
Dec 13 02:19:28.284800 sshd[4286]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:28.290759 systemd[1]: sshd@15-172.31.19.93:22-139.178.68.195:51598.service: Deactivated successfully.
Dec 13 02:19:28.292481 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:19:28.293757 systemd-logind[1723]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:19:28.295126 systemd-logind[1723]: Removed session 16.
Dec 13 02:19:28.313729 systemd[1]: Started sshd@16-172.31.19.93:22-139.178.68.195:42502.service.
Dec 13 02:19:28.484420 sshd[4302]: Accepted publickey for core from 139.178.68.195 port 42502 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:28.486684 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:28.492876 systemd[1]: Started session-17.scope.
Dec 13 02:19:28.493785 systemd-logind[1723]: New session 17 of user core.
Dec 13 02:19:29.049680 sshd[4302]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:29.053782 systemd[1]: sshd@16-172.31.19.93:22-139.178.68.195:42502.service: Deactivated successfully.
Dec 13 02:19:29.056259 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:19:29.057365 systemd-logind[1723]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:19:29.058492 systemd-logind[1723]: Removed session 17.
Dec 13 02:19:29.078059 systemd[1]: Started sshd@17-172.31.19.93:22-139.178.68.195:42504.service.
Dec 13 02:19:29.259201 sshd[4312]: Accepted publickey for core from 139.178.68.195 port 42504 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:29.265003 sshd[4312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:29.275159 systemd[1]: Started session-18.scope.
Dec 13 02:19:29.276096 systemd-logind[1723]: New session 18 of user core.
Dec 13 02:19:29.514006 sshd[4312]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:29.518482 systemd[1]: sshd@17-172.31.19.93:22-139.178.68.195:42504.service: Deactivated successfully.
Dec 13 02:19:29.520023 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:19:29.520456 systemd-logind[1723]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:19:29.522141 systemd-logind[1723]: Removed session 18.
Dec 13 02:19:34.543130 systemd[1]: Started sshd@18-172.31.19.93:22-139.178.68.195:42520.service.
Dec 13 02:19:34.742506 sshd[4324]: Accepted publickey for core from 139.178.68.195 port 42520 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:34.744086 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:34.760012 systemd[1]: Started session-19.scope.
Dec 13 02:19:34.760665 systemd-logind[1723]: New session 19 of user core.
Dec 13 02:19:34.999521 sshd[4324]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:35.006796 systemd[1]: sshd@18-172.31.19.93:22-139.178.68.195:42520.service: Deactivated successfully.
Dec 13 02:19:35.007906 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:19:35.008898 systemd-logind[1723]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:19:35.009820 systemd-logind[1723]: Removed session 19.
Dec 13 02:19:40.024789 systemd[1]: Started sshd@19-172.31.19.93:22-139.178.68.195:45862.service.
Dec 13 02:19:40.188585 sshd[4339]: Accepted publickey for core from 139.178.68.195 port 45862 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:40.191184 sshd[4339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:40.201377 systemd[1]: Started session-20.scope.
Dec 13 02:19:40.202981 systemd-logind[1723]: New session 20 of user core.
Dec 13 02:19:40.458591 sshd[4339]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:40.464942 systemd[1]: sshd@19-172.31.19.93:22-139.178.68.195:45862.service: Deactivated successfully.
Dec 13 02:19:40.466245 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 02:19:40.467377 systemd-logind[1723]: Session 20 logged out. Waiting for processes to exit.
Dec 13 02:19:40.469350 systemd-logind[1723]: Removed session 20.
Dec 13 02:19:45.486368 systemd[1]: Started sshd@20-172.31.19.93:22-139.178.68.195:45878.service.
Dec 13 02:19:45.671424 sshd[4351]: Accepted publickey for core from 139.178.68.195 port 45878 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:45.675932 sshd[4351]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:45.691430 systemd[1]: Started session-21.scope.
Dec 13 02:19:45.695435 systemd-logind[1723]: New session 21 of user core.
Dec 13 02:19:45.903042 sshd[4351]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:45.906978 systemd-logind[1723]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:19:45.907209 systemd[1]: sshd@20-172.31.19.93:22-139.178.68.195:45878.service: Deactivated successfully.
Dec 13 02:19:45.908558 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:19:45.910050 systemd-logind[1723]: Removed session 21.
Dec 13 02:19:50.936070 systemd[1]: Started sshd@21-172.31.19.93:22-139.178.68.195:53964.service.
Dec 13 02:19:51.101001 sshd[4366]: Accepted publickey for core from 139.178.68.195 port 53964 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:51.103064 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:51.109061 systemd[1]: Started session-22.scope.
Dec 13 02:19:51.110719 systemd-logind[1723]: New session 22 of user core.
Dec 13 02:19:51.322635 sshd[4366]: pam_unix(sshd:session): session closed for user core
Dec 13 02:19:51.335229 systemd[1]: sshd@21-172.31.19.93:22-139.178.68.195:53964.service: Deactivated successfully.
Dec 13 02:19:51.341456 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:19:51.344420 systemd-logind[1723]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:19:51.365873 systemd[1]: Started sshd@22-172.31.19.93:22-139.178.68.195:53980.service.
Dec 13 02:19:51.370126 systemd-logind[1723]: Removed session 22.
Dec 13 02:19:51.556790 sshd[4378]: Accepted publickey for core from 139.178.68.195 port 53980 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:19:51.558480 sshd[4378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:19:51.564254 systemd[1]: Started session-23.scope.
Dec 13 02:19:51.565068 systemd-logind[1723]: New session 23 of user core.
Dec 13 02:19:53.708432 env[1736]: time="2024-12-13T02:19:53.708088964Z" level=info msg="StopContainer for \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\" with timeout 30 (s)"
Dec 13 02:19:53.709718 env[1736]: time="2024-12-13T02:19:53.709585686Z" level=info msg="Stop container \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\" with signal terminated"
Dec 13 02:19:53.750333 systemd[1]: run-containerd-runc-k8s.io-7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f-runc.J2rgpC.mount: Deactivated successfully.
Dec 13 02:19:53.815561 env[1736]: time="2024-12-13T02:19:53.815005446Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:19:53.817119 systemd[1]: cri-containerd-f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146.scope: Deactivated successfully.
Dec 13 02:19:53.829554 env[1736]: time="2024-12-13T02:19:53.829511029Z" level=info msg="StopContainer for \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\" with timeout 2 (s)"
Dec 13 02:19:53.830287 env[1736]: time="2024-12-13T02:19:53.830257813Z" level=info msg="Stop container \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\" with signal terminated"
Dec 13 02:19:53.847437 systemd-networkd[1458]: lxc_health: Link DOWN
Dec 13 02:19:53.847447 systemd-networkd[1458]: lxc_health: Lost carrier
Dec 13 02:19:53.879390 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146-rootfs.mount: Deactivated successfully.
Dec 13 02:19:53.975290 env[1736]: time="2024-12-13T02:19:53.975181396Z" level=info msg="shim disconnected" id=f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146
Dec 13 02:19:53.975912 env[1736]: time="2024-12-13T02:19:53.975885664Z" level=warning msg="cleaning up after shim disconnected" id=f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146 namespace=k8s.io
Dec 13 02:19:53.976013 env[1736]: time="2024-12-13T02:19:53.975997821Z" level=info msg="cleaning up dead shim"
Dec 13 02:19:53.976835 systemd[1]: cri-containerd-7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f.scope: Deactivated successfully.
Dec 13 02:19:53.977826 systemd[1]: cri-containerd-7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f.scope: Consumed 8.215s CPU time.
Dec 13 02:19:53.999522 env[1736]: time="2024-12-13T02:19:53.999483436Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4433 runtime=io.containerd.runc.v2\n" Dec 13 02:19:54.003674 env[1736]: time="2024-12-13T02:19:54.003623435Z" level=info msg="StopContainer for \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\" returns successfully" Dec 13 02:19:54.004881 env[1736]: time="2024-12-13T02:19:54.004843762Z" level=info msg="StopPodSandbox for \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\"" Dec 13 02:19:54.005004 env[1736]: time="2024-12-13T02:19:54.004922936Z" level=info msg="Container to stop \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:19:54.007691 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba-shm.mount: Deactivated successfully. Dec 13 02:19:54.028111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f-rootfs.mount: Deactivated successfully. Dec 13 02:19:54.033851 systemd[1]: cri-containerd-e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba.scope: Deactivated successfully. 
Dec 13 02:19:54.055545 env[1736]: time="2024-12-13T02:19:54.055419790Z" level=info msg="shim disconnected" id=7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f Dec 13 02:19:54.056337 env[1736]: time="2024-12-13T02:19:54.056307303Z" level=warning msg="cleaning up after shim disconnected" id=7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f namespace=k8s.io Dec 13 02:19:54.056495 env[1736]: time="2024-12-13T02:19:54.056474944Z" level=info msg="cleaning up dead shim" Dec 13 02:19:54.082555 env[1736]: time="2024-12-13T02:19:54.082502851Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4464 runtime=io.containerd.runc.v2\n" Dec 13 02:19:54.086046 env[1736]: time="2024-12-13T02:19:54.086002015Z" level=info msg="StopContainer for \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\" returns successfully" Dec 13 02:19:54.086980 env[1736]: time="2024-12-13T02:19:54.086947376Z" level=info msg="StopPodSandbox for \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\"" Dec 13 02:19:54.087952 env[1736]: time="2024-12-13T02:19:54.087049351Z" level=info msg="Container to stop \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:19:54.087952 env[1736]: time="2024-12-13T02:19:54.087074172Z" level=info msg="Container to stop \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:19:54.087952 env[1736]: time="2024-12-13T02:19:54.087091840Z" level=info msg="Container to stop \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:19:54.087952 env[1736]: time="2024-12-13T02:19:54.087110478Z" level=info msg="Container to stop 
\"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:19:54.087952 env[1736]: time="2024-12-13T02:19:54.087121348Z" level=info msg="Container to stop \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:19:54.100419 systemd[1]: cri-containerd-0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044.scope: Deactivated successfully. Dec 13 02:19:54.112477 env[1736]: time="2024-12-13T02:19:54.112263862Z" level=info msg="shim disconnected" id=e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba Dec 13 02:19:54.113108 env[1736]: time="2024-12-13T02:19:54.112478832Z" level=warning msg="cleaning up after shim disconnected" id=e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba namespace=k8s.io Dec 13 02:19:54.113108 env[1736]: time="2024-12-13T02:19:54.112941369Z" level=info msg="cleaning up dead shim" Dec 13 02:19:54.144255 env[1736]: time="2024-12-13T02:19:54.144193749Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4497 runtime=io.containerd.runc.v2\n" Dec 13 02:19:54.146027 env[1736]: time="2024-12-13T02:19:54.145952857Z" level=info msg="TearDown network for sandbox \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\" successfully" Dec 13 02:19:54.146027 env[1736]: time="2024-12-13T02:19:54.146015960Z" level=info msg="StopPodSandbox for \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\" returns successfully" Dec 13 02:19:54.193888 env[1736]: time="2024-12-13T02:19:54.193837074Z" level=info msg="shim disconnected" id=0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044 Dec 13 02:19:54.194295 env[1736]: time="2024-12-13T02:19:54.194267885Z" level=warning msg="cleaning up after shim disconnected" 
id=0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044 namespace=k8s.io Dec 13 02:19:54.194493 env[1736]: time="2024-12-13T02:19:54.194472203Z" level=info msg="cleaning up dead shim" Dec 13 02:19:54.211910 env[1736]: time="2024-12-13T02:19:54.211826638Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4521 runtime=io.containerd.runc.v2\n" Dec 13 02:19:54.213132 env[1736]: time="2024-12-13T02:19:54.213083165Z" level=info msg="TearDown network for sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" successfully" Dec 13 02:19:54.213132 env[1736]: time="2024-12-13T02:19:54.213125680Z" level=info msg="StopPodSandbox for \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" returns successfully" Dec 13 02:19:54.285424 kubelet[2852]: I1213 02:19:54.284977 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-cgroup\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.286180 kubelet[2852]: I1213 02:19:54.286147 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-etc-cni-netd\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.286371 kubelet[2852]: I1213 02:19:54.286354 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1439dffd-8e60-4462-91a6-f3a229a5140f-hubble-tls\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.286515 kubelet[2852]: I1213 02:19:54.286500 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-xtables-lock\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.290794 kubelet[2852]: I1213 02:19:54.286644 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5r8c\" (UniqueName: \"kubernetes.io/projected/1439dffd-8e60-4462-91a6-f3a229a5140f-kube-api-access-w5r8c\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.290794 kubelet[2852]: I1213 02:19:54.286677 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9s64p\" (UniqueName: \"kubernetes.io/projected/19797b68-934c-429d-a7f8-c7f19ea2c0d1-kube-api-access-9s64p\") pod \"19797b68-934c-429d-a7f8-c7f19ea2c0d1\" (UID: \"19797b68-934c-429d-a7f8-c7f19ea2c0d1\") " Dec 13 02:19:54.290794 kubelet[2852]: I1213 02:19:54.286703 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-run\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.290794 kubelet[2852]: I1213 02:19:54.286728 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-hostproc\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.297129 kubelet[2852]: I1213 02:19:54.292405 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.298574 kubelet[2852]: I1213 02:19:54.286750 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-host-proc-sys-kernel\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.299916 kubelet[2852]: I1213 02:19:54.299581 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1439dffd-8e60-4462-91a6-f3a229a5140f-clustermesh-secrets\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.300098 kubelet[2852]: I1213 02:19:54.300074 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-config-path\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.301318 kubelet[2852]: I1213 02:19:54.300111 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-host-proc-sys-net\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.301318 kubelet[2852]: I1213 02:19:54.300134 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-lib-modules\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.301318 kubelet[2852]: I1213 02:19:54.301229 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume 
started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-bpf-maps\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.301318 kubelet[2852]: I1213 02:19:54.301278 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cni-path\") pod \"1439dffd-8e60-4462-91a6-f3a229a5140f\" (UID: \"1439dffd-8e60-4462-91a6-f3a229a5140f\") " Dec 13 02:19:54.301318 kubelet[2852]: I1213 02:19:54.301312 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19797b68-934c-429d-a7f8-c7f19ea2c0d1-cilium-config-path\") pod \"19797b68-934c-429d-a7f8-c7f19ea2c0d1\" (UID: \"19797b68-934c-429d-a7f8-c7f19ea2c0d1\") " Dec 13 02:19:54.301611 kubelet[2852]: I1213 02:19:54.301379 2852 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-etc-cni-netd\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.304420 kubelet[2852]: I1213 02:19:54.292371 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.309710 kubelet[2852]: I1213 02:19:54.309647 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). 
InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.310180 kubelet[2852]: I1213 02:19:54.310140 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.310955 kubelet[2852]: I1213 02:19:54.310482 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.310955 kubelet[2852]: I1213 02:19:54.310552 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cni-path" (OuterVolumeSpecName: "cni-path") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.310955 kubelet[2852]: I1213 02:19:54.285981 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.312449 kubelet[2852]: I1213 02:19:54.312420 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.313331 kubelet[2852]: I1213 02:19:54.313294 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.313454 kubelet[2852]: I1213 02:19:54.313340 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-hostproc" (OuterVolumeSpecName: "hostproc") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:54.329247 kubelet[2852]: I1213 02:19:54.329003 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19797b68-934c-429d-a7f8-c7f19ea2c0d1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "19797b68-934c-429d-a7f8-c7f19ea2c0d1" (UID: "19797b68-934c-429d-a7f8-c7f19ea2c0d1"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:19:54.336154 kubelet[2852]: I1213 02:19:54.336079 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1439dffd-8e60-4462-91a6-f3a229a5140f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:19:54.336998 kubelet[2852]: I1213 02:19:54.336856 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19797b68-934c-429d-a7f8-c7f19ea2c0d1-kube-api-access-9s64p" (OuterVolumeSpecName: "kube-api-access-9s64p") pod "19797b68-934c-429d-a7f8-c7f19ea2c0d1" (UID: "19797b68-934c-429d-a7f8-c7f19ea2c0d1"). InnerVolumeSpecName "kube-api-access-9s64p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:19:54.338215 kubelet[2852]: I1213 02:19:54.338133 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:19:54.341677 kubelet[2852]: I1213 02:19:54.341648 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1439dffd-8e60-4462-91a6-f3a229a5140f-kube-api-access-w5r8c" (OuterVolumeSpecName: "kube-api-access-w5r8c") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "kube-api-access-w5r8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:19:54.343207 kubelet[2852]: I1213 02:19:54.343174 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1439dffd-8e60-4462-91a6-f3a229a5140f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1439dffd-8e60-4462-91a6-f3a229a5140f" (UID: "1439dffd-8e60-4462-91a6-f3a229a5140f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:19:54.402733 kubelet[2852]: I1213 02:19:54.402692 2852 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-xtables-lock\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.402733 kubelet[2852]: I1213 02:19:54.402731 2852 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-w5r8c\" (UniqueName: \"kubernetes.io/projected/1439dffd-8e60-4462-91a6-f3a229a5140f-kube-api-access-w5r8c\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.402733 kubelet[2852]: I1213 02:19:54.402751 2852 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-9s64p\" (UniqueName: \"kubernetes.io/projected/19797b68-934c-429d-a7f8-c7f19ea2c0d1-kube-api-access-9s64p\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403100 kubelet[2852]: I1213 02:19:54.402765 2852 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-run\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403100 kubelet[2852]: I1213 02:19:54.402777 2852 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-hostproc\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403100 kubelet[2852]: I1213 02:19:54.402788 2852 reconciler_common.go:289] "Volume detached for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-host-proc-sys-kernel\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403100 kubelet[2852]: I1213 02:19:54.402798 2852 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1439dffd-8e60-4462-91a6-f3a229a5140f-clustermesh-secrets\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403100 kubelet[2852]: I1213 02:19:54.402808 2852 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-config-path\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403100 kubelet[2852]: I1213 02:19:54.402818 2852 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-host-proc-sys-net\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403100 kubelet[2852]: I1213 02:19:54.402830 2852 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-lib-modules\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403100 kubelet[2852]: I1213 02:19:54.402840 2852 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-bpf-maps\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403337 kubelet[2852]: I1213 02:19:54.402851 2852 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/19797b68-934c-429d-a7f8-c7f19ea2c0d1-cilium-config-path\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403337 kubelet[2852]: I1213 02:19:54.402863 2852 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cni-path\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403337 kubelet[2852]: I1213 02:19:54.402873 2852 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1439dffd-8e60-4462-91a6-f3a229a5140f-hubble-tls\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.403337 kubelet[2852]: I1213 02:19:54.402914 2852 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1439dffd-8e60-4462-91a6-f3a229a5140f-cilium-cgroup\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:54.729508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044-rootfs.mount: Deactivated successfully. Dec 13 02:19:54.729715 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044-shm.mount: Deactivated successfully. Dec 13 02:19:54.729796 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba-rootfs.mount: Deactivated successfully. Dec 13 02:19:54.729875 systemd[1]: var-lib-kubelet-pods-19797b68\x2d934c\x2d429d\x2da7f8\x2dc7f19ea2c0d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9s64p.mount: Deactivated successfully. Dec 13 02:19:54.729955 systemd[1]: var-lib-kubelet-pods-1439dffd\x2d8e60\x2d4462\x2d91a6\x2df3a229a5140f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw5r8c.mount: Deactivated successfully. Dec 13 02:19:54.730038 systemd[1]: var-lib-kubelet-pods-1439dffd\x2d8e60\x2d4462\x2d91a6\x2df3a229a5140f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 02:19:54.730186 systemd[1]: var-lib-kubelet-pods-1439dffd\x2d8e60\x2d4462\x2d91a6\x2df3a229a5140f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:19:55.040936 kubelet[2852]: I1213 02:19:55.040841 2852 scope.go:117] "RemoveContainer" containerID="f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146" Dec 13 02:19:55.043617 env[1736]: time="2024-12-13T02:19:55.043522665Z" level=info msg="RemoveContainer for \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\"" Dec 13 02:19:55.052799 env[1736]: time="2024-12-13T02:19:55.052746382Z" level=info msg="RemoveContainer for \"f1186afa389a22dc29372ed9de3b11e403de72bae99a9782841140516db18146\" returns successfully" Dec 13 02:19:55.053879 kubelet[2852]: I1213 02:19:55.053585 2852 scope.go:117] "RemoveContainer" containerID="7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f" Dec 13 02:19:55.055079 systemd[1]: Removed slice kubepods-besteffort-pod19797b68_934c_429d_a7f8_c7f19ea2c0d1.slice. Dec 13 02:19:55.069735 env[1736]: time="2024-12-13T02:19:55.067759905Z" level=info msg="RemoveContainer for \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\"" Dec 13 02:19:55.068997 systemd[1]: Removed slice kubepods-burstable-pod1439dffd_8e60_4462_91a6_f3a229a5140f.slice. Dec 13 02:19:55.069118 systemd[1]: kubepods-burstable-pod1439dffd_8e60_4462_91a6_f3a229a5140f.slice: Consumed 8.339s CPU time. 
Dec 13 02:19:55.076326 env[1736]: time="2024-12-13T02:19:55.075898472Z" level=info msg="RemoveContainer for \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\" returns successfully" Dec 13 02:19:55.076724 kubelet[2852]: I1213 02:19:55.076688 2852 scope.go:117] "RemoveContainer" containerID="74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e" Dec 13 02:19:55.082258 env[1736]: time="2024-12-13T02:19:55.081224209Z" level=info msg="RemoveContainer for \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\"" Dec 13 02:19:55.087162 env[1736]: time="2024-12-13T02:19:55.087119552Z" level=info msg="RemoveContainer for \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\" returns successfully" Dec 13 02:19:55.087431 kubelet[2852]: I1213 02:19:55.087407 2852 scope.go:117] "RemoveContainer" containerID="be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8" Dec 13 02:19:55.090179 env[1736]: time="2024-12-13T02:19:55.089612129Z" level=info msg="RemoveContainer for \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\"" Dec 13 02:19:55.097730 env[1736]: time="2024-12-13T02:19:55.096681225Z" level=info msg="RemoveContainer for \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\" returns successfully" Dec 13 02:19:55.098520 kubelet[2852]: I1213 02:19:55.098478 2852 scope.go:117] "RemoveContainer" containerID="76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3" Dec 13 02:19:55.104946 env[1736]: time="2024-12-13T02:19:55.104686049Z" level=info msg="RemoveContainer for \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\"" Dec 13 02:19:55.112450 env[1736]: time="2024-12-13T02:19:55.112393301Z" level=info msg="RemoveContainer for \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\" returns successfully" Dec 13 02:19:55.112794 kubelet[2852]: I1213 02:19:55.112768 2852 scope.go:117] "RemoveContainer" 
containerID="cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf" Dec 13 02:19:55.115093 env[1736]: time="2024-12-13T02:19:55.115012830Z" level=info msg="RemoveContainer for \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\"" Dec 13 02:19:55.124456 env[1736]: time="2024-12-13T02:19:55.124334001Z" level=info msg="RemoveContainer for \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\" returns successfully" Dec 13 02:19:55.125276 kubelet[2852]: I1213 02:19:55.125234 2852 scope.go:117] "RemoveContainer" containerID="7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f" Dec 13 02:19:55.128345 env[1736]: time="2024-12-13T02:19:55.128081677Z" level=error msg="ContainerStatus for \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\": not found" Dec 13 02:19:55.131867 kubelet[2852]: E1213 02:19:55.131756 2852 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\": not found" containerID="7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f" Dec 13 02:19:55.134986 kubelet[2852]: I1213 02:19:55.132116 2852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f"} err="failed to get container status \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e8a6a930eae1378577bfae6d7454bfd318e79e4c9a4ac24ef82e575ca065b6f\": not found" Dec 13 02:19:55.135129 kubelet[2852]: I1213 02:19:55.134990 2852 scope.go:117] "RemoveContainer" 
containerID="74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e" Dec 13 02:19:55.135521 env[1736]: time="2024-12-13T02:19:55.135448506Z" level=error msg="ContainerStatus for \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\": not found" Dec 13 02:19:55.135761 kubelet[2852]: E1213 02:19:55.135664 2852 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\": not found" containerID="74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e" Dec 13 02:19:55.135842 kubelet[2852]: I1213 02:19:55.135770 2852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e"} err="failed to get container status \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"74c5c2b651753f89b139140783f1119cb459b2e29ea895a4b11b3351eafb7e0e\": not found" Dec 13 02:19:55.135842 kubelet[2852]: I1213 02:19:55.135799 2852 scope.go:117] "RemoveContainer" containerID="be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8" Dec 13 02:19:55.136371 env[1736]: time="2024-12-13T02:19:55.136104914Z" level=error msg="ContainerStatus for \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\": not found" Dec 13 02:19:55.136632 kubelet[2852]: E1213 02:19:55.136558 2852 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\": not found" containerID="be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8" Dec 13 02:19:55.136800 kubelet[2852]: I1213 02:19:55.136634 2852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8"} err="failed to get container status \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\": rpc error: code = NotFound desc = an error occurred when try to find container \"be883b267d169c25b2790876f525e3e156cedd51aff556664930ae627f875ef8\": not found" Dec 13 02:19:55.136800 kubelet[2852]: I1213 02:19:55.136655 2852 scope.go:117] "RemoveContainer" containerID="76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3" Dec 13 02:19:55.137255 env[1736]: time="2024-12-13T02:19:55.137202885Z" level=error msg="ContainerStatus for \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\": not found" Dec 13 02:19:55.137688 kubelet[2852]: E1213 02:19:55.137665 2852 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\": not found" containerID="76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3" Dec 13 02:19:55.137781 kubelet[2852]: I1213 02:19:55.137709 2852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3"} err="failed to get container status \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"76a3ef34ef7beff9d9498e6397d320e505a3740cba04a8f1c43d8d8558602fb3\": not found" Dec 13 02:19:55.137781 kubelet[2852]: I1213 02:19:55.137730 2852 scope.go:117] "RemoveContainer" containerID="cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf" Dec 13 02:19:55.138027 env[1736]: time="2024-12-13T02:19:55.137973383Z" level=error msg="ContainerStatus for \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\": not found" Dec 13 02:19:55.138166 kubelet[2852]: E1213 02:19:55.138140 2852 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\": not found" containerID="cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf" Dec 13 02:19:55.138242 kubelet[2852]: I1213 02:19:55.138170 2852 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf"} err="failed to get container status \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd80539490e664ad2c34c40aebb021de67811e988251c9de37fa78c7a441a9bf\": not found" Dec 13 02:19:55.325686 kubelet[2852]: I1213 02:19:55.322489 2852 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1439dffd-8e60-4462-91a6-f3a229a5140f" path="/var/lib/kubelet/pods/1439dffd-8e60-4462-91a6-f3a229a5140f/volumes" Dec 13 02:19:55.325686 kubelet[2852]: I1213 02:19:55.324916 2852 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19797b68-934c-429d-a7f8-c7f19ea2c0d1" 
path="/var/lib/kubelet/pods/19797b68-934c-429d-a7f8-c7f19ea2c0d1/volumes" Dec 13 02:19:55.593762 sshd[4378]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:55.598849 systemd[1]: sshd@22-172.31.19.93:22-139.178.68.195:53980.service: Deactivated successfully. Dec 13 02:19:55.602974 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:19:55.605524 systemd-logind[1723]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:19:55.610676 systemd-logind[1723]: Removed session 23. Dec 13 02:19:55.624073 systemd[1]: Started sshd@23-172.31.19.93:22-139.178.68.195:53994.service. Dec 13 02:19:55.816165 sshd[4542]: Accepted publickey for core from 139.178.68.195 port 53994 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:55.820307 sshd[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:55.828460 systemd-logind[1723]: New session 24 of user core. Dec 13 02:19:55.829225 systemd[1]: Started session-24.scope. Dec 13 02:19:56.781636 sshd[4542]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:56.785890 systemd-logind[1723]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:19:56.788529 systemd[1]: sshd@23-172.31.19.93:22-139.178.68.195:53994.service: Deactivated successfully. Dec 13 02:19:56.789566 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:19:56.792034 systemd-logind[1723]: Removed session 24. Dec 13 02:19:56.823951 systemd[1]: Started sshd@24-172.31.19.93:22-139.178.68.195:42296.service. 
Dec 13 02:19:56.866846 kubelet[2852]: I1213 02:19:56.866808 2852 topology_manager.go:215] "Topology Admit Handler" podUID="9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" podNamespace="kube-system" podName="cilium-rtw26" Dec 13 02:19:56.871264 kubelet[2852]: E1213 02:19:56.871227 2852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19797b68-934c-429d-a7f8-c7f19ea2c0d1" containerName="cilium-operator" Dec 13 02:19:56.871530 kubelet[2852]: E1213 02:19:56.871513 2852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1439dffd-8e60-4462-91a6-f3a229a5140f" containerName="mount-bpf-fs" Dec 13 02:19:56.871644 kubelet[2852]: E1213 02:19:56.871632 2852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1439dffd-8e60-4462-91a6-f3a229a5140f" containerName="mount-cgroup" Dec 13 02:19:56.871737 kubelet[2852]: E1213 02:19:56.871725 2852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1439dffd-8e60-4462-91a6-f3a229a5140f" containerName="apply-sysctl-overwrites" Dec 13 02:19:56.871822 kubelet[2852]: E1213 02:19:56.871812 2852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1439dffd-8e60-4462-91a6-f3a229a5140f" containerName="clean-cilium-state" Dec 13 02:19:56.871908 kubelet[2852]: E1213 02:19:56.871897 2852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1439dffd-8e60-4462-91a6-f3a229a5140f" containerName="cilium-agent" Dec 13 02:19:56.872077 kubelet[2852]: I1213 02:19:56.872051 2852 memory_manager.go:354] "RemoveStaleState removing state" podUID="19797b68-934c-429d-a7f8-c7f19ea2c0d1" containerName="cilium-operator" Dec 13 02:19:56.872244 kubelet[2852]: I1213 02:19:56.872231 2852 memory_manager.go:354] "RemoveStaleState removing state" podUID="1439dffd-8e60-4462-91a6-f3a229a5140f" containerName="cilium-agent" Dec 13 02:19:56.890643 systemd[1]: Created slice kubepods-burstable-pod9e2cb78a_8d0e_4a9a_8cdf_ab8bc88d771a.slice. 
Dec 13 02:19:56.926317 kubelet[2852]: I1213 02:19:56.926173 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cni-path\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.926589 kubelet[2852]: I1213 02:19:56.926570 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-bpf-maps\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.926777 kubelet[2852]: I1213 02:19:56.926762 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-xtables-lock\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.926929 kubelet[2852]: I1213 02:19:56.926912 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-host-proc-sys-kernel\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.927066 kubelet[2852]: I1213 02:19:56.927050 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vstxt\" (UniqueName: \"kubernetes.io/projected/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-kube-api-access-vstxt\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.927202 kubelet[2852]: I1213 02:19:56.927186 2852 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-run\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.927346 kubelet[2852]: I1213 02:19:56.927332 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-lib-modules\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.927514 kubelet[2852]: I1213 02:19:56.927499 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-host-proc-sys-net\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.927675 kubelet[2852]: I1213 02:19:56.927661 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-hostproc\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.927816 kubelet[2852]: I1213 02:19:56.927801 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-clustermesh-secrets\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.927944 kubelet[2852]: I1213 02:19:56.927930 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-ipsec-secrets\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.928072 kubelet[2852]: I1213 02:19:56.928058 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-hubble-tls\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.928214 kubelet[2852]: I1213 02:19:56.928200 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-cgroup\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.928352 kubelet[2852]: I1213 02:19:56.928338 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-etc-cni-netd\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:56.928556 kubelet[2852]: I1213 02:19:56.928533 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-config-path\") pod \"cilium-rtw26\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " pod="kube-system/cilium-rtw26" Dec 13 02:19:57.000531 sshd[4552]: Accepted publickey for core from 139.178.68.195 port 42296 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:57.002191 sshd[4552]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:57.014293 systemd[1]: Started 
session-25.scope. Dec 13 02:19:57.015040 systemd-logind[1723]: New session 25 of user core. Dec 13 02:19:57.200995 env[1736]: time="2024-12-13T02:19:57.199783109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rtw26,Uid:9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:57.201596 amazon-ssm-agent[1711]: 2024-12-13 02:19:57 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 02:19:57.267631 env[1736]: time="2024-12-13T02:19:57.267532858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:57.267631 env[1736]: time="2024-12-13T02:19:57.267583753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:57.267631 env[1736]: time="2024-12-13T02:19:57.267600351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:57.268563 env[1736]: time="2024-12-13T02:19:57.268036932Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4 pid=4574 runtime=io.containerd.runc.v2 Dec 13 02:19:57.286795 systemd[1]: Started cri-containerd-f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4.scope. 
Dec 13 02:19:57.375748 env[1736]: time="2024-12-13T02:19:57.375691036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rtw26,Uid:9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\"" Dec 13 02:19:57.390941 env[1736]: time="2024-12-13T02:19:57.390889236Z" level=info msg="CreateContainer within sandbox \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:19:57.426389 env[1736]: time="2024-12-13T02:19:57.425887444Z" level=info msg="CreateContainer within sandbox \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259\"" Dec 13 02:19:57.426891 env[1736]: time="2024-12-13T02:19:57.426859483Z" level=info msg="StartContainer for \"73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259\"" Dec 13 02:19:57.480446 systemd[1]: Started cri-containerd-73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259.scope. Dec 13 02:19:57.490064 sshd[4552]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:57.498652 systemd[1]: sshd@24-172.31.19.93:22-139.178.68.195:42296.service: Deactivated successfully. Dec 13 02:19:57.499761 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:19:57.504247 systemd-logind[1723]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:19:57.506495 systemd-logind[1723]: Removed session 25. Dec 13 02:19:57.516775 systemd[1]: Started sshd@25-172.31.19.93:22-139.178.68.195:42310.service. Dec 13 02:19:57.533430 systemd[1]: cri-containerd-73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259.scope: Deactivated successfully. 
Dec 13 02:19:57.588818 env[1736]: time="2024-12-13T02:19:57.588731151Z" level=info msg="shim disconnected" id=73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259 Dec 13 02:19:57.589123 env[1736]: time="2024-12-13T02:19:57.589097348Z" level=warning msg="cleaning up after shim disconnected" id=73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259 namespace=k8s.io Dec 13 02:19:57.589236 env[1736]: time="2024-12-13T02:19:57.589222497Z" level=info msg="cleaning up dead shim" Dec 13 02:19:57.610048 env[1736]: time="2024-12-13T02:19:57.609983070Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4639 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:19:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 02:19:57.610990 env[1736]: time="2024-12-13T02:19:57.610817541Z" level=error msg="copy shim log" error="read /proc/self/fd/42: file already closed" Dec 13 02:19:57.612831 env[1736]: time="2024-12-13T02:19:57.611887368Z" level=error msg="Failed to pipe stderr of container \"73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259\"" error="reading from a closed fifo" Dec 13 02:19:57.613014 env[1736]: time="2024-12-13T02:19:57.612484380Z" level=error msg="Failed to pipe stdout of container \"73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259\"" error="reading from a closed fifo" Dec 13 02:19:57.615203 env[1736]: time="2024-12-13T02:19:57.615148313Z" level=error msg="StartContainer for \"73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: 
write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 02:19:57.615601 kubelet[2852]: E1213 02:19:57.615552 2852 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259" Dec 13 02:19:57.621792 kubelet[2852]: E1213 02:19:57.621685 2852 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 02:19:57.621792 kubelet[2852]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 02:19:57.621792 kubelet[2852]: rm /hostbin/cilium-mount Dec 13 02:19:57.621990 kubelet[2852]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vstxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-rtw26_kube-system(9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 02:19:57.626712 kubelet[2852]: E1213 02:19:57.626662 2852 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rtw26" podUID="9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" Dec 13 02:19:57.715680 sshd[4636]: Accepted publickey for core from 139.178.68.195 port 42310 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:57.718082 sshd[4636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:57.725842 systemd[1]: Started session-26.scope. Dec 13 02:19:57.726827 systemd-logind[1723]: New session 26 of user core. Dec 13 02:19:58.144854 env[1736]: time="2024-12-13T02:19:58.144813193Z" level=info msg="StopPodSandbox for \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\"" Dec 13 02:19:58.145118 env[1736]: time="2024-12-13T02:19:58.145090280Z" level=info msg="Container to stop \"73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:19:58.165514 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4-shm.mount: Deactivated successfully. Dec 13 02:19:58.184370 systemd[1]: cri-containerd-f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4.scope: Deactivated successfully. Dec 13 02:19:58.232335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4-rootfs.mount: Deactivated successfully. 
Dec 13 02:19:58.257924 env[1736]: time="2024-12-13T02:19:58.257859523Z" level=info msg="shim disconnected" id=f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4 Dec 13 02:19:58.258933 env[1736]: time="2024-12-13T02:19:58.258891973Z" level=warning msg="cleaning up after shim disconnected" id=f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4 namespace=k8s.io Dec 13 02:19:58.259124 env[1736]: time="2024-12-13T02:19:58.258923367Z" level=info msg="cleaning up dead shim" Dec 13 02:19:58.272228 env[1736]: time="2024-12-13T02:19:58.272149262Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4677 runtime=io.containerd.runc.v2\n" Dec 13 02:19:58.272607 env[1736]: time="2024-12-13T02:19:58.272569123Z" level=info msg="TearDown network for sandbox \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\" successfully" Dec 13 02:19:58.272607 env[1736]: time="2024-12-13T02:19:58.272598704Z" level=info msg="StopPodSandbox for \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\" returns successfully" Dec 13 02:19:58.361888 kubelet[2852]: I1213 02:19:58.361784 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cni-path\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362153 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-clustermesh-secrets\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362189 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-hubble-tls\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362213 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-cgroup\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362237 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-hostproc\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362263 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-bpf-maps\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362285 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-run\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362305 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-xtables-lock\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362342 2852 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-ipsec-secrets\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362365 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vstxt\" (UniqueName: \"kubernetes.io/projected/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-kube-api-access-vstxt\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362406 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-etc-cni-netd\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362431 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-config-path\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362452 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-host-proc-sys-kernel\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362475 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-lib-modules\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" 
(UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.362538 kubelet[2852]: I1213 02:19:58.362496 2852 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-host-proc-sys-net\") pod \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\" (UID: \"9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a\") " Dec 13 02:19:58.363198 kubelet[2852]: I1213 02:19:58.363061 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cni-path" (OuterVolumeSpecName: "cni-path") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.369326 kubelet[2852]: I1213 02:19:58.369271 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.369755 kubelet[2852]: I1213 02:19:58.369714 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-hostproc" (OuterVolumeSpecName: "hostproc") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.371635 systemd[1]: var-lib-kubelet-pods-9e2cb78a\x2d8d0e\x2d4a9a\x2d8cdf\x2dab8bc88d771a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:19:58.371857 systemd[1]: var-lib-kubelet-pods-9e2cb78a\x2d8d0e\x2d4a9a\x2d8cdf\x2dab8bc88d771a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:19:58.378775 systemd[1]: var-lib-kubelet-pods-9e2cb78a\x2d8d0e\x2d4a9a\x2d8cdf\x2dab8bc88d771a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:19:58.379856 kubelet[2852]: I1213 02:19:58.379612 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.379856 kubelet[2852]: I1213 02:19:58.379646 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.379856 kubelet[2852]: I1213 02:19:58.379664 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.380030 kubelet[2852]: I1213 02:19:58.379926 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.380030 kubelet[2852]: I1213 02:19:58.379953 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.382633 kubelet[2852]: I1213 02:19:58.382604 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.382805 kubelet[2852]: I1213 02:19:58.382788 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:19:58.383336 kubelet[2852]: I1213 02:19:58.383309 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:19:58.383464 kubelet[2852]: I1213 02:19:58.383446 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:19:58.383541 kubelet[2852]: I1213 02:19:58.383522 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:19:58.384153 kubelet[2852]: I1213 02:19:58.384128 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:19:58.386294 kubelet[2852]: I1213 02:19:58.386247 2852 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-kube-api-access-vstxt" (OuterVolumeSpecName: "kube-api-access-vstxt") pod "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" (UID: "9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a"). InnerVolumeSpecName "kube-api-access-vstxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.462893 2852 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vstxt\" (UniqueName: \"kubernetes.io/projected/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-kube-api-access-vstxt\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.462931 2852 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-etc-cni-netd\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.462952 2852 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-config-path\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.462966 2852 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-host-proc-sys-kernel\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.462979 2852 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-host-proc-sys-net\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463008 2852 
reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-lib-modules\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463021 2852 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cni-path\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463135 2852 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-clustermesh-secrets\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463153 2852 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-hubble-tls\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463173 2852 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-cgroup\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463186 2852 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-bpf-maps\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463197 2852 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-run\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463209 2852 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-hostproc\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463748 2852 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-xtables-lock\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.468206 kubelet[2852]: I1213 02:19:58.463775 2852 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a-cilium-ipsec-secrets\") on node \"ip-172-31-19-93\" DevicePath \"\"" Dec 13 02:19:58.784525 kubelet[2852]: E1213 02:19:58.784404 2852 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:19:59.040818 systemd[1]: var-lib-kubelet-pods-9e2cb78a\x2d8d0e\x2d4a9a\x2d8cdf\x2dab8bc88d771a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvstxt.mount: Deactivated successfully. Dec 13 02:19:59.148572 kubelet[2852]: I1213 02:19:59.148545 2852 scope.go:117] "RemoveContainer" containerID="73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259" Dec 13 02:19:59.152776 env[1736]: time="2024-12-13T02:19:59.152221033Z" level=info msg="RemoveContainer for \"73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259\"" Dec 13 02:19:59.158696 env[1736]: time="2024-12-13T02:19:59.158650660Z" level=info msg="RemoveContainer for \"73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259\" returns successfully" Dec 13 02:19:59.161066 systemd[1]: Removed slice kubepods-burstable-pod9e2cb78a_8d0e_4a9a_8cdf_ab8bc88d771a.slice. 
Dec 13 02:19:59.234044 kubelet[2852]: I1213 02:19:59.234000 2852 topology_manager.go:215] "Topology Admit Handler" podUID="2e46f978-54be-4fe2-8b8b-f767bae67bca" podNamespace="kube-system" podName="cilium-fnfmj" Dec 13 02:19:59.234208 kubelet[2852]: E1213 02:19:59.234084 2852 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" containerName="mount-cgroup" Dec 13 02:19:59.234208 kubelet[2852]: I1213 02:19:59.234127 2852 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" containerName="mount-cgroup" Dec 13 02:19:59.246096 systemd[1]: Created slice kubepods-burstable-pod2e46f978_54be_4fe2_8b8b_f767bae67bca.slice. Dec 13 02:19:59.269935 kubelet[2852]: I1213 02:19:59.269893 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-lib-modules\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.270419 kubelet[2852]: I1213 02:19:59.270239 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-host-proc-sys-net\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.270872 kubelet[2852]: I1213 02:19:59.270839 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ftgj\" (UniqueName: \"kubernetes.io/projected/2e46f978-54be-4fe2-8b8b-f767bae67bca-kube-api-access-4ftgj\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.270967 kubelet[2852]: I1213 02:19:59.270887 2852 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-cilium-run\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.270967 kubelet[2852]: I1213 02:19:59.270917 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-hostproc\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.270967 kubelet[2852]: I1213 02:19:59.270944 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-xtables-lock\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.271111 kubelet[2852]: I1213 02:19:59.270970 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-cilium-cgroup\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.271111 kubelet[2852]: I1213 02:19:59.270996 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-cni-path\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.271111 kubelet[2852]: I1213 02:19:59.271019 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-host-proc-sys-kernel\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.271111 kubelet[2852]: I1213 02:19:59.271042 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e46f978-54be-4fe2-8b8b-f767bae67bca-hubble-tls\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.271111 kubelet[2852]: I1213 02:19:59.271066 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-etc-cni-netd\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.271111 kubelet[2852]: I1213 02:19:59.271094 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e46f978-54be-4fe2-8b8b-f767bae67bca-clustermesh-secrets\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.272839 kubelet[2852]: I1213 02:19:59.271118 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e46f978-54be-4fe2-8b8b-f767bae67bca-cilium-config-path\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.272839 kubelet[2852]: I1213 02:19:59.271145 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e46f978-54be-4fe2-8b8b-f767bae67bca-bpf-maps\") pod \"cilium-fnfmj\" (UID: 
\"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.272839 kubelet[2852]: I1213 02:19:59.271168 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/2e46f978-54be-4fe2-8b8b-f767bae67bca-cilium-ipsec-secrets\") pod \"cilium-fnfmj\" (UID: \"2e46f978-54be-4fe2-8b8b-f767bae67bca\") " pod="kube-system/cilium-fnfmj" Dec 13 02:19:59.314189 kubelet[2852]: I1213 02:19:59.314040 2852 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a" path="/var/lib/kubelet/pods/9e2cb78a-8d0e-4a9a-8cdf-ab8bc88d771a/volumes" Dec 13 02:19:59.561683 env[1736]: time="2024-12-13T02:19:59.561633503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnfmj,Uid:2e46f978-54be-4fe2-8b8b-f767bae67bca,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:59.583892 env[1736]: time="2024-12-13T02:19:59.583741555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:59.583892 env[1736]: time="2024-12-13T02:19:59.583791812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:59.584103 env[1736]: time="2024-12-13T02:19:59.583807177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:59.584468 env[1736]: time="2024-12-13T02:19:59.584347691Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0 pid=4705 runtime=io.containerd.runc.v2 Dec 13 02:19:59.602553 systemd[1]: Started cri-containerd-ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0.scope. 
Dec 13 02:19:59.637812 env[1736]: time="2024-12-13T02:19:59.637760911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fnfmj,Uid:2e46f978-54be-4fe2-8b8b-f767bae67bca,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\"" Dec 13 02:19:59.641452 env[1736]: time="2024-12-13T02:19:59.641409669Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:19:59.658046 env[1736]: time="2024-12-13T02:19:59.657995303Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03\"" Dec 13 02:19:59.660144 env[1736]: time="2024-12-13T02:19:59.658865252Z" level=info msg="StartContainer for \"60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03\"" Dec 13 02:19:59.682190 systemd[1]: Started cri-containerd-60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03.scope. Dec 13 02:19:59.718923 env[1736]: time="2024-12-13T02:19:59.718881694Z" level=info msg="StartContainer for \"60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03\" returns successfully" Dec 13 02:19:59.740707 systemd[1]: cri-containerd-60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03.scope: Deactivated successfully. 
Dec 13 02:19:59.804412 env[1736]: time="2024-12-13T02:19:59.804342583Z" level=info msg="shim disconnected" id=60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03 Dec 13 02:19:59.804750 env[1736]: time="2024-12-13T02:19:59.804715679Z" level=warning msg="cleaning up after shim disconnected" id=60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03 namespace=k8s.io Dec 13 02:19:59.804836 env[1736]: time="2024-12-13T02:19:59.804739368Z" level=info msg="cleaning up dead shim" Dec 13 02:19:59.824360 env[1736]: time="2024-12-13T02:19:59.824303405Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:19:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4788 runtime=io.containerd.runc.v2\n" Dec 13 02:20:00.210411 env[1736]: time="2024-12-13T02:20:00.202110073Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:20:00.244981 env[1736]: time="2024-12-13T02:20:00.244909856Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9\"" Dec 13 02:20:00.246320 env[1736]: time="2024-12-13T02:20:00.246136169Z" level=info msg="StartContainer for \"66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9\"" Dec 13 02:20:00.289991 systemd[1]: Started cri-containerd-66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9.scope. Dec 13 02:20:00.323160 env[1736]: time="2024-12-13T02:20:00.323111481Z" level=info msg="StartContainer for \"66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9\" returns successfully" Dec 13 02:20:00.339270 systemd[1]: cri-containerd-66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9.scope: Deactivated successfully. 
Dec 13 02:20:00.379226 env[1736]: time="2024-12-13T02:20:00.379170809Z" level=info msg="shim disconnected" id=66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9 Dec 13 02:20:00.379226 env[1736]: time="2024-12-13T02:20:00.379219151Z" level=warning msg="cleaning up after shim disconnected" id=66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9 namespace=k8s.io Dec 13 02:20:00.379226 env[1736]: time="2024-12-13T02:20:00.379230956Z" level=info msg="cleaning up dead shim" Dec 13 02:20:00.393017 env[1736]: time="2024-12-13T02:20:00.392970183Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4849 runtime=io.containerd.runc.v2\n" Dec 13 02:20:00.711587 kubelet[2852]: W1213 02:20:00.711527 2852 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e2cb78a_8d0e_4a9a_8cdf_ab8bc88d771a.slice/cri-containerd-73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259.scope WatchSource:0}: container "73f6d87c933f71451e1cda0d04515ebdd72aa596877280fc722a111720153259" in namespace "k8s.io": not found Dec 13 02:20:01.043734 systemd[1]: run-containerd-runc-k8s.io-66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9-runc.5sFXr6.mount: Deactivated successfully. Dec 13 02:20:01.044733 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9-rootfs.mount: Deactivated successfully. 
Dec 13 02:20:01.211401 env[1736]: time="2024-12-13T02:20:01.211335827Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:20:01.274969 env[1736]: time="2024-12-13T02:20:01.274910337Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2\"" Dec 13 02:20:01.276248 env[1736]: time="2024-12-13T02:20:01.276206138Z" level=info msg="StartContainer for \"596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2\"" Dec 13 02:20:01.396297 systemd[1]: Started cri-containerd-596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2.scope. Dec 13 02:20:01.482291 env[1736]: time="2024-12-13T02:20:01.482110116Z" level=info msg="StartContainer for \"596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2\" returns successfully" Dec 13 02:20:01.498502 systemd[1]: cri-containerd-596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2.scope: Deactivated successfully. 
Dec 13 02:20:01.610687 env[1736]: time="2024-12-13T02:20:01.610622241Z" level=info msg="shim disconnected" id=596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2 Dec 13 02:20:01.610687 env[1736]: time="2024-12-13T02:20:01.610684915Z" level=warning msg="cleaning up after shim disconnected" id=596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2 namespace=k8s.io Dec 13 02:20:01.610687 env[1736]: time="2024-12-13T02:20:01.610697042Z" level=info msg="cleaning up dead shim" Dec 13 02:20:01.676030 env[1736]: time="2024-12-13T02:20:01.675879336Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4906 runtime=io.containerd.runc.v2\n" Dec 13 02:20:02.042683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2-rootfs.mount: Deactivated successfully. Dec 13 02:20:02.225866 env[1736]: time="2024-12-13T02:20:02.225818488Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:20:02.365633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550512446.mount: Deactivated successfully. Dec 13 02:20:02.388412 env[1736]: time="2024-12-13T02:20:02.388284677Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc\"" Dec 13 02:20:02.392421 env[1736]: time="2024-12-13T02:20:02.391915631Z" level=info msg="StartContainer for \"47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc\"" Dec 13 02:20:02.519333 systemd[1]: Started cri-containerd-47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc.scope. 
Dec 13 02:20:02.660474 systemd[1]: cri-containerd-47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc.scope: Deactivated successfully. Dec 13 02:20:02.672035 env[1736]: time="2024-12-13T02:20:02.671964178Z" level=info msg="StartContainer for \"47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc\" returns successfully" Dec 13 02:20:02.752350 env[1736]: time="2024-12-13T02:20:02.752106023Z" level=info msg="shim disconnected" id=47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc Dec 13 02:20:02.754117 env[1736]: time="2024-12-13T02:20:02.753938366Z" level=warning msg="cleaning up after shim disconnected" id=47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc namespace=k8s.io Dec 13 02:20:02.754117 env[1736]: time="2024-12-13T02:20:02.754110333Z" level=info msg="cleaning up dead shim" Dec 13 02:20:02.791926 env[1736]: time="2024-12-13T02:20:02.791878719Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4961 runtime=io.containerd.runc.v2\n" Dec 13 02:20:03.042820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc-rootfs.mount: Deactivated successfully. Dec 13 02:20:03.234064 env[1736]: time="2024-12-13T02:20:03.233866677Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:20:03.297571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476417202.mount: Deactivated successfully. 
Dec 13 02:20:03.334130 env[1736]: time="2024-12-13T02:20:03.333596347Z" level=info msg="CreateContainer within sandbox \"ea8810b4f918c2b530555b949d7a3bc46b90d2af17ab07815a79bbfa92e993c0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea8e4082917e450678defebd27ec6330f5089a23266200625ddceb50c781beb4\"" Dec 13 02:20:03.343166 env[1736]: time="2024-12-13T02:20:03.343123978Z" level=info msg="StartContainer for \"ea8e4082917e450678defebd27ec6330f5089a23266200625ddceb50c781beb4\"" Dec 13 02:20:03.417498 env[1736]: time="2024-12-13T02:20:03.417435269Z" level=info msg="StopPodSandbox for \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\"" Dec 13 02:20:03.417680 env[1736]: time="2024-12-13T02:20:03.417614096Z" level=info msg="TearDown network for sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" successfully" Dec 13 02:20:03.417749 env[1736]: time="2024-12-13T02:20:03.417682541Z" level=info msg="StopPodSandbox for \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" returns successfully" Dec 13 02:20:03.418474 env[1736]: time="2024-12-13T02:20:03.418441879Z" level=info msg="RemovePodSandbox for \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\"" Dec 13 02:20:03.418597 env[1736]: time="2024-12-13T02:20:03.418498362Z" level=info msg="Forcibly stopping sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\"" Dec 13 02:20:03.418654 env[1736]: time="2024-12-13T02:20:03.418610538Z" level=info msg="TearDown network for sandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" successfully" Dec 13 02:20:03.440832 env[1736]: time="2024-12-13T02:20:03.440768056Z" level=info msg="RemovePodSandbox \"0e28ee8e8112d5b141f919c5e583316b6ddb568aa457affcbe6638242f5ce044\" returns successfully" Dec 13 02:20:03.441690 env[1736]: time="2024-12-13T02:20:03.441643883Z" level=info msg="StopPodSandbox for 
\"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\"" Dec 13 02:20:03.444459 env[1736]: time="2024-12-13T02:20:03.444360385Z" level=info msg="TearDown network for sandbox \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\" successfully" Dec 13 02:20:03.444700 env[1736]: time="2024-12-13T02:20:03.444461616Z" level=info msg="StopPodSandbox for \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\" returns successfully" Dec 13 02:20:03.445498 env[1736]: time="2024-12-13T02:20:03.445463115Z" level=info msg="RemovePodSandbox for \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\"" Dec 13 02:20:03.445674 env[1736]: time="2024-12-13T02:20:03.445625824Z" level=info msg="Forcibly stopping sandbox \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\"" Dec 13 02:20:03.448515 env[1736]: time="2024-12-13T02:20:03.445927832Z" level=info msg="TearDown network for sandbox \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\" successfully" Dec 13 02:20:03.453040 systemd[1]: Started cri-containerd-ea8e4082917e450678defebd27ec6330f5089a23266200625ddceb50c781beb4.scope. 
Dec 13 02:20:03.478776 env[1736]: time="2024-12-13T02:20:03.478705668Z" level=info msg="RemovePodSandbox \"e00b39791083e79d0d76a5d4b466d79280113b556424a7bd34299756b63bd5ba\" returns successfully" Dec 13 02:20:03.484405 env[1736]: time="2024-12-13T02:20:03.481104550Z" level=info msg="StopPodSandbox for \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\"" Dec 13 02:20:03.484405 env[1736]: time="2024-12-13T02:20:03.481220645Z" level=info msg="TearDown network for sandbox \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\" successfully" Dec 13 02:20:03.484405 env[1736]: time="2024-12-13T02:20:03.481270744Z" level=info msg="StopPodSandbox for \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\" returns successfully" Dec 13 02:20:03.484405 env[1736]: time="2024-12-13T02:20:03.481711235Z" level=info msg="RemovePodSandbox for \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\"" Dec 13 02:20:03.484405 env[1736]: time="2024-12-13T02:20:03.481741947Z" level=info msg="Forcibly stopping sandbox \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\"" Dec 13 02:20:03.484405 env[1736]: time="2024-12-13T02:20:03.481831465Z" level=info msg="TearDown network for sandbox \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\" successfully" Dec 13 02:20:03.495197 env[1736]: time="2024-12-13T02:20:03.492240991Z" level=info msg="RemovePodSandbox \"f3c1e343c14784ad8cdaa45e98294a78737f2668a06f671cd3f4c997fcfd90a4\" returns successfully" Dec 13 02:20:03.555605 env[1736]: time="2024-12-13T02:20:03.554648967Z" level=info msg="StartContainer for \"ea8e4082917e450678defebd27ec6330f5089a23266200625ddceb50c781beb4\" returns successfully" Dec 13 02:20:03.786069 kubelet[2852]: E1213 02:20:03.786017 2852 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:20:03.836502 
kubelet[2852]: W1213 02:20:03.831300 2852 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e46f978_54be_4fe2_8b8b_f767bae67bca.slice/cri-containerd-60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03.scope WatchSource:0}: task 60a635a43df6f3d71f123d8f5866a56fd288940138c65860e0dc07ff25a7af03 not found: not found Dec 13 02:20:04.262290 kubelet[2852]: I1213 02:20:04.262204 2852 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fnfmj" podStartSLOduration=5.262180497 podStartE2EDuration="5.262180497s" podCreationTimestamp="2024-12-13 02:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:20:04.259982027 +0000 UTC m=+121.318766346" watchObservedRunningTime="2024-12-13 02:20:04.262180497 +0000 UTC m=+121.320964820" Dec 13 02:20:04.671414 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 02:20:04.780787 systemd[1]: run-containerd-runc-k8s.io-ea8e4082917e450678defebd27ec6330f5089a23266200625ddceb50c781beb4-runc.wZdlD9.mount: Deactivated successfully. 
Dec 13 02:20:05.828679 kubelet[2852]: I1213 02:20:05.828618 2852 setters.go:580] "Node became not ready" node="ip-172-31-19-93" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:20:05Z","lastTransitionTime":"2024-12-13T02:20:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:20:06.958489 kubelet[2852]: W1213 02:20:06.958443 2852 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e46f978_54be_4fe2_8b8b_f767bae67bca.slice/cri-containerd-66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9.scope WatchSource:0}: task 66211247c634ef3a587cac2fa3afd10d230858548715345312d3db2296cf3ec9 not found: not found Dec 13 02:20:07.194519 systemd[1]: run-containerd-runc-k8s.io-ea8e4082917e450678defebd27ec6330f5089a23266200625ddceb50c781beb4-runc.oQcGjX.mount: Deactivated successfully. Dec 13 02:20:08.147895 (udev-worker)[5555]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:20:08.147896 (udev-worker)[5556]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:20:08.148934 systemd-networkd[1458]: lxc_health: Link UP Dec 13 02:20:08.159218 systemd-networkd[1458]: lxc_health: Gained carrier Dec 13 02:20:08.159444 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:20:09.672584 systemd[1]: run-containerd-runc-k8s.io-ea8e4082917e450678defebd27ec6330f5089a23266200625ddceb50c781beb4-runc.A6Kzhs.mount: Deactivated successfully. 
Dec 13 02:20:09.906542 systemd-networkd[1458]: lxc_health: Gained IPv6LL
Dec 13 02:20:10.070319 kubelet[2852]: W1213 02:20:10.070097 2852 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e46f978_54be_4fe2_8b8b_f767bae67bca.slice/cri-containerd-596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2.scope WatchSource:0}: task 596f39cebdb38a071ce5b262d5809202a4cd1c861fca22f743321a71c42ff0a2 not found: not found
Dec 13 02:20:12.313196 systemd[1]: run-containerd-runc-k8s.io-ea8e4082917e450678defebd27ec6330f5089a23266200625ddceb50c781beb4-runc.k3zYyD.mount: Deactivated successfully.
Dec 13 02:20:13.212963 kubelet[2852]: W1213 02:20:13.212922 2852 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e46f978_54be_4fe2_8b8b_f767bae67bca.slice/cri-containerd-47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc.scope WatchSource:0}: task 47b9340d13feb3ebec7589b01a953830db987b61ee909bab91d38ebe71ce82bc not found: not found
Dec 13 02:20:14.677378 sshd[4636]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:14.682233 systemd-logind[1723]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:20:14.684805 systemd[1]: sshd@25-172.31.19.93:22-139.178.68.195:42310.service: Deactivated successfully.
Dec 13 02:20:14.685864 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:20:14.687902 systemd-logind[1723]: Removed session 26.
Dec 13 02:20:28.018890 systemd[1]: cri-containerd-3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907.scope: Deactivated successfully.
Dec 13 02:20:28.019331 systemd[1]: cri-containerd-3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907.scope: Consumed 3.655s CPU time.
Dec 13 02:20:28.053814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907-rootfs.mount: Deactivated successfully.
Dec 13 02:20:28.065099 env[1736]: time="2024-12-13T02:20:28.065036460Z" level=info msg="shim disconnected" id=3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907
Dec 13 02:20:28.065099 env[1736]: time="2024-12-13T02:20:28.065095656Z" level=warning msg="cleaning up after shim disconnected" id=3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907 namespace=k8s.io
Dec 13 02:20:28.065858 env[1736]: time="2024-12-13T02:20:28.065109150Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:28.077216 env[1736]: time="2024-12-13T02:20:28.077166364Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5666 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:28.293471 kubelet[2852]: I1213 02:20:28.292543 2852 scope.go:117] "RemoveContainer" containerID="3ad4259dcea149ddcc3ad0f5a1734d0234f3193d9c4c470350d9d779e0c53907"
Dec 13 02:20:28.303609 env[1736]: time="2024-12-13T02:20:28.303208065Z" level=info msg="CreateContainer within sandbox \"51382992cef408bb533709ef9007ff8d27576d6a8d7aed9940fb9e4d13e7b8cb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 02:20:28.330283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2562744601.mount: Deactivated successfully.
Dec 13 02:20:28.344206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238818672.mount: Deactivated successfully.
Dec 13 02:20:28.346427 env[1736]: time="2024-12-13T02:20:28.346336864Z" level=info msg="CreateContainer within sandbox \"51382992cef408bb533709ef9007ff8d27576d6a8d7aed9940fb9e4d13e7b8cb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"312f2da1acf5ac690da991c8a0e6bf16f443881faaedaec483ff6e9f8bae5644\""
Dec 13 02:20:28.346977 env[1736]: time="2024-12-13T02:20:28.346943764Z" level=info msg="StartContainer for \"312f2da1acf5ac690da991c8a0e6bf16f443881faaedaec483ff6e9f8bae5644\""
Dec 13 02:20:28.377866 systemd[1]: Started cri-containerd-312f2da1acf5ac690da991c8a0e6bf16f443881faaedaec483ff6e9f8bae5644.scope.
Dec 13 02:20:28.469622 env[1736]: time="2024-12-13T02:20:28.469563765Z" level=info msg="StartContainer for \"312f2da1acf5ac690da991c8a0e6bf16f443881faaedaec483ff6e9f8bae5644\" returns successfully"
Dec 13 02:20:33.714449 systemd[1]: cri-containerd-82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6.scope: Deactivated successfully.
Dec 13 02:20:33.715480 systemd[1]: cri-containerd-82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6.scope: Consumed 2.058s CPU time.
Dec 13 02:20:33.790284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6-rootfs.mount: Deactivated successfully.
Dec 13 02:20:33.811023 env[1736]: time="2024-12-13T02:20:33.810968217Z" level=info msg="shim disconnected" id=82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6
Dec 13 02:20:33.811023 env[1736]: time="2024-12-13T02:20:33.811015693Z" level=warning msg="cleaning up after shim disconnected" id=82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6 namespace=k8s.io
Dec 13 02:20:33.811023 env[1736]: time="2024-12-13T02:20:33.811028474Z" level=info msg="cleaning up dead shim"
Dec 13 02:20:33.825131 env[1736]: time="2024-12-13T02:20:33.825080715Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5723 runtime=io.containerd.runc.v2\n"
Dec 13 02:20:34.312158 kubelet[2852]: I1213 02:20:34.312063 2852 scope.go:117] "RemoveContainer" containerID="82ab37ab04d28243591b9f56657f368b8fabc8320a461810502ee50b266ae3d6"
Dec 13 02:20:34.318647 env[1736]: time="2024-12-13T02:20:34.318599014Z" level=info msg="CreateContainer within sandbox \"f50b548d548e09609a94b94edc6dbafb64525e87127da6d3b0ea8d2953fbcf43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 02:20:34.342180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736613117.mount: Deactivated successfully.
Dec 13 02:20:34.354737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4002019139.mount: Deactivated successfully.
Dec 13 02:20:34.360205 env[1736]: time="2024-12-13T02:20:34.360152117Z" level=info msg="CreateContainer within sandbox \"f50b548d548e09609a94b94edc6dbafb64525e87127da6d3b0ea8d2953fbcf43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"aae6d74a50435491c75496564a0f417aaa81efbab342affc46c62c273aed3a95\""
Dec 13 02:20:34.361338 env[1736]: time="2024-12-13T02:20:34.361305813Z" level=info msg="StartContainer for \"aae6d74a50435491c75496564a0f417aaa81efbab342affc46c62c273aed3a95\""
Dec 13 02:20:34.393797 systemd[1]: Started cri-containerd-aae6d74a50435491c75496564a0f417aaa81efbab342affc46c62c273aed3a95.scope.
Dec 13 02:20:34.468142 env[1736]: time="2024-12-13T02:20:34.468016622Z" level=info msg="StartContainer for \"aae6d74a50435491c75496564a0f417aaa81efbab342affc46c62c273aed3a95\" returns successfully"
Dec 13 02:20:36.187310 kubelet[2852]: E1213 02:20:36.187241 2852 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-93?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"