Dec 13 02:18:33.078801 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:18:33.078837 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:18:33.078852 kernel: BIOS-provided physical RAM map: Dec 13 02:18:33.078862 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 02:18:33.078871 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 02:18:33.078881 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 02:18:33.078895 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Dec 13 02:18:33.078905 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Dec 13 02:18:33.078914 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Dec 13 02:18:33.078925 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 02:18:33.078935 kernel: NX (Execute Disable) protection: active Dec 13 02:18:33.078944 kernel: SMBIOS 2.7 present. Dec 13 02:18:33.078953 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Dec 13 02:18:33.078963 kernel: Hypervisor detected: KVM Dec 13 02:18:33.078978 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:18:33.078989 kernel: kvm-clock: cpu 0, msr 5419b001, primary cpu clock Dec 13 02:18:33.079001 kernel: kvm-clock: using sched offset of 7560651533 cycles Dec 13 02:18:33.079012 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:18:33.079023 kernel: tsc: Detected 2499.998 MHz processor Dec 13 02:18:33.079035 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:18:33.079049 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:18:33.079060 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Dec 13 02:18:33.079072 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:18:33.079082 kernel: Using GB pages for direct mapping Dec 13 02:18:33.079092 kernel: ACPI: Early table checksum verification disabled Dec 13 02:18:33.079103 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Dec 13 02:18:33.079115 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Dec 13 02:18:33.079126 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 02:18:33.079136 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Dec 13 02:18:33.079150 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Dec 13 02:18:33.079161 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 02:18:33.087328 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 02:18:33.087368 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Dec 13 02:18:33.087381 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 
02:18:33.087392 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Dec 13 02:18:33.087403 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Dec 13 02:18:33.087411 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Dec 13 02:18:33.087423 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Dec 13 02:18:33.087430 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Dec 13 02:18:33.087437 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Dec 13 02:18:33.087448 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Dec 13 02:18:33.087456 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Dec 13 02:18:33.087463 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Dec 13 02:18:33.087471 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Dec 13 02:18:33.087481 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Dec 13 02:18:33.087489 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Dec 13 02:18:33.087497 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Dec 13 02:18:33.087504 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 02:18:33.087512 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 02:18:33.087519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Dec 13 02:18:33.087527 kernel: NUMA: Initialized distance table, cnt=1 Dec 13 02:18:33.087534 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Dec 13 02:18:33.087544 kernel: Zone ranges: Dec 13 02:18:33.087552 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:18:33.087560 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Dec 13 02:18:33.087568 kernel: Normal empty Dec 13 02:18:33.087576 kernel: Movable zone start for each node Dec 13 02:18:33.087584 kernel: Early memory node ranges Dec 13 02:18:33.087591 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 02:18:33.087599 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Dec 13 02:18:33.087606 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Dec 13 02:18:33.087616 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:18:33.087624 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 02:18:33.087631 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Dec 13 02:18:33.087638 kernel: ACPI: PM-Timer IO Port: 0xb008 Dec 13 02:18:33.087646 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:18:33.087653 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Dec 13 02:18:33.087661 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:18:33.087668 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:18:33.087676 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:18:33.087685 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:18:33.087693 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:18:33.087700 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 02:18:33.087708 kernel: TSC deadline timer available Dec 13 02:18:33.087715 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 02:18:33.087722 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Dec 13 02:18:33.087730 kernel: Booting 
paravirtualized kernel on KVM Dec 13 02:18:33.087738 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:18:33.087745 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Dec 13 02:18:33.087755 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576 Dec 13 02:18:33.087763 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152 Dec 13 02:18:33.087770 kernel: pcpu-alloc: [0] 0 1 Dec 13 02:18:33.087777 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0 Dec 13 02:18:33.087785 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:18:33.087792 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:18:33.087800 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Dec 13 02:18:33.087807 kernel: Policy zone: DMA32 Dec 13 02:18:33.087816 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:18:33.087827 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:18:33.087834 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 02:18:33.087842 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 02:18:33.087849 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:18:33.087857 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123080K reserved, 0K cma-reserved) Dec 13 02:18:33.087865 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 02:18:33.087872 kernel: Kernel/User page tables isolation: enabled Dec 13 02:18:33.087880 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:18:33.087889 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:18:33.087896 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:18:33.087905 kernel: rcu: RCU event tracing is enabled. Dec 13 02:18:33.087912 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 02:18:33.087920 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:18:33.087927 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:18:33.087935 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 02:18:33.087942 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 02:18:33.087950 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Dec 13 02:18:33.087959 kernel: random: crng init done Dec 13 02:18:33.087967 kernel: Console: colour VGA+ 80x25 Dec 13 02:18:33.087975 kernel: printk: console [ttyS0] enabled Dec 13 02:18:33.087982 kernel: ACPI: Core revision 20210730 Dec 13 02:18:33.087990 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Dec 13 02:18:33.087997 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:18:33.088005 kernel: x2apic enabled Dec 13 02:18:33.088012 kernel: Switched APIC routing to physical x2apic. 
Dec 13 02:18:33.088020 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 02:18:33.088029 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 13 02:18:33.088037 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 02:18:33.088044 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 02:18:33.088052 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:18:33.088069 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 02:18:33.088079 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:18:33.088087 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:18:33.088095 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Dec 13 02:18:33.088102 kernel: RETBleed: Vulnerable Dec 13 02:18:33.088110 kernel: Speculative Store Bypass: Vulnerable Dec 13 02:18:33.088118 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 02:18:33.088125 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 02:18:33.088133 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 02:18:33.088141 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:18:33.088151 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:18:33.088159 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:18:33.088167 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Dec 13 02:18:33.088194 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Dec 13 02:18:33.088208 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 02:18:33.088224 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 02:18:33.088232 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 02:18:33.088240 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Dec 13 02:18:33.088248 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:18:33.088256 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Dec 13 02:18:33.088264 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Dec 13 02:18:33.088271 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Dec 13 02:18:33.088279 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Dec 13 02:18:33.088287 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Dec 13 02:18:33.088294 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Dec 13 02:18:33.088302 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. Dec 13 02:18:33.088310 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:18:33.088320 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:18:33.088328 kernel: LSM: Security Framework initializing Dec 13 02:18:33.088336 kernel: SELinux: Initializing. 
Dec 13 02:18:33.088343 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 02:18:33.088351 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 02:18:33.088359 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 02:18:33.088367 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 02:18:33.088375 kernel: signal: max sigframe size: 3632 Dec 13 02:18:33.088383 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:18:33.088391 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 02:18:33.088401 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:18:33.088408 kernel: x86: Booting SMP configuration: Dec 13 02:18:33.088416 kernel: .... node #0, CPUs: #1 Dec 13 02:18:33.088424 kernel: kvm-clock: cpu 1, msr 5419b041, secondary cpu clock Dec 13 02:18:33.088432 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0 Dec 13 02:18:33.088440 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Dec 13 02:18:33.088449 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Dec 13 02:18:33.088457 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 02:18:33.088465 kernel: smpboot: Max logical packages: 1 Dec 13 02:18:33.088475 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 13 02:18:33.088483 kernel: devtmpfs: initialized Dec 13 02:18:33.088491 kernel: x86/mm: Memory block size: 128MB Dec 13 02:18:33.088499 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:18:33.088507 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 02:18:33.088515 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:18:33.088522 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:18:33.088531 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:18:33.088538 kernel: audit: type=2000 audit(1734056312.246:1): state=initialized audit_enabled=0 res=1 Dec 13 02:18:33.088548 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:18:33.088556 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:18:33.088564 kernel: cpuidle: using governor menu Dec 13 02:18:33.088572 kernel: ACPI: bus type PCI registered Dec 13 02:18:33.088580 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:18:33.088588 kernel: dca service started, version 1.12.1 Dec 13 02:18:33.088596 kernel: PCI: Using configuration type 1 for base access Dec 13 02:18:33.088604 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 02:18:33.088612 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:18:33.088622 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:18:33.088629 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:18:33.088637 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:18:33.088645 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:18:33.088653 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:18:33.088661 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:18:33.088669 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:18:33.088677 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:18:33.088685 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Dec 13 02:18:33.088695 kernel: ACPI: Interpreter enabled Dec 13 02:18:33.088702 kernel: ACPI: PM: (supports S0 S5) Dec 13 02:18:33.088710 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:18:33.088718 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:18:33.088726 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Dec 13 02:18:33.088734 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:18:33.088887 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:18:33.089038 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. Dec 13 02:18:33.089053 kernel: acpiphp: Slot [3] registered Dec 13 02:18:33.089061 kernel: acpiphp: Slot [4] registered Dec 13 02:18:33.089069 kernel: acpiphp: Slot [5] registered Dec 13 02:18:33.089078 kernel: acpiphp: Slot [6] registered Dec 13 02:18:33.089086 kernel: acpiphp: Slot [7] registered Dec 13 02:18:33.089093 kernel: acpiphp: Slot [8] registered Dec 13 02:18:33.089101 kernel: acpiphp: Slot [9] registered Dec 13 02:18:33.089109 kernel: acpiphp: Slot [10] registered Dec 13 02:18:33.089117 kernel: acpiphp: Slot [11] registered Dec 13 02:18:33.089127 kernel: acpiphp: Slot [12] registered Dec 13 02:18:33.089135 kernel: acpiphp: Slot [13] registered Dec 13 02:18:33.089143 kernel: acpiphp: Slot [14] registered Dec 13 02:18:33.089151 kernel: acpiphp: Slot [15] registered Dec 13 02:18:33.089159 kernel: acpiphp: Slot [16] registered Dec 13 02:18:33.089167 kernel: acpiphp: Slot [17] registered Dec 13 02:18:33.089208 kernel: acpiphp: Slot [18] registered Dec 13 02:18:33.089220 kernel: acpiphp: Slot [19] registered Dec 13 02:18:33.089232 kernel: acpiphp: Slot [20] registered Dec 13 02:18:33.089247 kernel: acpiphp: Slot [21] registered Dec 13 02:18:33.089259 kernel: acpiphp: Slot [22] registered Dec 13 02:18:33.089269 kernel: acpiphp: Slot [23] registered Dec 13 02:18:33.089281 kernel: acpiphp: Slot [24] registered Dec 13 02:18:33.089294 kernel: acpiphp: Slot [25] registered Dec 13 02:18:33.089306 kernel: acpiphp: Slot [26] registered Dec 13 02:18:33.089318 kernel: acpiphp: Slot [27] registered Dec 13 02:18:33.089332 kernel: acpiphp: Slot [28] registered Dec 13 02:18:33.089345 kernel: acpiphp: Slot [29] registered Dec 13 02:18:33.089358 kernel: acpiphp: Slot [30] registered Dec 13 02:18:33.090377 kernel: acpiphp: Slot [31] registered Dec 13 02:18:33.090392 kernel: PCI host bridge to bus 0000:00 Dec 13 02:18:33.090539 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:18:33.090646 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 02:18:33.090757 kernel: pci_bus 0000:00: root bus resource 
[mem 0x000a0000-0x000bffff window] Dec 13 02:18:33.090858 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Dec 13 02:18:33.090960 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:18:33.091091 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Dec 13 02:18:33.091251 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Dec 13 02:18:33.091375 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Dec 13 02:18:33.091490 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Dec 13 02:18:33.091602 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Dec 13 02:18:33.091713 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Dec 13 02:18:33.091825 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Dec 13 02:18:33.091939 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Dec 13 02:18:33.092052 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Dec 13 02:18:33.092163 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Dec 13 02:18:33.092297 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Dec 13 02:18:33.092428 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Dec 13 02:18:33.092551 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Dec 13 02:18:33.092675 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Dec 13 02:18:33.092904 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 02:18:33.093089 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 02:18:33.093374 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Dec 13 02:18:33.093528 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 02:18:33.093662 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Dec 13 02:18:33.093683 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 02:18:33.093703 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:18:33.093718 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:18:33.093733 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:18:33.093747 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Dec 13 02:18:33.093762 kernel: iommu: Default domain type: Translated Dec 13 02:18:33.093777 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:18:33.093972 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Dec 13 02:18:33.094104 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 02:18:33.094249 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Dec 13 02:18:33.094272 kernel: vgaarb: loaded Dec 13 02:18:33.094288 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:18:33.094304 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:18:33.094318 kernel: PTP clock support registered Dec 13 02:18:33.094333 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:18:33.094348 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:18:33.094363 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 02:18:33.094378 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Dec 13 02:18:33.094395 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Dec 13 02:18:33.094411 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Dec 13 02:18:33.094426 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:18:33.094441 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:18:33.094456 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:18:33.094472 kernel: pnp: PnP ACPI init Dec 13 02:18:33.094486 kernel: pnp: PnP ACPI: found 5 devices Dec 13 02:18:33.094500 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:18:33.094515 kernel: NET: Registered PF_INET protocol family Dec 13 02:18:33.094532 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 02:18:33.094547 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 02:18:33.094563 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:18:33.094578 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 02:18:33.094592 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 02:18:33.094607 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 02:18:33.094622 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 02:18:33.094637 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 02:18:33.094652 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:18:33.094670 kernel: NET: Registered PF_XDP protocol family Dec 13 02:18:33.094799 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:18:33.094916 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:18:33.095030 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:18:33.095143 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Dec 13 02:18:33.095284 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Dec 13 02:18:33.095415 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds Dec 13 02:18:33.095438 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:18:33.095454 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 02:18:33.095469 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 02:18:33.095484 kernel: clocksource: Switched to clocksource tsc Dec 13 02:18:33.095499 kernel: Initialise system trusted keyrings Dec 13 02:18:33.095514 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 02:18:33.095529 kernel: Key type asymmetric registered Dec 13 02:18:33.095543 kernel: Asymmetric key parser 'x509' registered Dec 13 02:18:33.095557 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:18:33.095573 kernel: io scheduler mq-deadline registered Dec 13 02:18:33.095589 kernel: io scheduler kyber registered Dec 13 02:18:33.095603 kernel: io scheduler bfq registered Dec 13 
02:18:33.095618 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:18:33.095634 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:18:33.095649 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:18:33.095664 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:18:33.095679 kernel: i8042: Warning: Keylock active Dec 13 02:18:33.095693 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:18:33.095711 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:18:33.095845 kernel: rtc_cmos 00:00: RTC can wake from S4 Dec 13 02:18:33.095964 kernel: rtc_cmos 00:00: registered as rtc0 Dec 13 02:18:33.096080 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:18:32 UTC (1734056312) Dec 13 02:18:33.096204 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Dec 13 02:18:33.096220 kernel: intel_pstate: CPU model not supported Dec 13 02:18:33.096233 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:18:33.096245 kernel: Segment Routing with IPv6 Dec 13 02:18:33.096262 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:18:33.096275 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:18:33.096288 kernel: Key type dns_resolver registered Dec 13 02:18:33.096360 kernel: IPI shorthand broadcast: enabled Dec 13 02:18:33.096375 kernel: sched_clock: Marking stable (471568225, 294232682)->(848841768, -83040861) Dec 13 02:18:33.096387 kernel: registered taskstats version 1 Dec 13 02:18:33.096400 kernel: Loading compiled-in X.509 certificates Dec 13 02:18:33.096413 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:18:33.096426 kernel: Key type .fscrypt registered Dec 13 02:18:33.096441 kernel: Key type fscrypt-provisioning registered Dec 13 02:18:33.096454 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 02:18:33.096468 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:18:33.096481 kernel: ima: No architecture policies found Dec 13 02:18:33.096493 kernel: clk: Disabling unused clocks Dec 13 02:18:33.096507 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 02:18:33.096520 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:18:33.096532 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:18:33.096545 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:18:33.096560 kernel: Run /init as init process Dec 13 02:18:33.096572 kernel: with arguments: Dec 13 02:18:33.096585 kernel: /init Dec 13 02:18:33.096597 kernel: with environment: Dec 13 02:18:33.096609 kernel: HOME=/ Dec 13 02:18:33.096622 kernel: TERM=linux Dec 13 02:18:33.096635 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:18:33.096651 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:18:33.096670 systemd[1]: Detected virtualization amazon. Dec 13 02:18:33.096683 systemd[1]: Detected architecture x86-64. Dec 13 02:18:33.096695 systemd[1]: Running in initrd. Dec 13 02:18:33.096708 systemd[1]: No hostname configured, using default hostname. Dec 13 02:18:33.096736 systemd[1]: Hostname set to <localhost>.
Dec 13 02:18:33.096755 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:18:33.096769 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:18:33.096782 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:18:33.096796 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:18:33.096809 systemd[1]: Reached target cryptsetup.target. Dec 13 02:18:33.096822 systemd[1]: Reached target paths.target. Dec 13 02:18:33.096836 systemd[1]: Reached target slices.target. Dec 13 02:18:33.096849 systemd[1]: Reached target swap.target. Dec 13 02:18:33.096862 systemd[1]: Reached target timers.target. Dec 13 02:18:33.096879 systemd[1]: Listening on iscsid.socket. Dec 13 02:18:33.096892 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:18:33.096958 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:18:33.096974 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:18:33.096988 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:18:33.097005 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:18:33.097019 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:18:33.097032 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:18:33.097049 systemd[1]: Reached target sockets.target. Dec 13 02:18:33.097062 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:18:33.097075 systemd[1]: Finished network-cleanup.service. Dec 13 02:18:33.097089 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:18:33.097102 systemd[1]: Starting systemd-journald.service... Dec 13 02:18:33.097115 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:18:33.097129 systemd[1]: Starting systemd-resolved.service... Dec 13 02:18:33.097142 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:18:33.097156 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:18:33.097190 kernel: audit: type=1130 audit(1734056313.075:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.097205 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:18:33.097219 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:18:33.097234 kernel: audit: type=1130 audit(1734056313.080:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.097255 systemd-journald[185]: Journal started Dec 13 02:18:33.097337 systemd-journald[185]: Runtime Journal (/run/log/journal/ec20a91308cf925fc71f7c99897b3062) is 4.8M, max 38.7M, 33.9M free. Dec 13 02:18:33.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.120216 systemd[1]: Started systemd-journald.service. Dec 13 02:18:33.117340 systemd-modules-load[186]: Inserted module 'overlay' Dec 13 02:18:33.268140 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 02:18:33.268199 kernel: Bridge firewalling registered Dec 13 02:18:33.268223 kernel: SCSI subsystem initialized Dec 13 02:18:33.268242 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:18:33.268258 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:18:33.268274 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:18:33.268290 kernel: audit: type=1130 audit(1734056313.254:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.268312 kernel: audit: type=1130 audit(1734056313.255:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.138486 systemd-resolved[187]: Positive Trust Anchors: Dec 13 02:18:33.275599 kernel: audit: type=1130 audit(1734056313.268:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.275628 kernel: audit: type=1130 audit(1734056313.274:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.138498 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:18:33.138551 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:18:33.143058 systemd-resolved[187]: Defaulting to hostname 'linux'. Dec 13 02:18:33.177269 systemd-modules-load[186]: Inserted module 'br_netfilter' Dec 13 02:18:33.218412 systemd-modules-load[186]: Inserted module 'dm_multipath' Dec 13 02:18:33.300065 kernel: audit: type=1130 audit(1734056313.287:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:33.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.255144 systemd[1]: Started systemd-resolved.service. Dec 13 02:18:33.255478 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:18:33.268519 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:18:33.275014 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:18:33.290899 systemd[1]: Reached target nss-lookup.target. Dec 13 02:18:33.307738 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:18:33.310803 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:18:33.340972 kernel: audit: type=1130 audit(1734056313.333:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.333619 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:18:33.344830 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:18:33.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.352506 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:18:33.354455 kernel: audit: type=1130 audit(1734056313.346:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.365553 dracut-cmdline[206]: dracut-dracut-053 Dec 13 02:18:33.368104 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:18:33.467204 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:18:33.495202 kernel: iscsi: registered transport (tcp) Dec 13 02:18:33.527462 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:18:33.527542 kernel: QLogic iSCSI HBA Driver Dec 13 02:18:33.576281 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:18:33.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:33.579080 systemd[1]: Starting dracut-pre-udev.service... 
Dec 13 02:18:33.640233 kernel: raid6: avx512x4 gen() 14726 MB/s Dec 13 02:18:33.658250 kernel: raid6: avx512x4 xor() 5958 MB/s Dec 13 02:18:33.676239 kernel: raid6: avx512x2 gen() 10139 MB/s Dec 13 02:18:33.694241 kernel: raid6: avx512x2 xor() 16184 MB/s Dec 13 02:18:33.717515 kernel: raid6: avx512x1 gen() 14885 MB/s Dec 13 02:18:33.734355 kernel: raid6: avx512x1 xor() 12517 MB/s Dec 13 02:18:33.751222 kernel: raid6: avx2x4 gen() 12361 MB/s Dec 13 02:18:33.768232 kernel: raid6: avx2x4 xor() 6388 MB/s Dec 13 02:18:33.785234 kernel: raid6: avx2x2 gen() 12887 MB/s Dec 13 02:18:33.802247 kernel: raid6: avx2x2 xor() 15151 MB/s Dec 13 02:18:33.819308 kernel: raid6: avx2x1 gen() 12133 MB/s Dec 13 02:18:33.836221 kernel: raid6: avx2x1 xor() 13391 MB/s Dec 13 02:18:33.853234 kernel: raid6: sse2x4 gen() 6599 MB/s Dec 13 02:18:33.870258 kernel: raid6: sse2x4 xor() 5393 MB/s Dec 13 02:18:33.887218 kernel: raid6: sse2x2 gen() 8639 MB/s Dec 13 02:18:33.904233 kernel: raid6: sse2x2 xor() 4948 MB/s Dec 13 02:18:33.921229 kernel: raid6: sse2x1 gen() 8207 MB/s Dec 13 02:18:33.940926 kernel: raid6: sse2x1 xor() 4083 MB/s Dec 13 02:18:33.941035 kernel: raid6: using algorithm avx512x1 gen() 14885 MB/s Dec 13 02:18:33.941097 kernel: raid6: .... xor() 12517 MB/s, rmw enabled Dec 13 02:18:33.942755 kernel: raid6: using avx512x2 recovery algorithm Dec 13 02:18:33.960251 kernel: xor: automatically using best checksumming function avx Dec 13 02:18:34.093328 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:18:34.104045 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:18:34.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:34.106000 audit: BPF prog-id=7 op=LOAD Dec 13 02:18:34.106000 audit: BPF prog-id=8 op=LOAD Dec 13 02:18:34.107337 systemd[1]: Starting systemd-udevd.service... Dec 13 02:18:34.124600 systemd-udevd[384]: Using default interface naming scheme 'v252'. Dec 13 02:18:34.132128 systemd[1]: Started systemd-udevd.service. Dec 13 02:18:34.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:34.135319 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:18:34.162915 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Dec 13 02:18:34.205973 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:18:34.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:34.208567 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:18:34.263158 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:18:34.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:34.326277 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:18:34.351937 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 02:18:34.352008 kernel: AES CTR mode by8 optimization enabled Dec 13 02:18:34.384193 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 02:18:34.406453 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 02:18:34.406683 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Dec 13 02:18:34.407132 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:61:59:86:b7:43 Dec 13 02:18:34.414338 (udev-worker)[444]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:18:34.602273 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 02:18:34.602516 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Dec 13 02:18:34.602539 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 02:18:34.602676 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:18:34.602699 kernel: GPT:9289727 != 16777215 Dec 13 02:18:34.602715 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:18:34.602741 kernel: GPT:9289727 != 16777215 Dec 13 02:18:34.602757 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:18:34.602773 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:18:34.602787 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442) Dec 13 02:18:34.601392 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:18:34.617645 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:18:34.620141 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:18:34.631664 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:18:34.653731 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:18:34.656582 systemd[1]: Starting disk-uuid.service... Dec 13 02:18:34.668000 disk-uuid[593]: Primary Header is updated. Dec 13 02:18:34.668000 disk-uuid[593]: Secondary Entries is updated. Dec 13 02:18:34.668000 disk-uuid[593]: Secondary Header is updated. Dec 13 02:18:34.674201 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:18:34.682201 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:18:34.688200 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:18:35.687374 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 02:18:35.687446 disk-uuid[594]: The operation has completed successfully. Dec 13 02:18:35.838100 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:18:35.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:35.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:35.838280 systemd[1]: Finished disk-uuid.service. Dec 13 02:18:35.840157 systemd[1]: Starting verity-setup.service... Dec 13 02:18:35.859466 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:18:35.945483 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:18:35.949148 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:18:35.950976 systemd[1]: Finished verity-setup.service. Dec 13 02:18:35.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:36.034192 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:18:36.035011 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:18:36.036585 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:18:36.038755 systemd[1]: Starting ignition-setup.service... Dec 13 02:18:36.040950 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:18:36.059885 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:18:36.059946 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:18:36.059958 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:18:36.071288 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:18:36.104228 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:18:36.140064 systemd[1]: Finished ignition-setup.service. Dec 13 02:18:36.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:36.143117 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:18:36.164962 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:18:36.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:36.167000 audit: BPF prog-id=9 op=LOAD Dec 13 02:18:36.168299 systemd[1]: Starting systemd-networkd.service... Dec 13 02:18:36.202334 systemd-networkd[1107]: lo: Link UP Dec 13 02:18:36.202345 systemd-networkd[1107]: lo: Gained carrier Dec 13 02:18:36.208596 systemd-networkd[1107]: Enumeration completed Dec 13 02:18:36.209624 systemd-networkd[1107]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:18:36.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:36.222324 systemd[1]: Started systemd-networkd.service. Dec 13 02:18:36.225754 systemd[1]: Reached target network.target. Dec 13 02:18:36.230962 systemd-networkd[1107]: eth0: Link UP Dec 13 02:18:36.230973 systemd-networkd[1107]: eth0: Gained carrier Dec 13 02:18:36.233088 systemd[1]: Starting iscsiuio.service... Dec 13 02:18:36.244775 systemd[1]: Started iscsiuio.service. Dec 13 02:18:36.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:36.247647 systemd[1]: Starting iscsid.service... Dec 13 02:18:36.251767 systemd-networkd[1107]: eth0: DHCPv4 address 172.31.31.142/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:18:36.257281 iscsid[1112]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:18:36.257281 iscsid[1112]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Dec 13 02:18:36.257281 iscsid[1112]: Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 02:18:36.257281 iscsid[1112]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:18:36.257281 iscsid[1112]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:18:36.257281 iscsid[1112]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:18:36.257281 iscsid[1112]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:18:36.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:36.257376 systemd[1]: Started iscsid.service. Dec 13 02:18:36.259138 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:18:36.279611 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:18:36.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:36.281446 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:18:36.283200 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:18:36.284330 systemd[1]: Reached target remote-fs.target. Dec 13 02:18:36.287653 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:18:36.300085 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:18:36.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:36.905646 ignition[1085]: Ignition 2.14.0 Dec 13 02:18:36.905662 ignition[1085]: Stage: fetch-offline Dec 13 02:18:36.905811 ignition[1085]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:36.905856 ignition[1085]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:18:36.934014 ignition[1085]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:18:36.934380 ignition[1085]: Ignition finished successfully Dec 13 02:18:36.939524 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:18:36.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:36.942681 systemd[1]: Starting ignition-fetch.service...
Dec 13 02:18:36.957343 ignition[1131]: Ignition 2.14.0 Dec 13 02:18:36.957357 ignition[1131]: Stage: fetch Dec 13 02:18:36.957553 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:36.957588 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:18:36.968524 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:18:36.970014 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:18:37.009347 ignition[1131]: INFO : PUT result: OK Dec 13 02:18:37.012537 ignition[1131]: DEBUG : parsed url from cmdline: "" Dec 13 02:18:37.012537 ignition[1131]: INFO : no config URL provided Dec 13 02:18:37.012537 ignition[1131]: INFO : reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:18:37.012537 ignition[1131]: INFO : no config at "/usr/lib/ignition/user.ign" Dec 13 02:18:37.017272 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:18:37.017272 ignition[1131]: INFO : PUT result: OK Dec 13 02:18:37.017272 ignition[1131]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 02:18:37.021795 ignition[1131]: INFO : GET result: OK Dec 13 02:18:37.022872 ignition[1131]: DEBUG : parsing config with SHA512: c470009192c9a600723e573b2cf02acac5fab7136afe0184b91ae3647e634fc31fd45399b5e2a25251cfabbc4bf79d66ed2bcc344de4bb289d1ef9676f243366 Dec 13 02:18:37.030749 unknown[1131]: fetched base config from "system" Dec 13 02:18:37.030766 unknown[1131]: fetched base config from "system" Dec 13 02:18:37.030775 unknown[1131]: fetched user config from "aws" Dec 13 02:18:37.034574 ignition[1131]: fetch: fetch complete Dec 13 02:18:37.034585 ignition[1131]: fetch: fetch passed Dec 13 02:18:37.034651 ignition[1131]: Ignition finished successfully Dec 13 02:18:37.038459 systemd[1]: Finished ignition-fetch.service. Dec 13 02:18:37.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.040481 systemd[1]: Starting ignition-kargs.service... Dec 13 02:18:37.054308 ignition[1137]: Ignition 2.14.0 Dec 13 02:18:37.054605 ignition[1137]: Stage: kargs Dec 13 02:18:37.055371 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:37.055939 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:18:37.063218 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:18:37.064577 ignition[1137]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:18:37.066041 ignition[1137]: INFO : PUT result: OK Dec 13 02:18:37.069508 ignition[1137]: kargs: kargs passed Dec 13 02:18:37.069561 ignition[1137]: Ignition finished successfully Dec 13 02:18:37.072015 systemd[1]: Finished ignition-kargs.service. Dec 13 02:18:37.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.075010 systemd[1]: Starting ignition-disks.service... 
Dec 13 02:18:37.088062 ignition[1143]: Ignition 2.14.0 Dec 13 02:18:37.088074 ignition[1143]: Stage: disks Dec 13 02:18:37.088259 ignition[1143]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:37.088280 ignition[1143]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:18:37.106988 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:18:37.108339 ignition[1143]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:18:37.110319 ignition[1143]: INFO : PUT result: OK Dec 13 02:18:37.113548 ignition[1143]: disks: disks passed Dec 13 02:18:37.113613 ignition[1143]: Ignition finished successfully Dec 13 02:18:37.116149 systemd[1]: Finished ignition-disks.service. Dec 13 02:18:37.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.117234 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:18:37.120155 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:18:37.121301 systemd[1]: Reached target local-fs.target. Dec 13 02:18:37.124214 systemd[1]: Reached target sysinit.target. Dec 13 02:18:37.125230 systemd[1]: Reached target basic.target. Dec 13 02:18:37.127028 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:18:37.166864 systemd-fsck[1151]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:18:37.172633 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:18:37.181641 kernel: kauditd_printk_skb: 22 callbacks suppressed Dec 13 02:18:37.181668 kernel: audit: type=1130 audit(1734056317.173:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.174656 systemd[1]: Mounting sysroot.mount... Dec 13 02:18:37.191400 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:18:37.192082 systemd[1]: Mounted sysroot.mount. Dec 13 02:18:37.193600 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:18:37.209616 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:18:37.210871 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:18:37.210917 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:18:37.210943 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:18:37.220249 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:18:37.247803 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:18:37.252519 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 02:18:37.260769 initrd-setup-root[1173]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:18:37.269236 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1168) Dec 13 02:18:37.272992 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:18:37.273037 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:18:37.273056 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:18:37.279200 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:18:37.281899 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:18:37.285334 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:18:37.291243 initrd-setup-root[1207]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:18:37.296382 initrd-setup-root[1215]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:18:37.579875 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:18:37.586862 kernel: audit: type=1130 audit(1734056317.579:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.581092 systemd[1]: Starting ignition-mount.service... Dec 13 02:18:37.590637 systemd[1]: Starting sysroot-boot.service... Dec 13 02:18:37.596446 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:18:37.596568 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:18:37.623534 ignition[1234]: INFO : Ignition 2.14.0 Dec 13 02:18:37.624762 ignition[1234]: INFO : Stage: mount Dec 13 02:18:37.625937 ignition[1234]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:37.627438 ignition[1234]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:18:37.632868 systemd[1]: Finished sysroot-boot.service. Dec 13 02:18:37.638319 kernel: audit: type=1130 audit(1734056317.633:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.644293 ignition[1234]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:18:37.645920 ignition[1234]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:18:37.648009 ignition[1234]: INFO : PUT result: OK Dec 13 02:18:37.651358 ignition[1234]: INFO : mount: mount passed Dec 13 02:18:37.652286 ignition[1234]: INFO : Ignition finished successfully Dec 13 02:18:37.654327 systemd[1]: Finished ignition-mount.service. Dec 13 02:18:37.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.659726 systemd[1]: Starting ignition-files.service... 
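The "cut: /sysroot/etc/passwd: No such file or directory" message (and the matching ones for group, shadow, and gshadow) comes from initrd-setup-root running cut(1) against account databases that do not exist yet on a first boot; the service still finishes successfully later in the log, so these appear to be harmless. A rough Python equivalent of `cut -d: -f1` with the same failure mode (the field choice is illustrative):

    def cut_field(path, field=1, delim=":"):
        # Mirrors `cut -d: -f1 <path>`: returns one field per input line, and
        # raises FileNotFoundError ("No such file or directory") when the file
        # is absent, which is exactly what the four messages here show.
        with open(path) as f:
            return [line.rstrip("\n").split(delim)[field - 1] for line in f]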
Dec 13 02:18:37.665445 kernel: audit: type=1130 audit(1734056317.656:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:37.670702 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:18:37.687553 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1243) Dec 13 02:18:37.687607 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:18:37.687627 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:18:37.688516 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:18:37.695210 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:18:37.699287 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:18:37.714213 ignition[1262]: INFO : Ignition 2.14.0 Dec 13 02:18:37.714213 ignition[1262]: INFO : Stage: files Dec 13 02:18:37.716672 ignition[1262]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:37.716672 ignition[1262]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:18:37.729940 ignition[1262]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:18:37.732503 ignition[1262]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:18:37.734787 ignition[1262]: INFO : PUT result: OK Dec 13 02:18:37.740591 ignition[1262]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:18:37.745647 ignition[1262]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:18:37.745647 ignition[1262]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:18:37.768603 ignition[1262]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:18:37.770799 ignition[1262]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:18:37.773794 unknown[1262]: wrote ssh authorized keys file for user: core Dec 13 02:18:37.775641 ignition[1262]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:18:37.789696 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:18:37.798125 ignition[1262]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:18:37.900909 ignition[1262]: INFO : GET result: OK Dec 13 02:18:38.061683 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:18:38.064591 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:18:38.066980 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:18:38.066980 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:18:38.073841 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:18:38.082901 ignition[1262]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3459162618" Dec 13 
02:18:38.087144 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1267) Dec 13 02:18:38.087172 ignition[1262]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3459162618": device or resource busy Dec 13 02:18:38.087172 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3459162618", trying btrfs: device or resource busy Dec 13 02:18:38.087172 ignition[1262]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3459162618" Dec 13 02:18:38.087172 ignition[1262]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3459162618" Dec 13 02:18:38.094561 ignition[1262]: INFO : op(3): [started] unmounting "/mnt/oem3459162618" Dec 13 02:18:38.094561 ignition[1262]: INFO : op(3): [finished] unmounting "/mnt/oem3459162618" Dec 13 02:18:38.094561 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:18:38.099183 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:18:38.099183 ignition[1262]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 02:18:38.214370 systemd-networkd[1107]: eth0: Gained IPv6LL Dec 13 02:18:38.551579 ignition[1262]: INFO : GET result: OK Dec 13 02:18:38.710761 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:18:38.715825 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:18:38.718333 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:18:38.718333 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:18:38.726449 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:18:38.728625 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:18:38.728625 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:18:38.733115 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:18:38.733115 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:18:38.733115 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:18:38.733115 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:18:38.733115 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:18:38.733115 ignition[1262]: INFO : oem config not found in "/usr/share/oem", 
looking on oem partition Dec 13 02:18:38.745785 ignition[1262]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem674978224" Dec 13 02:18:38.747243 ignition[1262]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem674978224": device or resource busy Dec 13 02:18:38.747243 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem674978224", trying btrfs: device or resource busy Dec 13 02:18:38.747243 ignition[1262]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem674978224" Dec 13 02:18:38.753829 ignition[1262]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem674978224" Dec 13 02:18:38.753829 ignition[1262]: INFO : op(6): [started] unmounting "/mnt/oem674978224" Dec 13 02:18:38.753829 ignition[1262]: INFO : op(6): [finished] unmounting "/mnt/oem674978224" Dec 13 02:18:38.753829 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:18:38.753829 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:18:38.753829 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:18:38.768027 ignition[1262]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1129491017" Dec 13 02:18:38.769701 ignition[1262]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1129491017": device or resource busy Dec 13 02:18:38.769701 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1129491017", trying btrfs: device or resource busy Dec 13 02:18:38.769701 ignition[1262]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1129491017" Dec 13 02:18:38.769701 ignition[1262]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1129491017" Dec 13 02:18:38.769701 ignition[1262]: INFO : op(9): [started] unmounting "/mnt/oem1129491017" Dec 13 02:18:38.769701 ignition[1262]: INFO : op(9): [finished] unmounting "/mnt/oem1129491017" Dec 13 02:18:38.769701 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:18:38.769701 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:18:38.784779 ignition[1262]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:18:38.772627 systemd[1]: mnt-oem1129491017.mount: Deactivated successfully. 
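The three op pairs above (ops 1/2, 4/5, and 7/8) all show the same fallback: Ignition first tries to mount /dev/disk/by-label/OEM as ext4, gets "device or resource busy" (plausibly because btrfs had already claimed the device, per the "scanned by ignition" kernel line), then retries the same device as btrfs, which succeeds. A sketch of that try-each-type pattern, with paths and labels taken from the log and mount(8) semantics assumed:

    import subprocess, tempfile

    def mount_with_fallback(device="/dev/disk/by-label/OEM",
                            fstypes=("ext4", "btrfs")):
        # Try each filesystem type in turn, as ops (1) and (2) do above.
        mnt = tempfile.mkdtemp(prefix="oem")  # analogous to /mnt/oem3459162618
        last = None
        for fstype in fstypes:
            r = subprocess.run(["mount", "-t", fstype, device, mnt],
                               capture_output=True, text=True)
            if r.returncode == 0:
                return mnt  # first type that mounts wins
            last = r.stderr.strip()  # e.g. "device or resource busy"
        raise RuntimeError(f"all mount attempts failed: {last}")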
Dec 13 02:18:39.172926 ignition[1262]: INFO : GET result: OK Dec 13 02:18:39.532738 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:18:39.532738 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:18:39.538888 ignition[1262]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:18:39.545258 ignition[1262]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1740818467" Dec 13 02:18:39.547650 ignition[1262]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1740818467": device or resource busy Dec 13 02:18:39.547650 ignition[1262]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1740818467", trying btrfs: device or resource busy Dec 13 02:18:39.547650 ignition[1262]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1740818467" Dec 13 02:18:39.547650 ignition[1262]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1740818467" Dec 13 02:18:39.547650 ignition[1262]: INFO : op(c): [started] unmounting "/mnt/oem1740818467" Dec 13 02:18:39.558244 ignition[1262]: INFO : op(c): [finished] unmounting "/mnt/oem1740818467" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(13): [started] processing unit "nvidia.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(13): [finished] processing unit "nvidia.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Dec 13 02:18:39.558244 ignition[1262]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:18:39.595572 ignition[1262]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:18:39.595572 ignition[1262]: INFO : files: op(17): [started] setting preset to enabled for 
"amazon-ssm-agent.service" Dec 13 02:18:39.595572 ignition[1262]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:18:39.595572 ignition[1262]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Dec 13 02:18:39.595572 ignition[1262]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Dec 13 02:18:39.595572 ignition[1262]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:18:39.595572 ignition[1262]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:18:39.559540 systemd[1]: mnt-oem1740818467.mount: Deactivated successfully. Dec 13 02:18:39.615193 ignition[1262]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:18:39.617096 ignition[1262]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:18:39.617096 ignition[1262]: INFO : files: files passed Dec 13 02:18:39.617096 ignition[1262]: INFO : Ignition finished successfully Dec 13 02:18:39.622300 systemd[1]: Finished ignition-files.service. Dec 13 02:18:39.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.628200 kernel: audit: type=1130 audit(1734056319.623:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.633158 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:18:39.634227 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:18:39.636529 systemd[1]: Starting ignition-quench.service... Dec 13 02:18:39.644341 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:18:39.652654 kernel: audit: type=1130 audit(1734056319.645:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.652680 kernel: audit: type=1131 audit(1734056319.645:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.644442 systemd[1]: Finished ignition-quench.service. Dec 13 02:18:39.659027 initrd-setup-root-after-ignition[1287]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:18:39.661480 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:18:39.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:39.663481 systemd[1]: Reached target ignition-complete.target. Dec 13 02:18:39.668499 kernel: audit: type=1130 audit(1734056319.663:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.669303 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:18:39.685328 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:18:39.685444 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:18:39.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.688328 systemd[1]: Reached target initrd-fs.target. Dec 13 02:18:39.697241 kernel: audit: type=1130 audit(1734056319.688:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.697273 kernel: audit: type=1131 audit(1734056319.688:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.697438 systemd[1]: Reached target initrd.target. Dec 13 02:18:39.697587 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:18:39.698748 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:18:39.715605 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:18:39.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.719563 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:18:39.736289 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:18:39.738480 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:18:39.740832 systemd[1]: Stopped target timers.target. Dec 13 02:18:39.743043 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:18:39.744364 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:18:39.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.746885 systemd[1]: Stopped target initrd.target. Dec 13 02:18:39.748864 systemd[1]: Stopped target basic.target. Dec 13 02:18:39.750956 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:18:39.753104 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:18:39.755460 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:18:39.757730 systemd[1]: Stopped target remote-fs.target. Dec 13 02:18:39.759749 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:18:39.761971 systemd[1]: Stopped target sysinit.target. Dec 13 02:18:39.763909 systemd[1]: Stopped target local-fs.target. Dec 13 02:18:39.765909 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:18:39.768059 systemd[1]: Stopped target swap.target. 
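Each unit transition in this teardown is mirrored by a kernel audit record (SERVICE_START/SERVICE_STOP carrying unit= and res= fields). When correlating them with the systemd lines, a small parser helps; this one only handles the key=value shape seen in this log, not general audit output:

    import re

    AUDIT = re.compile(
        r"audit\[\d+\]: (?P<type>\w+) .*?unit=(?P<unit>\S+).*?res=(?P<res>\w+)")

    def parse_audit(line):
        # Pulls the record type, unit, and result out of lines like
        # "audit[1]: SERVICE_STOP ... msg='unit=ignition-files ... res=success'"
        m = AUDIT.search(line)
        return m.groupdict() if m else None

    print(parse_audit("audit[1]: SERVICE_STOP pid=1 "
                      "msg='unit=ignition-files comm=\"systemd\" res=success'"))
    # -> {'type': 'SERVICE_STOP', 'unit': 'ignition-files', 'res': 'success'}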
Dec 13 02:18:39.769941 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:18:39.771538 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:18:39.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.775074 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:18:39.777227 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:18:39.778621 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:18:39.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.781401 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:18:39.782953 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:18:39.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.785695 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:18:39.787086 systemd[1]: Stopped ignition-files.service. Dec 13 02:18:39.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.790733 systemd[1]: Stopping ignition-mount.service... Dec 13 02:18:39.792276 systemd[1]: Stopping iscsiuio.service... Dec 13 02:18:39.795052 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:18:39.796046 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:18:39.796424 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:18:39.800855 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:18:39.801102 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:18:39.817140 ignition[1300]: INFO : Ignition 2.14.0 Dec 13 02:18:39.817140 ignition[1300]: INFO : Stage: umount Dec 13 02:18:39.817140 ignition[1300]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:18:39.817140 ignition[1300]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:18:39.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.831944 ignition[1300]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:18:39.833564 ignition[1300]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:18:39.836809 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:18:39.836974 systemd[1]: Stopped iscsiuio.service. 
Dec 13 02:18:39.839646 ignition[1300]: INFO : PUT result: OK Dec 13 02:18:39.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.847686 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:18:39.848006 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:18:39.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.854215 ignition[1300]: INFO : umount: umount passed Dec 13 02:18:39.855647 ignition[1300]: INFO : Ignition finished successfully Dec 13 02:18:39.856234 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:18:39.856368 systemd[1]: Stopped ignition-mount.service. Dec 13 02:18:39.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.862644 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:18:39.863448 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:18:39.863507 systemd[1]: Stopped ignition-disks.service. Dec 13 02:18:39.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.879008 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:18:39.879218 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:18:39.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.882842 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:18:39.882953 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:18:39.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.885990 systemd[1]: Stopped target network.target. Dec 13 02:18:39.888187 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:18:39.888296 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:18:39.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.892624 systemd[1]: Stopped target paths.target. Dec 13 02:18:39.895714 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:18:39.899238 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:18:39.901308 systemd[1]: Stopped target slices.target. Dec 13 02:18:39.905135 systemd[1]: Stopped target sockets.target. Dec 13 02:18:39.907329 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:18:39.907371 systemd[1]: Closed iscsid.socket. 
Dec 13 02:18:39.917481 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:18:39.917553 systemd[1]: Closed iscsiuio.socket. Dec 13 02:18:39.927460 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:18:39.927555 systemd[1]: Stopped ignition-setup.service. Dec 13 02:18:39.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.930779 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:18:39.931846 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:18:39.934745 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:18:39.934863 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:18:39.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.936234 systemd-networkd[1107]: eth0: DHCPv6 lease lost Dec 13 02:18:39.937905 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:18:39.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.938025 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:18:39.941272 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:18:39.941401 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:18:39.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.946586 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:18:39.947864 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:18:39.951672 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:18:39.951721 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:18:39.953000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:18:39.953000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:18:39.954870 systemd[1]: Stopping network-cleanup.service... Dec 13 02:18:39.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.957264 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:18:39.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.957332 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:18:39.959484 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:18:39.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.959539 systemd[1]: Stopped systemd-sysctl.service. 
Dec 13 02:18:39.961781 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:18:39.961841 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:18:39.964851 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:18:39.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.968004 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:18:39.976824 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:18:39.976950 systemd[1]: Stopped network-cleanup.service. Dec 13 02:18:39.981737 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:18:39.983738 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:18:39.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.985886 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:18:39.985928 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:18:39.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.987008 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:18:39.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.987044 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:18:39.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:39.988059 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:18:39.988104 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:18:39.990284 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:18:39.990322 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:18:39.995844 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:18:39.995942 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:18:39.999470 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:18:40.008929 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:18:40.009001 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 02:18:40.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.016116 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:18:40.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.016296 systemd[1]: Stopped kmod-static-nodes.service. 
Dec 13 02:18:40.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.019108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:18:40.019220 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:18:40.027357 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 02:18:40.028620 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:18:40.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:40.028779 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:18:40.036457 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:18:40.042589 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:18:40.066904 systemd[1]: Switching root. Dec 13 02:18:40.108890 iscsid[1112]: iscsid shutting down. Dec 13 02:18:40.110504 systemd-journald[185]: Received SIGTERM from PID 1 (n/a). Dec 13 02:18:40.110644 systemd-journald[185]: Journal stopped Dec 13 02:18:49.773902 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:18:49.774037 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 02:18:49.774161 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:18:49.774255 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:18:49.774277 kernel: SELinux: policy capability open_perms=1 Dec 13 02:18:49.774295 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:18:49.774314 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:18:49.774336 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:18:49.774354 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:18:49.774386 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:18:49.774403 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:18:49.774426 systemd[1]: Successfully loaded SELinux policy in 130.024ms. Dec 13 02:18:49.774456 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.425ms. Dec 13 02:18:49.774477 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:18:49.774496 systemd[1]: Detected virtualization amazon. Dec 13 02:18:49.774557 systemd[1]: Detected architecture x86-64. Dec 13 02:18:49.774575 systemd[1]: Detected first boot. Dec 13 02:18:49.774593 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:18:49.774612 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
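Note the jump in timestamps at the root switch: the journal stops at 02:18:40.110644 and the next record lands at 02:18:49.773902, a gap of roughly 9.7 seconds covering the switch out of the initrd and early boot of the real root, including the SELinux policy load (130.024 ms by systemd's own accounting) and the /dev, /run, and cgroup relabel. The gap itself is simple to check:

    from datetime import datetime

    stop   = datetime.strptime("02:18:40.110644", "%H:%M:%S.%f")
    resume = datetime.strptime("02:18:49.773902", "%H:%M:%S.%f")
    print((resume - stop).total_seconds())  # -> 9.663258 seconds of switch-root work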
Dec 13 02:18:49.774633 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 02:18:49.774664 kernel: audit: type=1400 audit(1734056322.199:86): avc: denied { associate } for pid=1334 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:18:49.774683 kernel: audit: type=1300 audit(1734056322.199:86): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1317 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.774701 kernel: audit: type=1327 audit(1734056322.199:86): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:18:49.774719 kernel: audit: type=1400 audit(1734056322.202:87): avc: denied { associate } for pid=1334 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 02:18:49.774740 kernel: audit: type=1300 audit(1734056322.202:87): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=1317 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.774757 kernel: audit: type=1307 audit(1734056322.202:87): cwd="/" Dec 13 02:18:49.774778 kernel: audit: type=1302 audit(1734056322.202:87): item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:49.774795 kernel: audit: type=1302 audit(1734056322.202:87): item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:49.774816 kernel: audit: type=1327 audit(1734056322.202:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:18:49.774833 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:18:49.774852 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:18:49.774874 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:18:49.774901 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
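systemd 252 flags two legacy resource directives in locksmithd.service (lines 8 and 9 of that unit, per the warnings) plus a legacy /var/run path in docker.socket, and names the replacements itself: CPUWeight=, MemoryMax=, and /run/docker.sock. A hedged sketch of the mechanical migration; the proportional CPUShares-to-CPUWeight rescale is one common convention (CPUWeight defaults to 100 where CPUShares defaulted to 1024), not something this log prescribes:

    def migrate(line):
        # Rewrites the deprecated directives named in the warnings above.
        if line.startswith("CPUShares="):
            shares = int(line.split("=", 1)[1])
            return f"CPUWeight={max(1, shares * 100 // 1024)}"  # rescaled, not copied
        if line.startswith("MemoryLimit="):
            return "MemoryMax=" + line.split("=", 1)[1]  # same byte-valued semantics
        return line.replace("/var/run/", "/run/")  # the docker.socket path warning

    print(migrate("CPUShares=512"))   # -> CPUWeight=50
    print(migrate("MemoryLimit=1G"))  # -> MemoryMax=1G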
Dec 13 02:18:49.774919 kernel: audit: type=1334 audit(1734056329.515:88): prog-id=12 op=LOAD Dec 13 02:18:49.774936 kernel: audit: type=1334 audit(1734056329.515:89): prog-id=3 op=UNLOAD Dec 13 02:18:49.774952 kernel: audit: type=1334 audit(1734056329.517:90): prog-id=13 op=LOAD Dec 13 02:18:49.774969 kernel: audit: type=1334 audit(1734056329.518:91): prog-id=14 op=LOAD Dec 13 02:18:49.774985 kernel: audit: type=1334 audit(1734056329.518:92): prog-id=4 op=UNLOAD Dec 13 02:18:49.775006 kernel: audit: type=1334 audit(1734056329.518:93): prog-id=5 op=UNLOAD Dec 13 02:18:49.775023 kernel: audit: type=1334 audit(1734056329.521:94): prog-id=15 op=LOAD Dec 13 02:18:49.775039 kernel: audit: type=1334 audit(1734056329.521:95): prog-id=12 op=UNLOAD Dec 13 02:18:49.775054 kernel: audit: type=1334 audit(1734056329.525:96): prog-id=16 op=LOAD Dec 13 02:18:49.775070 kernel: audit: type=1334 audit(1734056329.527:97): prog-id=17 op=LOAD Dec 13 02:18:49.775087 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:18:49.775105 systemd[1]: Stopped iscsid.service. Dec 13 02:18:49.775123 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 02:18:49.775143 systemd[1]: Stopped initrd-switch-root.service. Dec 13 02:18:49.775160 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 02:18:49.775188 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:18:49.775208 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:18:49.775225 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 02:18:49.775243 systemd[1]: Created slice system-getty.slice. Dec 13 02:18:49.775260 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:18:49.775278 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:18:49.775297 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:18:49.775318 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:18:49.775335 systemd[1]: Created slice user.slice. Dec 13 02:18:49.775353 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:18:49.775370 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:18:49.775424 systemd[1]: Set up automount boot.automount. Dec 13 02:18:49.775452 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:18:49.775470 systemd[1]: Stopped target initrd-switch-root.target. Dec 13 02:18:49.775517 systemd[1]: Stopped target initrd-fs.target. Dec 13 02:18:49.775535 systemd[1]: Stopped target initrd-root-fs.target. Dec 13 02:18:49.775557 systemd[1]: Reached target integritysetup.target. Dec 13 02:18:49.775766 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:18:49.775790 systemd[1]: Reached target remote-fs.target. Dec 13 02:18:49.775808 systemd[1]: Reached target slices.target. Dec 13 02:18:49.775825 systemd[1]: Reached target swap.target. Dec 13 02:18:49.775843 systemd[1]: Reached target torcx.target. Dec 13 02:18:49.775862 systemd[1]: Reached target veritysetup.target. Dec 13 02:18:49.775879 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:18:49.775896 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:18:49.775917 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:18:49.775934 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:18:49.775952 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:18:49.775970 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:18:49.775987 systemd[1]: Mounting dev-hugepages.mount... 
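The slice names being created above use systemd's unit-name escaping, in which a literal "-" inside a name component is written as \x2d (so system-coreos\x2dmetadata\x2dsshkeys.slice carries the coreos-metadata-sshkeys prefix). A tiny decoder for just the \xNN escapes seen here, not the full systemd-escape grammar:

    import re

    def unescape_unit(name):
        # Decode \xNN sequences: r"system-addon\x2dconfig.slice" -> "system-addon-config.slice"
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"system-coreos\x2dmetadata\x2dsshkeys.slice"))
    # -> system-coreos-metadata-sshkeys.slice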
Dec 13 02:18:49.776003 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:18:49.776021 systemd[1]: Mounting media.mount... Dec 13 02:18:49.776040 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:49.776069 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:18:49.776089 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:18:49.776107 systemd[1]: Mounting tmp.mount... Dec 13 02:18:49.776126 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:18:49.776144 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:49.776161 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:18:49.776194 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:18:49.776212 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:49.776230 systemd[1]: Starting modprobe@drm.service... Dec 13 02:18:49.776247 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:49.776265 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:18:49.776283 systemd[1]: Starting modprobe@loop.service... Dec 13 02:18:49.776305 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:18:49.776322 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 02:18:49.776339 systemd[1]: Stopped systemd-fsck-root.service. Dec 13 02:18:49.776360 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 02:18:49.776377 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 02:18:49.776395 systemd[1]: Stopped systemd-journald.service. Dec 13 02:18:49.776413 systemd[1]: Starting systemd-journald.service... Dec 13 02:18:49.776430 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:18:49.776447 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:18:49.776464 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:18:49.776482 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:18:49.776499 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 02:18:49.776520 systemd[1]: Stopped verity-setup.service. Dec 13 02:18:49.776538 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:49.777294 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:18:49.777320 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:18:49.777339 systemd[1]: Mounted media.mount. Dec 13 02:18:49.777357 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:18:49.777376 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:18:49.777394 systemd[1]: Mounted tmp.mount. Dec 13 02:18:49.777412 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:18:49.777437 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:18:49.777454 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:18:49.777472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:49.777495 systemd-journald[1401]: Journal started Dec 13 02:18:49.777586 systemd-journald[1401]: Runtime Journal (/run/log/journal/ec20a91308cf925fc71f7c99897b3062) is 4.8M, max 38.7M, 33.9M free. 
Dec 13 02:18:41.343000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:18:41.639000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:18:41.640000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:18:41.640000 audit: BPF prog-id=10 op=LOAD Dec 13 02:18:41.640000 audit: BPF prog-id=10 op=UNLOAD Dec 13 02:18:41.640000 audit: BPF prog-id=11 op=LOAD Dec 13 02:18:41.640000 audit: BPF prog-id=11 op=UNLOAD Dec 13 02:18:42.199000 audit[1334]: AVC avc: denied { associate } for pid=1334 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:18:42.199000 audit[1334]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1317 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:42.199000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:18:42.202000 audit[1334]: AVC avc: denied { associate } for pid=1334 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 02:18:42.202000 audit[1334]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=1317 pid=1334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:42.202000 audit: CWD cwd="/" Dec 13 02:18:42.202000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:49.783443 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:49.803606 systemd[1]: Started systemd-journald.service. 
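The PROCTITLE records in this audit block are hex-encoded argv vectors with NUL separators, capped at 128 bytes by the audit subsystem, which is why the one above stops mid-path at "...2E6C61" ("generator.la"). Decoding the first argument recovers the generator binary:

    hexstr = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E65726174"
              "6F72732F746F7263782D67656E657261746F72")  # first argv entry only
    print(bytes.fromhex(hexstr).decode())
    # -> /usr/lib/systemd/system-generators/torcx-generator
    # The rest of the record decodes to /run/systemd/generator and
    # /run/systemd/generator.early, NUL-separated, before the 128-byte cut-off.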
Dec 13 02:18:42.202000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:42.202000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:18:49.515000 audit: BPF prog-id=12 op=LOAD Dec 13 02:18:49.515000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:18:49.517000 audit: BPF prog-id=13 op=LOAD Dec 13 02:18:49.518000 audit: BPF prog-id=14 op=LOAD Dec 13 02:18:49.518000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:18:49.518000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:18:49.521000 audit: BPF prog-id=15 op=LOAD Dec 13 02:18:49.521000 audit: BPF prog-id=12 op=UNLOAD Dec 13 02:18:49.525000 audit: BPF prog-id=16 op=LOAD Dec 13 02:18:49.527000 audit: BPF prog-id=17 op=LOAD Dec 13 02:18:49.527000 audit: BPF prog-id=13 op=UNLOAD Dec 13 02:18:49.527000 audit: BPF prog-id=14 op=UNLOAD Dec 13 02:18:49.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.537000 audit: BPF prog-id=15 op=UNLOAD Dec 13 02:18:49.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:49.717000 audit: BPF prog-id=18 op=LOAD Dec 13 02:18:49.717000 audit: BPF prog-id=19 op=LOAD Dec 13 02:18:49.717000 audit: BPF prog-id=20 op=LOAD Dec 13 02:18:49.718000 audit: BPF prog-id=16 op=UNLOAD Dec 13 02:18:49.718000 audit: BPF prog-id=17 op=UNLOAD Dec 13 02:18:49.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.771000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:18:49.771000 audit[1401]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc19518590 a2=4000 a3=7ffc1951862c items=0 ppid=1 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:49.771000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:18:49.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:49.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.514288 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:18:42.134462 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:18:49.532670 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 02:18:42.135286 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:18:49.785813 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:18:42.135306 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:18:49.786242 systemd[1]: Finished modprobe@drm.service. Dec 13 02:18:42.135341 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Dec 13 02:18:49.787862 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:42.135351 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="skipped missing lower profile" missing profile=oem Dec 13 02:18:49.788136 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:18:42.135384 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Dec 13 02:18:49.789901 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:18:42.135396 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Dec 13 02:18:49.793608 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:18:42.135584 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Dec 13 02:18:49.798541 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:18:42.135624 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Dec 13 02:18:49.802504 systemd[1]: Mounted sys-kernel-config.mount. 
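The torcx-generator debug lines interleaved above list its store search order in `store_paths=[…]`, and the `store skipped` messages further down show each missing directory being passed over. A hedged sketch of that probing order — the paths are copied from the log, but the function is illustrative, not torcx's actual implementation:

```python
import os

# Probe the torcx store paths in the order the generator logged above,
# skipping missing directories the way its "store skipped" messages show.
STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.6",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.6",
    "/var/lib/torcx/store",
]

def usable_stores(paths=STORE_PATHS):
    usable = []
    for path in paths:
        if os.path.isdir(path):
            usable.append(path)
        else:
            print(f"store skipped: open {path}: no such file or directory")
    return usable
```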
Dec 13 02:18:42.135637 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Dec 13 02:18:42.137229 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Dec 13 02:18:42.137295 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Dec 13 02:18:49.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.817313 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:18:42.137327 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6 Dec 13 02:18:49.819023 systemd[1]: Reached target network-pre.target. Dec 13 02:18:42.137351 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Dec 13 02:18:42.137379 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6 Dec 13 02:18:42.137401 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:42Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Dec 13 02:18:48.731104 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:18:48.731501 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:18:48.731609 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:18:48.731788 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Dec 13 02:18:48.731835 /usr/lib/systemd/system-generators/torcx-generator[1334]: 
time="2024-12-13T02:18:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Dec 13 02:18:48.731889 /usr/lib/systemd/system-generators/torcx-generator[1334]: time="2024-12-13T02:18:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Dec 13 02:18:49.824290 kernel: fuse: init (API version 7.34) Dec 13 02:18:49.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.830726 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:18:49.831954 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:18:49.834489 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:18:49.850497 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:18:49.851649 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:49.853499 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:18:49.855370 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:18:49.856256 kernel: loop: module loaded Dec 13 02:18:49.855558 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:18:49.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.858251 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:18:49.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.859786 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:18:49.859970 systemd[1]: Finished modprobe@loop.service. Dec 13 02:18:49.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.862896 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:18:49.864745 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:18:49.869160 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:18:49.879012 systemd-journald[1401]: Time spent on flushing to /var/log/journal/ec20a91308cf925fc71f7c99897b3062 is 132.648ms for 1208 entries. Dec 13 02:18:49.879012 systemd-journald[1401]: System Journal (/var/log/journal/ec20a91308cf925fc71f7c99897b3062) is 8.0M, max 195.6M, 187.6M free. 
Dec 13 02:18:50.034359 systemd-journald[1401]: Received client request to flush runtime journal. Dec 13 02:18:49.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:49.892282 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:18:50.034769 udevadm[1423]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 02:18:49.893779 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:18:49.964136 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:18:49.983913 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:18:50.036287 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:18:50.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.087785 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:18:50.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.090374 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:18:50.253009 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:18:50.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:50.255995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:18:50.443369 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:18:50.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.024141 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:18:51.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.025000 audit: BPF prog-id=21 op=LOAD Dec 13 02:18:51.025000 audit: BPF prog-id=22 op=LOAD Dec 13 02:18:51.025000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:18:51.025000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:18:51.026869 systemd[1]: Starting systemd-udevd.service... Dec 13 02:18:51.066090 systemd-udevd[1453]: Using default interface naming scheme 'v252'. Dec 13 02:18:51.173063 systemd[1]: Started systemd-udevd.service. Dec 13 02:18:51.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:18:51.175000 audit: BPF prog-id=23 op=LOAD Dec 13 02:18:51.176559 systemd[1]: Starting systemd-networkd.service... Dec 13 02:18:51.233012 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Dec 13 02:18:51.239000 audit: BPF prog-id=24 op=LOAD Dec 13 02:18:51.240000 audit: BPF prog-id=25 op=LOAD Dec 13 02:18:51.240000 audit: BPF prog-id=26 op=LOAD Dec 13 02:18:51.241345 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:18:51.253141 (udev-worker)[1465]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:18:51.314728 systemd[1]: Started systemd-userdbd.service. Dec 13 02:18:51.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.397201 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 02:18:51.408454 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:18:51.408551 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Dec 13 02:18:51.415202 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:18:51.372000 audit[1461]: AVC avc: denied { confidentiality } for pid=1461 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:18:51.372000 audit[1461]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558fd7a81660 a1=337fc a2=7fc7967e2bc5 a3=5 items=110 ppid=1453 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:51.372000 audit: CWD cwd="/" Dec 13 02:18:51.372000 audit: PATH item=0 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=1 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=2 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=3 name=(null) inode=13772 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=4 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=5 name=(null) inode=13773 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=6 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=7 name=(null) inode=13774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=8 name=(null) inode=13774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=9 name=(null) inode=13775 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=10 name=(null) inode=13774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=11 name=(null) inode=13776 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=12 name=(null) inode=13774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=13 name=(null) inode=13777 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=14 name=(null) inode=13774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=15 name=(null) inode=13778 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=16 name=(null) inode=13774 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=17 name=(null) inode=13779 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=18 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=19 name=(null) inode=13780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=20 name=(null) inode=13780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=21 name=(null) inode=13781 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=22 name=(null) inode=13780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=23 name=(null) inode=13782 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=24 
name=(null) inode=13780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=25 name=(null) inode=13783 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=26 name=(null) inode=13780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=27 name=(null) inode=13784 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=28 name=(null) inode=13780 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=29 name=(null) inode=13785 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=30 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=31 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=32 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=33 name=(null) inode=13787 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=34 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=35 name=(null) inode=13788 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=36 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=37 name=(null) inode=13789 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=38 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=39 name=(null) inode=13790 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=40 name=(null) inode=13786 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=41 name=(null) inode=13791 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=42 name=(null) inode=13771 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=43 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=44 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=45 name=(null) inode=13793 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=46 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=47 name=(null) inode=13794 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=48 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=49 name=(null) inode=13795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=50 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=51 name=(null) inode=13796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=52 name=(null) inode=13792 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=53 name=(null) inode=13797 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=54 name=(null) inode=43 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=55 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=56 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=57 name=(null) inode=13799 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=58 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=59 name=(null) inode=13800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=60 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=61 name=(null) inode=13801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=62 name=(null) inode=13801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=63 name=(null) inode=13802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=64 name=(null) inode=13801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=65 name=(null) inode=13803 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=66 name=(null) inode=13801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=67 name=(null) inode=13804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=68 name=(null) inode=13801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=69 name=(null) inode=13805 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=70 name=(null) inode=13801 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=71 name=(null) inode=13806 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=72 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=73 
name=(null) inode=13807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=74 name=(null) inode=13807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=75 name=(null) inode=13808 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=76 name=(null) inode=13807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=77 name=(null) inode=13809 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=78 name=(null) inode=13807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=79 name=(null) inode=13810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=80 name=(null) inode=13807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=81 name=(null) inode=13811 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=82 name=(null) inode=13807 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=83 name=(null) inode=13812 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=84 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=85 name=(null) inode=13813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=86 name=(null) inode=13813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=87 name=(null) inode=13814 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=88 name=(null) inode=13813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=89 name=(null) inode=13815 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=90 name=(null) inode=13813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=91 name=(null) inode=13816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=92 name=(null) inode=13813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=93 name=(null) inode=13817 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=94 name=(null) inode=13813 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=95 name=(null) inode=13818 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=96 name=(null) inode=13798 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=97 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=98 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=99 name=(null) inode=13820 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=100 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=101 name=(null) inode=13821 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=102 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=103 name=(null) inode=13822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=104 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=105 name=(null) inode=13823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=106 name=(null) inode=13819 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=107 name=(null) inode=13824 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PATH item=109 name=(null) inode=13825 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:18:51.372000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:18:51.471199 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Dec 13 02:18:51.490203 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 02:18:51.505006 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:18:51.542985 systemd-networkd[1459]: lo: Link UP Dec 13 02:18:51.543000 systemd-networkd[1459]: lo: Gained carrier Dec 13 02:18:51.543670 systemd-networkd[1459]: Enumeration completed Dec 13 02:18:51.543792 systemd[1]: Started systemd-networkd.service. Dec 13 02:18:51.544600 systemd-networkd[1459]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:18:51.550298 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:18:51.550667 systemd-networkd[1459]: eth0: Link UP Dec 13 02:18:51.551190 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1454) Dec 13 02:18:51.551378 systemd-networkd[1459]: eth0: Gained carrier Dec 13 02:18:51.562454 systemd-networkd[1459]: eth0: DHCPv4 address 172.31.31.142/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:18:51.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.753322 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:18:51.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.761978 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:18:51.766349 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:18:51.768506 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:18:51.926098 lvm[1567]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:18:51.955596 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:18:51.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.956930 systemd[1]: Reached target cryptsetup.target. Dec 13 02:18:51.959343 systemd[1]: Starting lvm2-activation.service... 
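A sanity check on the DHCPv4 lease systemd-networkd logged above: the /20 the instance acquired (172.31.31.142/20) should contain the advertised gateway (172.31.16.1). The standard-library `ipaddress` module confirms it:

```python
import ipaddress

# Verify the gateway from the lease above sits inside the acquired subnet.
iface = ipaddress.ip_interface("172.31.31.142/20")
gateway = ipaddress.ip_address("172.31.16.1")
print(iface.network)             # 172.31.16.0/20
print(gateway in iface.network)  # True
```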
Dec 13 02:18:51.965490 lvm[1569]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:18:51.995113 systemd[1]: Finished lvm2-activation.service. Dec 13 02:18:51.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:51.996359 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:18:51.997850 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:18:51.997941 systemd[1]: Reached target local-fs.target. Dec 13 02:18:51.999263 systemd[1]: Reached target machines.target. Dec 13 02:18:52.002142 systemd[1]: Starting ldconfig.service... Dec 13 02:18:52.003909 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.004123 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:52.005592 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:18:52.008778 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:18:52.016466 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:18:52.022741 systemd[1]: Starting systemd-sysext.service... Dec 13 02:18:52.044869 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1571 (bootctl) Dec 13 02:18:52.046810 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:18:52.057621 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:18:52.066615 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:18:52.066877 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:18:52.081209 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:18:52.121935 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:18:52.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.265652 systemd-fsck[1581]: fsck.fat 4.2 (2021-01-31) Dec 13 02:18:52.265652 systemd-fsck[1581]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 02:18:52.268171 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:18:52.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.271776 systemd[1]: Mounting boot.mount... Dec 13 02:18:52.294379 systemd[1]: Mounted boot.mount. Dec 13 02:18:52.315033 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:18:52.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.446213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:18:52.470210 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:18:52.483307 (sd-sysext)[1596]: Using extensions 'kubernetes'. 
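The fsck.fat summary a few records above ("789 files, 119291/258078 clusters" on /dev/nvme0n1p1) translates directly into EFI system partition usage:

```python
# Cluster usage from the fsck.fat report above, as a percentage.
used, total = 119291, 258078
print(f"EFI-SYSTEM usage: {used / total:.1%}")  # ~46.2%
```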
Dec 13 02:18:52.483787 (sd-sysext)[1596]: Merged extensions into '/usr'. Dec 13 02:18:52.505007 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:52.507414 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:18:52.509171 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.515075 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:52.518576 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:52.523599 systemd[1]: Starting modprobe@loop.service... Dec 13 02:18:52.524539 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.524722 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:52.524896 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:52.528617 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:18:52.529917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:52.530077 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:52.531936 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:52.532190 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:18:52.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.534064 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:18:52.534252 systemd[1]: Finished modprobe@loop.service. Dec 13 02:18:52.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.535987 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:52.536159 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:18:52.537967 systemd[1]: Finished systemd-sysext.service. 
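The `(sd-sysext)` records above name the extension image ('kubernetes') that was overlaid onto /usr. A small helper to pull those names out of a journal dump; the quoting style is taken from the lines above, and the multi-name split is an assumption for logs that list several extensions at once:

```python
import re

# Extract extension names from "(sd-sysext)[pid]: Using extensions '...'".
EXT_RE = re.compile(r"\(sd-sysext\)\[\d+\]: Using extensions '([^']+)'")

def merged_extensions(log_text: str) -> list[str]:
    return [name for m in EXT_RE.findall(log_text) for name in m.split("', '")]

print(merged_extensions("(sd-sysext)[1596]: Using extensions 'kubernetes'."))
# ['kubernetes']
```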
Dec 13 02:18:52.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:52.540708 systemd[1]: Starting ensure-sysext.service... Dec 13 02:18:52.543298 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:18:52.555287 systemd[1]: Reloading. Dec 13 02:18:52.572226 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:18:52.575642 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:18:52.580301 systemd-tmpfiles[1603]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:18:52.691480 /usr/lib/systemd/system-generators/torcx-generator[1625]: time="2024-12-13T02:18:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:18:52.691527 /usr/lib/systemd/system-generators/torcx-generator[1625]: time="2024-12-13T02:18:52Z" level=info msg="torcx already run" Dec 13 02:18:52.924117 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:18:52.924146 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:18:52.967572 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:18:53.062642 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:18:53.063306 systemd-networkd[1459]: eth0: Gained IPv6LL Dec 13 02:18:53.073000 audit: BPF prog-id=27 op=LOAD Dec 13 02:18:53.073000 audit: BPF prog-id=24 op=UNLOAD Dec 13 02:18:53.074000 audit: BPF prog-id=28 op=LOAD Dec 13 02:18:53.074000 audit: BPF prog-id=29 op=LOAD Dec 13 02:18:53.074000 audit: BPF prog-id=25 op=UNLOAD Dec 13 02:18:53.074000 audit: BPF prog-id=26 op=UNLOAD Dec 13 02:18:53.078000 audit: BPF prog-id=30 op=LOAD Dec 13 02:18:53.078000 audit: BPF prog-id=23 op=UNLOAD Dec 13 02:18:53.088000 audit: BPF prog-id=31 op=LOAD Dec 13 02:18:53.088000 audit: BPF prog-id=18 op=UNLOAD Dec 13 02:18:53.088000 audit: BPF prog-id=32 op=LOAD Dec 13 02:18:53.088000 audit: BPF prog-id=33 op=LOAD Dec 13 02:18:53.088000 audit: BPF prog-id=19 op=UNLOAD Dec 13 02:18:53.088000 audit: BPF prog-id=20 op=UNLOAD Dec 13 02:18:53.089000 audit: BPF prog-id=34 op=LOAD Dec 13 02:18:53.090000 audit: BPF prog-id=35 op=LOAD Dec 13 02:18:53.090000 audit: BPF prog-id=21 op=UNLOAD Dec 13 02:18:53.090000 audit: BPF prog-id=22 op=UNLOAD Dec 13 02:18:53.107588 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:18:53.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.109225 systemd[1]: Finished systemd-machine-id-commit.service. 
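The `Reloading.` pass above warned that locksmithd.service still uses `CPUShares=` and `MemoryLimit=`, both slated for removal. A hedged helper that scans unit files for those directives; the directive-to-replacement map covers only the two warnings seen in this log:

```python
import os

# Scan unit files for the deprecated directives systemd warned about above.
DEPRECATED = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

def find_deprecated(unit_dir="/usr/lib/systemd/system"):
    hits = []
    for name in sorted(os.listdir(unit_dir)):
        if not name.endswith((".service", ".socket")):
            continue
        path = os.path.join(unit_dir, name)
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                for old, new in DEPRECATED.items():
                    if line.lstrip().startswith(old):
                        hits.append(f"{path}:{lineno}: use {new} instead of {old}")
    return hits
```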
Dec 13 02:18:53.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.112655 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:18:53.119627 systemd[1]: Starting audit-rules.service... Dec 13 02:18:53.126238 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:18:53.131000 audit: BPF prog-id=36 op=LOAD Dec 13 02:18:53.135000 audit: BPF prog-id=37 op=LOAD Dec 13 02:18:53.129067 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:18:53.133071 systemd[1]: Starting systemd-resolved.service... Dec 13 02:18:53.136992 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:18:53.142693 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:18:53.159229 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:53.159600 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.172000 audit[1681]: SYSTEM_BOOT pid=1681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.161605 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:53.164284 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:53.167288 systemd[1]: Starting modprobe@loop.service... Dec 13 02:18:53.168400 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.168595 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:53.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.168771 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:53.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.170122 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:18:53.174860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:53.175027 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 02:18:53.180925 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:53.181233 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:18:53.183573 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:18:53.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.189198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:53.189452 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:53.199027 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:53.199421 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.201509 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:53.204286 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:18:53.205528 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.205720 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:53.205874 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:18:53.206080 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:53.207965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:53.208155 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:53.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.214783 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:53.215241 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.217861 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:18:53.222751 systemd[1]: Starting modprobe@drm.service... Dec 13 02:18:53.223741 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.223933 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
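The recurring "skipped because of an unmet condition check" entries are systemd evaluating Condition*= directives at job time, and the "!" seen in several of them is the negation prefix. Per systemd.unit(5), the two checks repeated throughout this boot read as follows in unit-file form:

    [Unit]
    # Run only when the system is a Xen guest:
    ConditionVirtualization=xen
    # "!" negates: run only while the path is NOT yet a symbolic link:
    ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt

A failed condition skips the unit without marking it failed, which is why these entries are informational rather than errors.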
Dec 13 02:18:53.224144 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:18:53.224316 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:18:53.225586 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:18:53.225783 systemd[1]: Finished modprobe@loop.service. Dec 13 02:18:53.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.228303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:18:53.228493 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:18:53.229953 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:18:53.237591 systemd[1]: Finished ensure-sysext.service. Dec 13 02:18:53.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.242569 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:18:53.242753 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:18:53.243992 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.245288 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:18:53.245449 systemd[1]: Finished modprobe@drm.service. 
Dec 13 02:18:53.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:18:53.277376 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:18:53.338000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:18:53.338000 audit[1703]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd28e0d880 a2=420 a3=0 items=0 ppid=1676 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:18:53.338000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:18:53.338920 augenrules[1703]: No rules Dec 13 02:18:53.340327 systemd[1]: Finished audit-rules.service. Dec 13 02:18:53.348653 systemd-resolved[1679]: Positive Trust Anchors: Dec 13 02:18:53.348675 systemd-resolved[1679]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:18:53.348717 systemd-resolved[1679]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:18:53.355961 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:18:53.357080 systemd[1]: Reached target time-set.target. Dec 13 02:18:53.372841 systemd-resolved[1679]: Defaulting to hostname 'linux'. Dec 13 02:18:53.374581 systemd[1]: Started systemd-resolved.service. Dec 13 02:18:53.375741 systemd[1]: Reached target network.target. Dec 13 02:18:53.376747 systemd[1]: Reached target network-online.target. Dec 13 02:18:53.377693 systemd[1]: Reached target nss-lookup.target. Dec 13 02:18:53.515535 ldconfig[1570]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:18:53.529161 systemd[1]: Finished ldconfig.service. Dec 13 02:18:53.531803 systemd[1]: Starting systemd-update-done.service... Dec 13 02:18:53.543796 systemd[1]: Finished systemd-update-done.service. Dec 13 02:18:53.545118 systemd[1]: Reached target sysinit.target. Dec 13 02:18:53.546277 systemd[1]: Started motdgen.path. Dec 13 02:18:53.547223 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:18:53.549032 systemd[1]: Started logrotate.timer. Dec 13 02:18:53.550114 systemd[1]: Started mdadm.timer. Dec 13 02:18:53.550882 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:18:53.551830 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:18:53.551871 systemd[1]: Reached target paths.target. Dec 13 02:18:53.552745 systemd[1]: Reached target timers.target. Dec 13 02:18:53.554210 systemd[1]: Listening on dbus.socket. Dec 13 02:18:53.556234 systemd[1]: Starting docker.socket... Dec 13 02:18:53.560561 systemd[1]: Listening on sshd.socket. 
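The audit records above document augenrules invoking auditctl: the PROCTITLE field is the NUL-separated argv in hex and decodes to "/sbin/auditctl -R /etc/audit/audit.rules", after which augenrules reports an empty effective rule set ("No rules"). A minimal rules file in the format that -R consumes would be the following (contents hypothetical; this machine's file evidently defines no watch rules):

    # /etc/audit/audit.rules (sketch)
    # Flush any loaded rules, size the kernel backlog, log failures via printk.
    -D
    -b 8192
    -f 1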
Dec 13 02:18:53.561617 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:53.562098 systemd[1]: Listening on docker.socket. Dec 13 02:18:53.563122 systemd[1]: Reached target sockets.target. Dec 13 02:18:53.564040 systemd[1]: Reached target basic.target. Dec 13 02:18:53.566091 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.566123 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:18:53.567772 systemd[1]: Started amazon-ssm-agent.service. Dec 13 02:18:53.570775 systemd[1]: Starting containerd.service... Dec 13 02:18:53.575391 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:18:53.578033 systemd[1]: Starting dbus.service... Dec 13 02:18:53.580324 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:18:53.584132 systemd[1]: Starting extend-filesystems.service... Dec 13 02:18:53.585207 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:18:53.587335 systemd[1]: Starting kubelet.service... Dec 13 02:18:53.589830 systemd[1]: Starting motdgen.service... Dec 13 02:18:53.592549 systemd[1]: Started nvidia.service. Dec 13 02:18:53.595466 systemd[1]: Starting prepare-helm.service... Dec 13 02:18:54.098811 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:18:54.099399 systemd-timesyncd[1680]: Contacted time server 144.202.66.214:123 (0.flatcar.pool.ntp.org). Dec 13 02:18:54.099465 systemd-timesyncd[1680]: Initial clock synchronization to Fri 2024-12-13 02:18:54.098652 UTC. Dec 13 02:18:54.103383 systemd[1]: Starting sshd-keygen.service... Dec 13 02:18:54.105330 systemd-resolved[1679]: Clock change detected. Flushing caches. Dec 13 02:18:54.108611 systemd[1]: Starting systemd-logind.service... Dec 13 02:18:54.109885 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:18:54.110394 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:18:54.160441 jq[1716]: false Dec 13 02:18:54.111812 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 02:18:54.114026 systemd[1]: Starting update-engine.service... Dec 13 02:18:54.118589 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:18:54.171301 jq[1726]: true Dec 13 02:18:54.163439 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:18:54.163655 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:18:54.218479 tar[1729]: linux-amd64/helm Dec 13 02:18:54.315744 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:18:54.316095 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:18:54.320575 jq[1733]: true Dec 13 02:18:54.429221 extend-filesystems[1717]: Found loop1 Dec 13 02:18:54.436404 extend-filesystems[1717]: Found nvme0n1 Dec 13 02:18:54.437905 dbus-daemon[1715]: [system] SELinux support is enabled Dec 13 02:18:54.438169 systemd[1]: Started dbus.service. 
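The systemd-timesyncd entries above show the first NTP contact (144.202.66.214, resolved from 0.flatcar.pool.ntp.org) stepping the clock, which is why systemd-resolved immediately flushes its caches and why adjacent entries jump from 02:18:53.59x to 02:18:54.09x: the log's own clock moved. The pool is configurable; pinning it explicitly would look like this (the second server name is an assumption following the pool's naming scheme):

    # /etc/systemd/timesyncd.conf (sketch)
    [Time]
    NTP=0.flatcar.pool.ntp.org 1.flatcar.pool.ntp.org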
Dec 13 02:18:54.443758 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:18:54.443982 systemd[1]: Finished motdgen.service. Dec 13 02:18:54.445072 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:18:54.445111 systemd[1]: Reached target system-config.target. Dec 13 02:18:54.447733 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:18:54.447768 systemd[1]: Reached target user-config.target. Dec 13 02:18:54.450533 extend-filesystems[1717]: Found nvme0n1p1 Dec 13 02:18:54.451926 extend-filesystems[1717]: Found nvme0n1p2 Dec 13 02:18:54.452852 extend-filesystems[1717]: Found nvme0n1p3 Dec 13 02:18:54.452852 extend-filesystems[1717]: Found usr Dec 13 02:18:54.454709 extend-filesystems[1717]: Found nvme0n1p4 Dec 13 02:18:54.454709 extend-filesystems[1717]: Found nvme0n1p6 Dec 13 02:18:54.456916 extend-filesystems[1717]: Found nvme0n1p7 Dec 13 02:18:54.456916 extend-filesystems[1717]: Found nvme0n1p9 Dec 13 02:18:54.456916 extend-filesystems[1717]: Checking size of /dev/nvme0n1p9 Dec 13 02:18:54.509512 dbus-daemon[1715]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1459 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:18:54.527092 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:18:54.542880 update_engine[1725]: I1213 02:18:54.542052 1725 main.cc:92] Flatcar Update Engine starting Dec 13 02:18:54.548382 update_engine[1725]: I1213 02:18:54.548240 1725 update_check_scheduler.cc:74] Next update check in 4m28s Dec 13 02:18:54.548621 systemd[1]: Started update-engine.service. Dec 13 02:18:54.553041 systemd[1]: Started locksmithd.service. Dec 13 02:18:54.565947 amazon-ssm-agent[1712]: 2024/12/13 02:18:54 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 02:18:54.568805 amazon-ssm-agent[1712]: Initializing new seelog logger Dec 13 02:18:54.577858 amazon-ssm-agent[1712]: New Seelog Logger Creation Complete Dec 13 02:18:54.577858 amazon-ssm-agent[1712]: 2024/12/13 02:18:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:18:54.577858 amazon-ssm-agent[1712]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:18:54.577858 amazon-ssm-agent[1712]: 2024/12/13 02:18:54 processing appconfig overrides Dec 13 02:18:54.590638 extend-filesystems[1717]: Resized partition /dev/nvme0n1p9 Dec 13 02:18:54.597589 extend-filesystems[1783]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:18:54.619357 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 02:18:54.736329 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 02:18:54.736223 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
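The kernel lines above show the ext4 filesystem on /dev/nvme0n1p9 being grown online from 553472 to 1489915 4-KiB blocks, i.e. from roughly 2.1 GiB to 5.7 GiB, after extend-filesystems enumerated the partitions. The manual equivalent of that step is a single command (device name taken from the log):

    # Online-grow the mounted ext4 filesystem to fill its partition:
    resize2fs /dev/nvme0n1p9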
Dec 13 02:18:54.736662 env[1730]: time="2024-12-13T02:18:54.735512539Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:18:54.770539 bash[1779]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:18:54.780712 extend-filesystems[1783]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 02:18:54.780712 extend-filesystems[1783]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 02:18:54.780712 extend-filesystems[1783]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 02:18:54.790266 extend-filesystems[1717]: Resized filesystem in /dev/nvme0n1p9 Dec 13 02:18:54.792205 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:18:54.792448 systemd[1]: Finished extend-filesystems.service. Dec 13 02:18:54.851159 systemd-logind[1724]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:18:54.851192 systemd-logind[1724]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:18:54.851215 systemd-logind[1724]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:18:54.865493 systemd-logind[1724]: New seat seat0. Dec 13 02:18:54.876367 systemd[1]: Started systemd-logind.service. Dec 13 02:18:54.904805 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 02:18:55.002671 env[1730]: time="2024-12-13T02:18:55.002617662Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:18:55.002865 env[1730]: time="2024-12-13T02:18:55.002803127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:55.023348 env[1730]: time="2024-12-13T02:18:55.023261929Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:18:55.023348 env[1730]: time="2024-12-13T02:18:55.023347243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:55.023874 env[1730]: time="2024-12-13T02:18:55.023841603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:18:55.024022 env[1730]: time="2024-12-13T02:18:55.023877843Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:55.024022 env[1730]: time="2024-12-13T02:18:55.023897301Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:18:55.024022 env[1730]: time="2024-12-13T02:18:55.023911515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:55.024339 env[1730]: time="2024-12-13T02:18:55.024067106Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:18:55.024523 env[1730]: time="2024-12-13T02:18:55.024496911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:18:55.024729 env[1730]: time="2024-12-13T02:18:55.024699580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:18:55.024787 env[1730]: time="2024-12-13T02:18:55.024732577Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:18:55.024837 env[1730]: time="2024-12-13T02:18:55.024804448Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:18:55.024837 env[1730]: time="2024-12-13T02:18:55.024824187Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:18:55.040789 env[1730]: time="2024-12-13T02:18:55.040736370Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:18:55.040789 env[1730]: time="2024-12-13T02:18:55.040794841Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:18:55.041043 env[1730]: time="2024-12-13T02:18:55.040816374Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:18:55.041043 env[1730]: time="2024-12-13T02:18:55.040931657Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:18:55.041043 env[1730]: time="2024-12-13T02:18:55.040952910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:18:55.041043 env[1730]: time="2024-12-13T02:18:55.041022587Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:18:55.041210 env[1730]: time="2024-12-13T02:18:55.041044678Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:18:55.041210 env[1730]: time="2024-12-13T02:18:55.041064407Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:18:55.041210 env[1730]: time="2024-12-13T02:18:55.041083870Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:18:55.041210 env[1730]: time="2024-12-13T02:18:55.041104355Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:18:55.041210 env[1730]: time="2024-12-13T02:18:55.041124132Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:18:55.041210 env[1730]: time="2024-12-13T02:18:55.041144763Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:18:55.041441 env[1730]: time="2024-12-13T02:18:55.041323402Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:18:55.041483 env[1730]: time="2024-12-13T02:18:55.041435732Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:18:55.041913 env[1730]: time="2024-12-13T02:18:55.041890512Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 02:18:55.041969 env[1730]: time="2024-12-13T02:18:55.041931264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.041969 env[1730]: time="2024-12-13T02:18:55.041953472Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:18:55.042051 env[1730]: time="2024-12-13T02:18:55.042025608Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042089 env[1730]: time="2024-12-13T02:18:55.042046879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042142 env[1730]: time="2024-12-13T02:18:55.042125578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042191 env[1730]: time="2024-12-13T02:18:55.042153091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042191 env[1730]: time="2024-12-13T02:18:55.042173678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042270 env[1730]: time="2024-12-13T02:18:55.042196970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042270 env[1730]: time="2024-12-13T02:18:55.042215863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042270 env[1730]: time="2024-12-13T02:18:55.042234124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042270 env[1730]: time="2024-12-13T02:18:55.042257827Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:18:55.042509 env[1730]: time="2024-12-13T02:18:55.042428289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042565 env[1730]: time="2024-12-13T02:18:55.042513382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042565 env[1730]: time="2024-12-13T02:18:55.042532432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:18:55.042565 env[1730]: time="2024-12-13T02:18:55.042550585Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:18:55.042673 env[1730]: time="2024-12-13T02:18:55.042576327Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:18:55.042673 env[1730]: time="2024-12-13T02:18:55.042594727Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:18:55.042673 env[1730]: time="2024-12-13T02:18:55.042619598Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:18:55.042673 env[1730]: time="2024-12-13T02:18:55.042665468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:18:55.043079 env[1730]: time="2024-12-13T02:18:55.042949825Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:18:55.045430 env[1730]: time="2024-12-13T02:18:55.043090815Z" level=info msg="Connect containerd service" Dec 13 02:18:55.045430 env[1730]: time="2024-12-13T02:18:55.043140908Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:18:55.045430 env[1730]: time="2024-12-13T02:18:55.043998108Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:18:55.045430 env[1730]: time="2024-12-13T02:18:55.044436937Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:18:55.045430 env[1730]: time="2024-12-13T02:18:55.044487577Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:18:55.044631 systemd[1]: Started containerd.service. Dec 13 02:18:55.048659 env[1730]: time="2024-12-13T02:18:55.044745166Z" level=info msg="containerd successfully booted in 0.460627s" Dec 13 02:18:55.080906 dbus-daemon[1715]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:18:55.081085 systemd[1]: Started systemd-hostnamed.service. 
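The "Start cri plugin with config {...}" dump above is containerd logging its effective CRI configuration. Rendered back into /etc/containerd/config.toml form, the load-bearing settings would read roughly as below; this is a reconstruction from the logged values, not the file's verbatim contents. The "no network config found in /etc/cni/net.d" error is expected at this point, since nothing has installed a CNI plugin yet.

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true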
Dec 13 02:18:55.084705 dbus-daemon[1715]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1765 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:18:55.089251 systemd[1]: Starting polkit.service... Dec 13 02:18:55.104492 env[1730]: time="2024-12-13T02:18:55.104370134Z" level=info msg="Start subscribing containerd event" Dec 13 02:18:55.104492 env[1730]: time="2024-12-13T02:18:55.104465447Z" level=info msg="Start recovering state" Dec 13 02:18:55.104661 env[1730]: time="2024-12-13T02:18:55.104560679Z" level=info msg="Start event monitor" Dec 13 02:18:55.104661 env[1730]: time="2024-12-13T02:18:55.104590714Z" level=info msg="Start snapshots syncer" Dec 13 02:18:55.104661 env[1730]: time="2024-12-13T02:18:55.104609120Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:18:55.104661 env[1730]: time="2024-12-13T02:18:55.104620758Z" level=info msg="Start streaming server" Dec 13 02:18:55.166431 polkitd[1826]: Started polkitd version 121 Dec 13 02:18:55.198852 polkitd[1826]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:18:55.206525 polkitd[1826]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:18:55.211854 polkitd[1826]: Finished loading, compiling and executing 2 rules Dec 13 02:18:55.212580 dbus-daemon[1715]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:18:55.212768 systemd[1]: Started polkit.service. Dec 13 02:18:55.213253 polkitd[1826]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:18:55.256743 systemd-hostnamed[1765]: Hostname set to (transient) Dec 13 02:18:55.256872 systemd-resolved[1679]: System hostname changed to 'ip-172-31-31-142'. Dec 13 02:18:55.330899 coreos-metadata[1714]: Dec 13 02:18:55.325 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 02:18:55.335267 coreos-metadata[1714]: Dec 13 02:18:55.335 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 02:18:55.339811 coreos-metadata[1714]: Dec 13 02:18:55.339 INFO Fetch successful Dec 13 02:18:55.339811 coreos-metadata[1714]: Dec 13 02:18:55.339 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 02:18:55.340823 coreos-metadata[1714]: Dec 13 02:18:55.340 INFO Fetch successful Dec 13 02:18:55.345589 unknown[1714]: wrote ssh authorized keys file for user: core Dec 13 02:18:55.381299 update-ssh-keys[1861]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:18:55.382415 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
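coreos-metadata's fetches above follow the EC2 IMDSv2 protocol: a PUT to mint a session token, then token-authenticated GETs against the 2019-10-01 metadata API, ending with the key written to /home/core/.ssh/authorized_keys. By hand, the same exchange is roughly:

    TOKEN=$(curl -sX PUT http://169.254.169.254/latest/api/token \
      -H 'X-aws-ec2-metadata-token-ttl-seconds: 60')
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key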
Dec 13 02:18:55.485204 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Create new startup processor Dec 13 02:18:55.502183 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 02:18:55.505891 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Initializing bookkeeping folders Dec 13 02:18:55.506141 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO removing the completed state files Dec 13 02:18:55.506234 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Initializing bookkeeping folders for long running plugins Dec 13 02:18:55.506330 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 02:18:55.506499 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Initializing healthcheck folders for long running plugins Dec 13 02:18:55.506588 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Initializing locations for inventory plugin Dec 13 02:18:55.506677 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Initializing default location for custom inventory Dec 13 02:18:55.507138 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Initializing default location for file inventory Dec 13 02:18:55.507262 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Initializing default location for role inventory Dec 13 02:18:55.507405 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Init the cloudwatchlogs publisher Dec 13 02:18:55.507510 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 02:18:55.507597 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 02:18:55.507692 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:configureDocker Dec 13 02:18:55.508300 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 02:18:55.508420 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 02:18:55.508520 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:downloadContent Dec 13 02:18:55.508616 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 02:18:55.509950 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:configurePackage Dec 13 02:18:55.510092 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform independent plugin aws:runDocument Dec 13 02:18:55.510196 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 02:18:55.510398 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 02:18:55.510513 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO OS: linux, Arch: amd64 Dec 13 02:18:55.513869 amazon-ssm-agent[1712]: datastore file /var/lib/amazon/ssm/i-06cd93bb1ab0cd006/longrunningplugins/datastore/store doesn't exist - no long running 
plugins to execute Dec 13 02:18:55.524341 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] Starting document processing engine... Dec 13 02:18:55.620790 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 02:18:55.715815 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 02:18:55.810393 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] Starting message polling Dec 13 02:18:55.905055 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 02:18:56.000121 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [instanceID=i-06cd93bb1ab0cd006] Starting association polling Dec 13 02:18:56.065507 tar[1729]: linux-amd64/LICENSE Dec 13 02:18:56.066525 tar[1729]: linux-amd64/README.md Dec 13 02:18:56.075178 systemd[1]: Finished prepare-helm.service. Dec 13 02:18:56.099150 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 02:18:56.195874 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 02:18:56.293163 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 02:18:56.390008 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 02:18:56.414024 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:18:56.487108 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 02:18:56.584538 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 02:18:56.585901 systemd[1]: Started kubelet.service. Dec 13 02:18:56.683695 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 02:18:56.780190 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 02:18:56.876936 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-06cd93bb1ab0cd006, requestId: 450a7eb2-359b-40a1-b6f0-1949a4c65901 Dec 13 02:18:56.974964 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [OfflineService] Starting document processing engine... 
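locksmithd comes up idle above with strategy="reboot", meaning it will reboot the node as soon as update_engine stages an update. On Flatcar that strategy is conventionally set through update.conf; a matching file would be (a sketch, with "reboot" taken from the logged strategy):

    # /etc/flatcar/update.conf (sketch; other strategies include
    # off, etcd-lock and best-effort)
    REBOOT_STRATEGY=reboot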
Dec 13 02:18:57.073074 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [OfflineService] [EngineProcessor] Starting Dec 13 02:18:57.170438 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 02:18:57.268899 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [OfflineService] Starting message polling Dec 13 02:18:57.365918 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [OfflineService] Starting send replies to MDS Dec 13 02:18:57.464238 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 02:18:57.561376 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 02:18:57.578204 kubelet[1924]: E1213 02:18:57.578132 1924 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:18:57.581974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:18:57.582215 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:18:57.582553 systemd[1]: kubelet.service: Consumed 1.324s CPU time. Dec 13 02:18:57.659603 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 02:18:57.761358 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] listening reply. Dec 13 02:18:57.860270 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 02:18:57.959204 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [StartupProcessor] Executing startup processor tasks Dec 13 02:18:58.001987 sshd_keygen[1747]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:18:58.043057 systemd[1]: Finished sshd-keygen.service. Dec 13 02:18:58.048052 systemd[1]: Starting issuegen.service... Dec 13 02:18:58.058233 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 02:18:58.060586 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:18:58.060936 systemd[1]: Finished issuegen.service. Dec 13 02:18:58.064590 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:18:58.077161 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:18:58.081704 systemd[1]: Started getty@tty1.service. Dec 13 02:18:58.085373 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:18:58.087148 systemd[1]: Reached target getty.target. Dec 13 02:18:58.088517 systemd[1]: Reached target multi-user.target. Dec 13 02:18:58.092983 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:18:58.107865 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:18:58.108119 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:18:58.110167 systemd[1]: Startup finished in 730ms (kernel) + 8.409s (initrd) + 16.432s (userspace) = 25.572s. 
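The kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is normally written by kubeadm init or kubeadm join, so the unit crash-loops until provisioning (the install.sh run later in this log) supplies it. A minimal hand-written file of that kind would start like this; it is a sketch, with cgroupDriver chosen to match the SystemdCgroup=true containerd setting above:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock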
Dec 13 02:18:58.157561 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 02:18:58.257085 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 02:18:58.356869 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-06cd93bb1ab0cd006?role=subscribe&stream=input Dec 13 02:18:58.456748 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-06cd93bb1ab0cd006?role=subscribe&stream=input Dec 13 02:18:58.556774 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 02:18:58.657063 amazon-ssm-agent[1712]: 2024-12-13 02:18:55 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 02:18:58.792158 amazon-ssm-agent[1712]: 2024-12-13 02:18:58 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Dec 13 02:19:03.112680 systemd[1]: Created slice system-sshd.slice. Dec 13 02:19:03.115417 systemd[1]: Started sshd@0-172.31.31.142:22-139.178.68.195:51956.service. Dec 13 02:19:03.294457 sshd[1946]: Accepted publickey for core from 139.178.68.195 port 51956 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:03.297778 sshd[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:03.314244 systemd[1]: Created slice user-500.slice. Dec 13 02:19:03.318753 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:19:03.321846 systemd-logind[1724]: New session 1 of user core. Dec 13 02:19:03.344474 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:19:03.351121 systemd[1]: Starting user@500.service... Dec 13 02:19:03.357407 (systemd)[1949]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:03.532871 systemd[1949]: Queued start job for default target default.target. Dec 13 02:19:03.533534 systemd[1949]: Reached target paths.target. Dec 13 02:19:03.533569 systemd[1949]: Reached target sockets.target. Dec 13 02:19:03.533589 systemd[1949]: Reached target timers.target. Dec 13 02:19:03.533606 systemd[1949]: Reached target basic.target. Dec 13 02:19:03.533663 systemd[1949]: Reached target default.target. Dec 13 02:19:03.533703 systemd[1949]: Startup finished in 164ms. Dec 13 02:19:03.534152 systemd[1]: Started user@500.service. Dec 13 02:19:03.535574 systemd[1]: Started session-1.scope. Dec 13 02:19:03.689461 systemd[1]: Started sshd@1-172.31.31.142:22-139.178.68.195:51968.service. Dec 13 02:19:03.858148 sshd[1958]: Accepted publickey for core from 139.178.68.195 port 51968 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:03.859788 sshd[1958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:03.892541 systemd-logind[1724]: New session 2 of user core. Dec 13 02:19:03.893204 systemd[1]: Started session-2.scope. Dec 13 02:19:04.027809 sshd[1958]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:04.034412 systemd[1]: sshd@1-172.31.31.142:22-139.178.68.195:51968.service: Deactivated successfully. Dec 13 02:19:04.035427 systemd[1]: session-2.scope: Deactivated successfully. 
Dec 13 02:19:04.036224 systemd-logind[1724]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:19:04.037392 systemd-logind[1724]: Removed session 2. Dec 13 02:19:04.054515 systemd[1]: Started sshd@2-172.31.31.142:22-139.178.68.195:51976.service. Dec 13 02:19:04.231647 sshd[1964]: Accepted publickey for core from 139.178.68.195 port 51976 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:04.233430 sshd[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:04.238271 systemd-logind[1724]: New session 3 of user core. Dec 13 02:19:04.238915 systemd[1]: Started session-3.scope. Dec 13 02:19:04.371333 sshd[1964]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:04.379193 systemd[1]: sshd@2-172.31.31.142:22-139.178.68.195:51976.service: Deactivated successfully. Dec 13 02:19:04.380066 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:19:04.380764 systemd-logind[1724]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:19:04.381691 systemd-logind[1724]: Removed session 3. Dec 13 02:19:04.399414 systemd[1]: Started sshd@3-172.31.31.142:22-139.178.68.195:51982.service. Dec 13 02:19:04.571172 sshd[1970]: Accepted publickey for core from 139.178.68.195 port 51982 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:04.572800 sshd[1970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:04.578956 systemd[1]: Started session-4.scope. Dec 13 02:19:04.579666 systemd-logind[1724]: New session 4 of user core. Dec 13 02:19:04.708802 sshd[1970]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:04.712751 systemd[1]: sshd@3-172.31.31.142:22-139.178.68.195:51982.service: Deactivated successfully. Dec 13 02:19:04.713790 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:19:04.714612 systemd-logind[1724]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:19:04.715508 systemd-logind[1724]: Removed session 4. Dec 13 02:19:04.732695 systemd[1]: Started sshd@4-172.31.31.142:22-139.178.68.195:51998.service. Dec 13 02:19:04.898132 sshd[1976]: Accepted publickey for core from 139.178.68.195 port 51998 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:19:04.904440 sshd[1976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:19:04.912435 systemd[1]: Started session-5.scope. Dec 13 02:19:04.913155 systemd-logind[1724]: New session 5 of user core. Dec 13 02:19:05.039484 sudo[1979]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:19:05.039965 sudo[1979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:19:05.078175 systemd[1]: Starting docker.service... 
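The sudo record above shows core executing /home/core/install.sh as root with no intervening password prompt in the session flow; Flatcar conventionally grants the core user passwordless sudo via a rule on the order of the following (an assumption about the shipped policy, not content from this log):

    # sudoers entry (assumed)
    core ALL=(ALL) NOPASSWD: ALL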
Dec 13 02:19:05.143127 env[1989]: time="2024-12-13T02:19:05.143040188Z" level=info msg="Starting up" Dec 13 02:19:05.146348 env[1989]: time="2024-12-13T02:19:05.146315656Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:19:05.146490 env[1989]: time="2024-12-13T02:19:05.146475027Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:19:05.146579 env[1989]: time="2024-12-13T02:19:05.146562147Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:19:05.146644 env[1989]: time="2024-12-13T02:19:05.146632212Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:19:05.149250 env[1989]: time="2024-12-13T02:19:05.148712501Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:19:05.149568 env[1989]: time="2024-12-13T02:19:05.149544884Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:19:05.149659 env[1989]: time="2024-12-13T02:19:05.149646366Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:19:05.149707 env[1989]: time="2024-12-13T02:19:05.149698802Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:19:05.160139 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport407056039-merged.mount: Deactivated successfully. Dec 13 02:19:05.323777 env[1989]: time="2024-12-13T02:19:05.323728251Z" level=info msg="Loading containers: start." Dec 13 02:19:05.505305 kernel: Initializing XFRM netlink socket Dec 13 02:19:05.548016 env[1989]: time="2024-12-13T02:19:05.547951473Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:19:05.549897 (udev-worker)[1999]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:19:05.704246 systemd-networkd[1459]: docker0: Link UP Dec 13 02:19:05.725874 env[1989]: time="2024-12-13T02:19:05.725833032Z" level=info msg="Loading containers: done." Dec 13 02:19:05.758190 env[1989]: time="2024-12-13T02:19:05.758142314Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:19:05.758808 env[1989]: time="2024-12-13T02:19:05.758441471Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:19:05.758913 env[1989]: time="2024-12-13T02:19:05.758898034Z" level=info msg="Daemon has completed initialization" Dec 13 02:19:05.784121 systemd[1]: Started docker.service. Dec 13 02:19:05.797804 env[1989]: time="2024-12-13T02:19:05.797739121Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:19:06.993181 env[1730]: time="2024-12-13T02:19:06.993008946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 02:19:07.597442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount613494036.mount: Deactivated successfully. Dec 13 02:19:07.599020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:19:07.599208 systemd[1]: Stopped kubelet.service. Dec 13 02:19:07.599262 systemd[1]: kubelet.service: Consumed 1.324s CPU time. Dec 13 02:19:07.601056 systemd[1]: Starting kubelet.service... 
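dockerd's bridge warning above points at --bip for overriding the default 172.17.0.0/16 on docker0; the persistent form is /etc/docker/daemon.json. The address below is hypothetical, and the storage driver merely restates the overlay2 choice the daemon already logged:

    {
      "bip": "172.18.0.1/16",
      "storage-driver": "overlay2"
    }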
Dec 13 02:19:08.388446 systemd[1]: Started kubelet.service. Dec 13 02:19:08.458050 kubelet[2121]: E1213 02:19:08.457997 2121 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:19:08.464055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:19:08.464177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:19:10.782081 env[1730]: time="2024-12-13T02:19:10.782018310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:10.785538 env[1730]: time="2024-12-13T02:19:10.785491953Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:10.788397 env[1730]: time="2024-12-13T02:19:10.788355865Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:10.791435 env[1730]: time="2024-12-13T02:19:10.791389146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:10.792247 env[1730]: time="2024-12-13T02:19:10.792205856Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 02:19:10.803513 env[1730]: time="2024-12-13T02:19:10.803476050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 02:19:13.674446 env[1730]: time="2024-12-13T02:19:13.674391303Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:13.678312 env[1730]: time="2024-12-13T02:19:13.678253923Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:13.681379 env[1730]: time="2024-12-13T02:19:13.681332078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:13.684143 env[1730]: time="2024-12-13T02:19:13.684093058Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:13.685040 env[1730]: time="2024-12-13T02:19:13.685004060Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 02:19:13.696763 env[1730]: time="2024-12-13T02:19:13.696726435Z" 
level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 02:19:15.672519 env[1730]: time="2024-12-13T02:19:15.672464767Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:15.676319 env[1730]: time="2024-12-13T02:19:15.676250888Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:15.679855 env[1730]: time="2024-12-13T02:19:15.679817982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:15.682473 env[1730]: time="2024-12-13T02:19:15.682419234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:15.683299 env[1730]: time="2024-12-13T02:19:15.683256082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 02:19:15.696561 env[1730]: time="2024-12-13T02:19:15.696522217Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:19:17.274755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2638615183.mount: Deactivated successfully. Dec 13 02:19:18.398329 env[1730]: time="2024-12-13T02:19:18.398263018Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:18.402130 env[1730]: time="2024-12-13T02:19:18.402082754Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:18.404938 env[1730]: time="2024-12-13T02:19:18.404894545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:18.407613 env[1730]: time="2024-12-13T02:19:18.407568703Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:18.408081 env[1730]: time="2024-12-13T02:19:18.408047126Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:19:18.420954 env[1730]: time="2024-12-13T02:19:18.420846966Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:19:18.525759 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:19:18.526080 systemd[1]: Stopped kubelet.service. Dec 13 02:19:18.527830 systemd[1]: Starting kubelet.service... Dec 13 02:19:19.371872 systemd[1]: Started kubelet.service. 
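The PullImage/ImageCreate entries here are containerd servicing CRI image pulls of the v1.29.12 control-plane images (the kubelet itself is still down, so the requests come from a CRI client during provisioning). The manual equivalent, assuming crictl is installed and pointed at the socket containerd advertised earlier, is:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-proxy:v1.29.12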
Dec 13 02:19:19.447014 kubelet[2153]: E1213 02:19:19.446960 2153 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:19:19.449563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:19:19.449689 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:19:19.633387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426712912.mount: Deactivated successfully. Dec 13 02:19:21.425796 env[1730]: time="2024-12-13T02:19:21.425739461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:21.430248 env[1730]: time="2024-12-13T02:19:21.430200389Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:21.433600 env[1730]: time="2024-12-13T02:19:21.433553566Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:21.436948 env[1730]: time="2024-12-13T02:19:21.436902686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:21.443340 env[1730]: time="2024-12-13T02:19:21.443288500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:19:21.463215 env[1730]: time="2024-12-13T02:19:21.463163188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:19:22.010959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263477175.mount: Deactivated successfully. 
Dec 13 02:19:22.029232 env[1730]: time="2024-12-13T02:19:22.029179404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:22.033132 env[1730]: time="2024-12-13T02:19:22.033082882Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:22.036907 env[1730]: time="2024-12-13T02:19:22.036864814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:22.040825 env[1730]: time="2024-12-13T02:19:22.040704306Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:22.041541 env[1730]: time="2024-12-13T02:19:22.041502809Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:19:22.057104 env[1730]: time="2024-12-13T02:19:22.057027009Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 02:19:22.652495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916134729.mount: Deactivated successfully. Dec 13 02:19:25.262826 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 02:19:26.762922 env[1730]: time="2024-12-13T02:19:26.762866361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:26.766390 env[1730]: time="2024-12-13T02:19:26.766346911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:26.769296 env[1730]: time="2024-12-13T02:19:26.769236278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:26.776024 env[1730]: time="2024-12-13T02:19:26.775977117Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:26.776880 env[1730]: time="2024-12-13T02:19:26.776781326Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 02:19:28.820788 amazon-ssm-agent[1712]: 2024-12-13 02:19:28 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 02:19:29.366237 systemd[1]: Stopped kubelet.service. Dec 13 02:19:29.370150 systemd[1]: Starting kubelet.service... Dec 13 02:19:29.396379 systemd[1]: Reloading. 
Dec 13 02:19:29.532759 /usr/lib/systemd/system-generators/torcx-generator[2256]: time="2024-12-13T02:19:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:19:29.532799 /usr/lib/systemd/system-generators/torcx-generator[2256]: time="2024-12-13T02:19:29Z" level=info msg="torcx already run" Dec 13 02:19:29.667790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:19:29.667813 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:19:29.697647 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:19:29.856351 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 02:19:29.856456 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 02:19:29.856742 systemd[1]: Stopped kubelet.service. Dec 13 02:19:29.859430 systemd[1]: Starting kubelet.service... Dec 13 02:19:30.909853 systemd[1]: Started kubelet.service. Dec 13 02:19:30.984756 kubelet[2312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:19:30.984756 kubelet[2312]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:19:30.984756 kubelet[2312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:19:30.988130 kubelet[2312]: I1213 02:19:30.988057 2312 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:19:31.480559 kubelet[2312]: I1213 02:19:31.480519 2312 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:19:31.480559 kubelet[2312]: I1213 02:19:31.480553 2312 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:19:31.480856 kubelet[2312]: I1213 02:19:31.480834 2312 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:19:31.537186 kubelet[2312]: E1213 02:19:31.537148 2312 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:31.545050 kubelet[2312]: I1213 02:19:31.545010 2312 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:19:31.562454 kubelet[2312]: I1213 02:19:31.562417 2312 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:19:31.563945 kubelet[2312]: I1213 02:19:31.563911 2312 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:19:31.564331 kubelet[2312]: I1213 02:19:31.564303 2312 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:19:31.565417 kubelet[2312]: I1213 02:19:31.565391 2312 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:19:31.565626 kubelet[2312]: I1213 02:19:31.565419 2312 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:19:31.565741 kubelet[2312]: I1213 02:19:31.565724 2312 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:19:31.565862 kubelet[2312]: I1213 02:19:31.565851 2312 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:19:31.565929 kubelet[2312]: I1213 02:19:31.565875 2312 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:19:31.565929 kubelet[2312]: I1213 02:19:31.565918 2312 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:19:31.566009 kubelet[2312]: I1213 02:19:31.565940 2312 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:19:31.568120 kubelet[2312]: W1213 02:19:31.567817 2312 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.31.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:31.568120 kubelet[2312]: E1213 02:19:31.567910 2312 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:31.568120 kubelet[2312]: W1213 02:19:31.567994 2312 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.31.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-142&limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 
02:19:31.568120 kubelet[2312]: E1213 02:19:31.568033 2312 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-142&limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:31.568364 kubelet[2312]: I1213 02:19:31.568309 2312 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:19:31.582705 kubelet[2312]: I1213 02:19:31.582662 2312 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:19:31.582869 kubelet[2312]: W1213 02:19:31.582767 2312 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:19:31.584000 kubelet[2312]: I1213 02:19:31.583858 2312 server.go:1256] "Started kubelet" Dec 13 02:19:31.596246 kubelet[2312]: I1213 02:19:31.596021 2312 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:19:31.597255 kubelet[2312]: I1213 02:19:31.597224 2312 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:19:31.598838 kubelet[2312]: I1213 02:19:31.598815 2312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:19:31.599466 kubelet[2312]: I1213 02:19:31.599448 2312 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:19:31.603336 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:19:31.603543 kubelet[2312]: I1213 02:19:31.603093 2312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:19:31.609775 kubelet[2312]: E1213 02:19:31.609744 2312 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.142:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.142:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-142.18109b1520c1ad41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-142,UID:ip-172-31-31-142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-142,},FirstTimestamp:2024-12-13 02:19:31.583823169 +0000 UTC m=+0.667626179,LastTimestamp:2024-12-13 02:19:31.583823169 +0000 UTC m=+0.667626179,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-142,}" Dec 13 02:19:31.612737 kubelet[2312]: E1213 02:19:31.612706 2312 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:19:31.612865 kubelet[2312]: I1213 02:19:31.612842 2312 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:19:31.616722 kubelet[2312]: I1213 02:19:31.616693 2312 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:19:31.616858 kubelet[2312]: I1213 02:19:31.616776 2312 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:19:31.617596 kubelet[2312]: E1213 02:19:31.617573 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-142?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused" interval="200ms" Dec 13 02:19:31.617767 kubelet[2312]: I1213 02:19:31.617751 2312 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:19:31.617877 kubelet[2312]: I1213 02:19:31.617857 2312 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:19:31.618463 kubelet[2312]: W1213 02:19:31.618418 2312 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.31.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:31.618539 kubelet[2312]: E1213 02:19:31.618476 2312 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:31.620252 kubelet[2312]: I1213 02:19:31.620221 2312 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:19:31.656489 kubelet[2312]: I1213 02:19:31.656452 2312 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:19:31.658711 kubelet[2312]: I1213 02:19:31.658679 2312 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:19:31.658929 kubelet[2312]: I1213 02:19:31.658915 2312 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:19:31.659034 kubelet[2312]: I1213 02:19:31.659022 2312 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:19:31.659394 kubelet[2312]: E1213 02:19:31.659374 2312 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:19:31.667227 kubelet[2312]: I1213 02:19:31.667196 2312 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:19:31.667227 kubelet[2312]: I1213 02:19:31.667225 2312 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:19:31.667448 kubelet[2312]: I1213 02:19:31.667244 2312 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:19:31.671342 kubelet[2312]: W1213 02:19:31.671253 2312 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.31.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:31.671342 kubelet[2312]: E1213 02:19:31.671326 2312 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:31.708266 kubelet[2312]: I1213 02:19:31.708221 2312 policy_none.go:49] "None policy: Start" Dec 13 02:19:31.709505 kubelet[2312]: I1213 02:19:31.709479 2312 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:19:31.709643 kubelet[2312]: I1213 02:19:31.709517 2312 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:19:31.716335 kubelet[2312]: I1213 02:19:31.716314 2312 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-142" Dec 13 02:19:31.716885 kubelet[2312]: E1213 02:19:31.716868 2312 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.142:6443/api/v1/nodes\": dial tcp 172.31.31.142:6443: connect: connection refused" node="ip-172-31-31-142" Dec 13 02:19:31.738677 systemd[1]: Created slice kubepods.slice. Dec 13 02:19:31.747433 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 02:19:31.753605 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 02:19:31.760030 kubelet[2312]: E1213 02:19:31.759991 2312 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:19:31.760746 kubelet[2312]: I1213 02:19:31.760725 2312 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:19:31.761418 kubelet[2312]: I1213 02:19:31.761383 2312 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:19:31.764184 kubelet[2312]: E1213 02:19:31.764136 2312 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-142\" not found" Dec 13 02:19:31.820092 kubelet[2312]: E1213 02:19:31.819962 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-142?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused" interval="400ms" Dec 13 02:19:31.919358 kubelet[2312]: I1213 02:19:31.919329 2312 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-142" Dec 13 02:19:31.919776 kubelet[2312]: E1213 02:19:31.919753 2312 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.142:6443/api/v1/nodes\": dial tcp 172.31.31.142:6443: connect: connection refused" node="ip-172-31-31-142" Dec 13 02:19:31.961344 kubelet[2312]: I1213 02:19:31.961257 2312 topology_manager.go:215] "Topology Admit Handler" podUID="2615fb9f9188e60bf4daa347a400b27b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-142" Dec 13 02:19:31.965430 kubelet[2312]: I1213 02:19:31.965396 2312 topology_manager.go:215] "Topology Admit Handler" podUID="0a5685fa4334035c9d0f71b5fec042fb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:31.969010 kubelet[2312]: I1213 02:19:31.968987 2312 topology_manager.go:215] "Topology Admit Handler" podUID="cd5148c18ffcc55b39cd22b3119779fb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-142" Dec 13 02:19:31.991373 systemd[1]: Created slice kubepods-burstable-pod2615fb9f9188e60bf4daa347a400b27b.slice. Dec 13 02:19:32.015558 systemd[1]: Created slice kubepods-burstable-pod0a5685fa4334035c9d0f71b5fec042fb.slice. 
Dec 13 02:19:32.019299 kubelet[2312]: I1213 02:19:32.019101 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2615fb9f9188e60bf4daa347a400b27b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-142\" (UID: \"2615fb9f9188e60bf4daa347a400b27b\") " pod="kube-system/kube-apiserver-ip-172-31-31-142" Dec 13 02:19:32.019299 kubelet[2312]: I1213 02:19:32.019242 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:32.019299 kubelet[2312]: I1213 02:19:32.019271 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:32.021463 kubelet[2312]: I1213 02:19:32.019328 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:32.021463 kubelet[2312]: I1213 02:19:32.019359 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:32.021463 kubelet[2312]: I1213 02:19:32.019393 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:32.021463 kubelet[2312]: I1213 02:19:32.019426 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd5148c18ffcc55b39cd22b3119779fb-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-142\" (UID: \"cd5148c18ffcc55b39cd22b3119779fb\") " pod="kube-system/kube-scheduler-ip-172-31-31-142" Dec 13 02:19:32.021463 kubelet[2312]: I1213 02:19:32.019454 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2615fb9f9188e60bf4daa347a400b27b-ca-certs\") pod \"kube-apiserver-ip-172-31-31-142\" (UID: \"2615fb9f9188e60bf4daa347a400b27b\") " pod="kube-system/kube-apiserver-ip-172-31-31-142" Dec 13 02:19:32.021699 kubelet[2312]: I1213 02:19:32.019494 2312 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/2615fb9f9188e60bf4daa347a400b27b-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-142\" (UID: \"2615fb9f9188e60bf4daa347a400b27b\") " pod="kube-system/kube-apiserver-ip-172-31-31-142" Dec 13 02:19:32.045536 systemd[1]: Created slice kubepods-burstable-podcd5148c18ffcc55b39cd22b3119779fb.slice. Dec 13 02:19:32.220784 kubelet[2312]: E1213 02:19:32.220747 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-142?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused" interval="800ms" Dec 13 02:19:32.305129 env[1730]: time="2024-12-13T02:19:32.305076780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-142,Uid:2615fb9f9188e60bf4daa347a400b27b,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:32.321706 kubelet[2312]: I1213 02:19:32.321668 2312 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-142" Dec 13 02:19:32.322044 kubelet[2312]: E1213 02:19:32.322013 2312 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.142:6443/api/v1/nodes\": dial tcp 172.31.31.142:6443: connect: connection refused" node="ip-172-31-31-142" Dec 13 02:19:32.326248 env[1730]: time="2024-12-13T02:19:32.326201172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-142,Uid:0a5685fa4334035c9d0f71b5fec042fb,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:32.350746 env[1730]: time="2024-12-13T02:19:32.350696597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-142,Uid:cd5148c18ffcc55b39cd22b3119779fb,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:32.524178 kubelet[2312]: W1213 02:19:32.524116 2312 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.31.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-142&limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:32.524178 kubelet[2312]: E1213 02:19:32.524174 2312 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.142:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-142&limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:32.563750 kubelet[2312]: W1213 02:19:32.563581 2312 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.31.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:32.563750 kubelet[2312]: E1213 02:19:32.563656 2312 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.142:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:32.699245 kubelet[2312]: W1213 02:19:32.699174 2312 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.31.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:32.699245 kubelet[2312]: E1213 02:19:32.699247 2312 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.142:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:32.734146 kubelet[2312]: W1213 02:19:32.734102 2312 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.31.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:32.734146 kubelet[2312]: E1213 02:19:32.734149 2312 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.142:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:32.824468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955243210.mount: Deactivated successfully. Dec 13 02:19:32.843951 env[1730]: time="2024-12-13T02:19:32.843894658Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.846385 env[1730]: time="2024-12-13T02:19:32.846336690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.858818 env[1730]: time="2024-12-13T02:19:32.858768758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.862491 env[1730]: time="2024-12-13T02:19:32.862445412Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.881654 env[1730]: time="2024-12-13T02:19:32.881595206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.900753 env[1730]: time="2024-12-13T02:19:32.900698877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.902256 env[1730]: time="2024-12-13T02:19:32.901932771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.903782 env[1730]: time="2024-12-13T02:19:32.903745183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.905814 env[1730]: time="2024-12-13T02:19:32.905776435Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.908369 env[1730]: time="2024-12-13T02:19:32.908332301Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 
13 02:19:32.910358 env[1730]: time="2024-12-13T02:19:32.910323632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.914400 env[1730]: time="2024-12-13T02:19:32.914363821Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:19:32.974077 env[1730]: time="2024-12-13T02:19:32.973769699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:32.974077 env[1730]: time="2024-12-13T02:19:32.973824940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:32.974077 env[1730]: time="2024-12-13T02:19:32.973841849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:32.974475 env[1730]: time="2024-12-13T02:19:32.974414391Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/02eaa13575b52123c0e4ebdab43824d642798b100740c923f339ec396029165d pid=2351 runtime=io.containerd.runc.v2 Dec 13 02:19:32.983605 env[1730]: time="2024-12-13T02:19:32.983505764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:32.983769 env[1730]: time="2024-12-13T02:19:32.983577363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:32.983769 env[1730]: time="2024-12-13T02:19:32.983592801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:32.983894 env[1730]: time="2024-12-13T02:19:32.983749266Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86f56d030a9885f5056a928485b904e3922fd62e33450abb9e25bd9e071f0abe pid=2363 runtime=io.containerd.runc.v2 Dec 13 02:19:33.017178 systemd[1]: Started cri-containerd-02eaa13575b52123c0e4ebdab43824d642798b100740c923f339ec396029165d.scope. Dec 13 02:19:33.023272 kubelet[2312]: E1213 02:19:33.021904 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-142?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused" interval="1.6s" Dec 13 02:19:33.032496 env[1730]: time="2024-12-13T02:19:33.032248000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:33.032496 env[1730]: time="2024-12-13T02:19:33.032326032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:33.032496 env[1730]: time="2024-12-13T02:19:33.032341693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:33.032940 env[1730]: time="2024-12-13T02:19:33.032853205Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9204f03ac6cf25536dc852d7fc087ee0cdc5f6c25754c67b69e0e47a011ca7be pid=2389 runtime=io.containerd.runc.v2 Dec 13 02:19:33.054954 systemd[1]: Started cri-containerd-86f56d030a9885f5056a928485b904e3922fd62e33450abb9e25bd9e071f0abe.scope. Dec 13 02:19:33.093819 systemd[1]: Started cri-containerd-9204f03ac6cf25536dc852d7fc087ee0cdc5f6c25754c67b69e0e47a011ca7be.scope. Dec 13 02:19:33.124661 kubelet[2312]: I1213 02:19:33.124621 2312 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-142" Dec 13 02:19:33.125412 kubelet[2312]: E1213 02:19:33.125381 2312 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.142:6443/api/v1/nodes\": dial tcp 172.31.31.142:6443: connect: connection refused" node="ip-172-31-31-142" Dec 13 02:19:33.161891 env[1730]: time="2024-12-13T02:19:33.159430142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-142,Uid:0a5685fa4334035c9d0f71b5fec042fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"02eaa13575b52123c0e4ebdab43824d642798b100740c923f339ec396029165d\"" Dec 13 02:19:33.168717 env[1730]: time="2024-12-13T02:19:33.168667659Z" level=info msg="CreateContainer within sandbox \"02eaa13575b52123c0e4ebdab43824d642798b100740c923f339ec396029165d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:19:33.194193 env[1730]: time="2024-12-13T02:19:33.194139787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-142,Uid:cd5148c18ffcc55b39cd22b3119779fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"86f56d030a9885f5056a928485b904e3922fd62e33450abb9e25bd9e071f0abe\"" Dec 13 02:19:33.202936 env[1730]: time="2024-12-13T02:19:33.202859527Z" level=info msg="CreateContainer within sandbox \"86f56d030a9885f5056a928485b904e3922fd62e33450abb9e25bd9e071f0abe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:19:33.227675 env[1730]: time="2024-12-13T02:19:33.227632766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-142,Uid:2615fb9f9188e60bf4daa347a400b27b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9204f03ac6cf25536dc852d7fc087ee0cdc5f6c25754c67b69e0e47a011ca7be\"" Dec 13 02:19:33.228261 env[1730]: time="2024-12-13T02:19:33.228224824Z" level=info msg="CreateContainer within sandbox \"02eaa13575b52123c0e4ebdab43824d642798b100740c923f339ec396029165d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2\"" Dec 13 02:19:33.229994 env[1730]: time="2024-12-13T02:19:33.229920618Z" level=info msg="StartContainer for \"712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2\"" Dec 13 02:19:33.234957 env[1730]: time="2024-12-13T02:19:33.234923695Z" level=info msg="CreateContainer within sandbox \"9204f03ac6cf25536dc852d7fc087ee0cdc5f6c25754c67b69e0e47a011ca7be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:19:33.241824 env[1730]: time="2024-12-13T02:19:33.241786929Z" level=info msg="CreateContainer within sandbox \"86f56d030a9885f5056a928485b904e3922fd62e33450abb9e25bd9e071f0abe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container 
id \"c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05\"" Dec 13 02:19:33.242841 env[1730]: time="2024-12-13T02:19:33.242816622Z" level=info msg="StartContainer for \"c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05\"" Dec 13 02:19:33.260152 systemd[1]: Started cri-containerd-712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2.scope. Dec 13 02:19:33.270240 env[1730]: time="2024-12-13T02:19:33.270182503Z" level=info msg="CreateContainer within sandbox \"9204f03ac6cf25536dc852d7fc087ee0cdc5f6c25754c67b69e0e47a011ca7be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bc57efc7a8fba3f1e610e8b59fd67323ac21efc9d6c5e71e9b5d0d03909169ea\"" Dec 13 02:19:33.271247 env[1730]: time="2024-12-13T02:19:33.271197457Z" level=info msg="StartContainer for \"bc57efc7a8fba3f1e610e8b59fd67323ac21efc9d6c5e71e9b5d0d03909169ea\"" Dec 13 02:19:33.284803 systemd[1]: Started cri-containerd-c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05.scope. Dec 13 02:19:33.318751 systemd[1]: Started cri-containerd-bc57efc7a8fba3f1e610e8b59fd67323ac21efc9d6c5e71e9b5d0d03909169ea.scope. Dec 13 02:19:33.445370 env[1730]: time="2024-12-13T02:19:33.444248462Z" level=info msg="StartContainer for \"c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05\" returns successfully" Dec 13 02:19:33.450320 env[1730]: time="2024-12-13T02:19:33.448933015Z" level=info msg="StartContainer for \"bc57efc7a8fba3f1e610e8b59fd67323ac21efc9d6c5e71e9b5d0d03909169ea\" returns successfully" Dec 13 02:19:33.456271 env[1730]: time="2024-12-13T02:19:33.456228208Z" level=info msg="StartContainer for \"712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2\" returns successfully" Dec 13 02:19:33.710198 kubelet[2312]: E1213 02:19:33.710096 2312 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.142:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.142:6443: connect: connection refused Dec 13 02:19:34.623329 kubelet[2312]: E1213 02:19:34.623299 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-142?timeout=10s\": dial tcp 172.31.31.142:6443: connect: connection refused" interval="3.2s" Dec 13 02:19:34.727554 kubelet[2312]: I1213 02:19:34.727529 2312 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-142" Dec 13 02:19:34.728189 kubelet[2312]: E1213 02:19:34.728175 2312 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.142:6443/api/v1/nodes\": dial tcp 172.31.31.142:6443: connect: connection refused" node="ip-172-31-31-142" Dec 13 02:19:36.570753 kubelet[2312]: I1213 02:19:36.570691 2312 apiserver.go:52] "Watching apiserver" Dec 13 02:19:36.617745 kubelet[2312]: I1213 02:19:36.617703 2312 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:19:36.891974 kubelet[2312]: E1213 02:19:36.891851 2312 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-31-142" not found Dec 13 02:19:37.281145 kubelet[2312]: E1213 02:19:37.281108 2312 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting 
for the condition; caused by: nodes "ip-172-31-31-142" not found Dec 13 02:19:37.723579 kubelet[2312]: E1213 02:19:37.723439 2312 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-31-142" not found Dec 13 02:19:37.835170 kubelet[2312]: E1213 02:19:37.835136 2312 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-142\" not found" node="ip-172-31-31-142" Dec 13 02:19:37.931330 kubelet[2312]: I1213 02:19:37.931252 2312 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-142" Dec 13 02:19:37.942415 kubelet[2312]: I1213 02:19:37.942381 2312 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-142" Dec 13 02:19:39.543483 update_engine[1725]: I1213 02:19:39.543401 1725 update_attempter.cc:509] Updating boot flags... Dec 13 02:19:39.653109 systemd[1]: Reloading. Dec 13 02:19:39.935894 /usr/lib/systemd/system-generators/torcx-generator[2690]: time="2024-12-13T02:19:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:19:39.935939 /usr/lib/systemd/system-generators/torcx-generator[2690]: time="2024-12-13T02:19:39Z" level=info msg="torcx already run" Dec 13 02:19:40.293193 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:19:40.293219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:19:40.324753 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:19:40.561865 kubelet[2312]: I1213 02:19:40.561347 2312 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:19:40.561471 systemd[1]: Stopping kubelet.service... Dec 13 02:19:40.585884 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:19:40.587175 systemd[1]: Stopped kubelet.service. Dec 13 02:19:40.587250 systemd[1]: kubelet.service: Consumed 1.098s CPU time. Dec 13 02:19:40.591019 systemd[1]: Starting kubelet.service... Dec 13 02:19:41.885979 systemd[1]: Started kubelet.service. Dec 13 02:19:42.027482 kubelet[2842]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:19:42.027482 kubelet[2842]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:19:42.027482 kubelet[2842]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 02:19:42.027482 kubelet[2842]: I1213 02:19:42.026398 2842 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:19:42.044744 kubelet[2842]: I1213 02:19:42.042490 2842 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:19:42.044744 kubelet[2842]: I1213 02:19:42.042529 2842 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:19:42.044744 kubelet[2842]: I1213 02:19:42.043041 2842 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:19:42.056310 kubelet[2842]: I1213 02:19:42.056255 2842 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:19:42.071167 kubelet[2842]: I1213 02:19:42.071005 2842 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:19:42.081967 sudo[2855]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:19:42.082354 sudo[2855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:19:42.108398 kubelet[2842]: I1213 02:19:42.108364 2842 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:19:42.108858 kubelet[2842]: I1213 02:19:42.108842 2842 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:19:42.109488 kubelet[2842]: I1213 02:19:42.109451 2842 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:19:42.109651 kubelet[2842]: I1213 02:19:42.109495 2842 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:19:42.109651 kubelet[2842]: I1213 02:19:42.109511 2842 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:19:42.109651 kubelet[2842]: I1213 02:19:42.109560 2842 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:19:42.110107 kubelet[2842]: I1213 02:19:42.109708 2842 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:19:42.110107 
kubelet[2842]: I1213 02:19:42.109729 2842 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:19:42.110953 kubelet[2842]: I1213 02:19:42.110925 2842 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:19:42.118704 kubelet[2842]: I1213 02:19:42.117314 2842 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:19:42.129615 kubelet[2842]: I1213 02:19:42.129484 2842 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:19:42.130684 kubelet[2842]: I1213 02:19:42.130659 2842 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:19:42.138431 kubelet[2842]: I1213 02:19:42.138353 2842 server.go:1256] "Started kubelet" Dec 13 02:19:42.158523 kubelet[2842]: I1213 02:19:42.158310 2842 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:19:42.159937 kubelet[2842]: I1213 02:19:42.159916 2842 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:19:42.170113 kubelet[2842]: E1213 02:19:42.169732 2842 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:19:42.170961 kubelet[2842]: I1213 02:19:42.170883 2842 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:19:42.171367 kubelet[2842]: I1213 02:19:42.171351 2842 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:19:42.177303 kubelet[2842]: I1213 02:19:42.175516 2842 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:19:42.181011 kubelet[2842]: I1213 02:19:42.180980 2842 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:19:42.181359 kubelet[2842]: I1213 02:19:42.181340 2842 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:19:42.181681 kubelet[2842]: I1213 02:19:42.181663 2842 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:19:42.185168 kubelet[2842]: I1213 02:19:42.184137 2842 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:19:42.187473 kubelet[2842]: I1213 02:19:42.187180 2842 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:19:42.187473 kubelet[2842]: I1213 02:19:42.187199 2842 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:19:42.236013 kubelet[2842]: I1213 02:19:42.231989 2842 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:19:42.242348 kubelet[2842]: I1213 02:19:42.240614 2842 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:19:42.242348 kubelet[2842]: I1213 02:19:42.240648 2842 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:19:42.242348 kubelet[2842]: I1213 02:19:42.240670 2842 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:19:42.242348 kubelet[2842]: E1213 02:19:42.240763 2842 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:19:42.333388 kubelet[2842]: I1213 02:19:42.333357 2842 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:19:42.333388 kubelet[2842]: I1213 02:19:42.333387 2842 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:19:42.333601 kubelet[2842]: I1213 02:19:42.333416 2842 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:19:42.333829 kubelet[2842]: I1213 02:19:42.333615 2842 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:19:42.333924 kubelet[2842]: I1213 02:19:42.333837 2842 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:19:42.333924 kubelet[2842]: I1213 02:19:42.333857 2842 policy_none.go:49] "None policy: Start" Dec 13 02:19:42.335888 kubelet[2842]: I1213 02:19:42.335864 2842 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:19:42.335996 kubelet[2842]: I1213 02:19:42.335899 2842 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:19:42.336962 kubelet[2842]: I1213 02:19:42.336910 2842 state_mem.go:75] "Updated machine memory state" Dec 13 02:19:42.342004 kubelet[2842]: E1213 02:19:42.341912 2842 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:19:42.344601 kubelet[2842]: I1213 02:19:42.344507 2842 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:19:42.351068 kubelet[2842]: I1213 02:19:42.351037 2842 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:19:42.456222 kubelet[2842]: I1213 02:19:42.456144 2842 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-142" Dec 13 02:19:42.478414 kubelet[2842]: I1213 02:19:42.478382 2842 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-31-142" Dec 13 02:19:42.478697 kubelet[2842]: I1213 02:19:42.478686 2842 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-142" Dec 13 02:19:42.543238 kubelet[2842]: I1213 02:19:42.543200 2842 topology_manager.go:215] "Topology Admit Handler" podUID="2615fb9f9188e60bf4daa347a400b27b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-142" Dec 13 02:19:42.543463 kubelet[2842]: I1213 02:19:42.543439 2842 topology_manager.go:215] "Topology Admit Handler" podUID="0a5685fa4334035c9d0f71b5fec042fb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:42.543597 kubelet[2842]: I1213 02:19:42.543516 2842 topology_manager.go:215] "Topology Admit Handler" podUID="cd5148c18ffcc55b39cd22b3119779fb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-142" Dec 13 02:19:42.592902 kubelet[2842]: I1213 02:19:42.592866 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2615fb9f9188e60bf4daa347a400b27b-ca-certs\") pod \"kube-apiserver-ip-172-31-31-142\" (UID: \"2615fb9f9188e60bf4daa347a400b27b\") " 
pod="kube-system/kube-apiserver-ip-172-31-31-142" Dec 13 02:19:42.593130 kubelet[2842]: I1213 02:19:42.593115 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2615fb9f9188e60bf4daa347a400b27b-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-142\" (UID: \"2615fb9f9188e60bf4daa347a400b27b\") " pod="kube-system/kube-apiserver-ip-172-31-31-142" Dec 13 02:19:42.593407 kubelet[2842]: I1213 02:19:42.593379 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2615fb9f9188e60bf4daa347a400b27b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-142\" (UID: \"2615fb9f9188e60bf4daa347a400b27b\") " pod="kube-system/kube-apiserver-ip-172-31-31-142" Dec 13 02:19:42.593512 kubelet[2842]: I1213 02:19:42.593430 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:42.593512 kubelet[2842]: I1213 02:19:42.593460 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd5148c18ffcc55b39cd22b3119779fb-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-142\" (UID: \"cd5148c18ffcc55b39cd22b3119779fb\") " pod="kube-system/kube-scheduler-ip-172-31-31-142" Dec 13 02:19:42.593512 kubelet[2842]: I1213 02:19:42.593488 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:42.593512 kubelet[2842]: I1213 02:19:42.593520 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:42.595450 kubelet[2842]: I1213 02:19:42.593548 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:42.595450 kubelet[2842]: I1213 02:19:42.594305 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a5685fa4334035c9d0f71b5fec042fb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-142\" (UID: \"0a5685fa4334035c9d0f71b5fec042fb\") " pod="kube-system/kube-controller-manager-ip-172-31-31-142" Dec 13 02:19:43.118453 kubelet[2842]: I1213 02:19:43.118389 2842 apiserver.go:52] "Watching apiserver" Dec 13 02:19:43.172633 sudo[2855]: pam_unix(sudo:session): session closed 
for user root Dec 13 02:19:43.182348 kubelet[2842]: I1213 02:19:43.182195 2842 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:19:43.293145 kubelet[2842]: E1213 02:19:43.292233 2842 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-142\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-142" Dec 13 02:19:43.358610 kubelet[2842]: I1213 02:19:43.358558 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-142" podStartSLOduration=1.3584895129999999 podStartE2EDuration="1.358489513s" podCreationTimestamp="2024-12-13 02:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:19:43.332136306 +0000 UTC m=+1.418509185" watchObservedRunningTime="2024-12-13 02:19:43.358489513 +0000 UTC m=+1.444862381" Dec 13 02:19:43.378853 kubelet[2842]: I1213 02:19:43.377857 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-142" podStartSLOduration=1.377783091 podStartE2EDuration="1.377783091s" podCreationTimestamp="2024-12-13 02:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:19:43.358921577 +0000 UTC m=+1.445294454" watchObservedRunningTime="2024-12-13 02:19:43.377783091 +0000 UTC m=+1.464155971" Dec 13 02:19:43.378853 kubelet[2842]: I1213 02:19:43.378132 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-142" podStartSLOduration=1.378084287 podStartE2EDuration="1.378084287s" podCreationTimestamp="2024-12-13 02:19:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:19:43.375680001 +0000 UTC m=+1.462052879" watchObservedRunningTime="2024-12-13 02:19:43.378084287 +0000 UTC m=+1.464457164" Dec 13 02:19:45.382741 sudo[1979]: pam_unix(sudo:session): session closed for user root Dec 13 02:19:45.406419 sshd[1976]: pam_unix(sshd:session): session closed for user core Dec 13 02:19:45.421882 systemd[1]: sshd@4-172.31.31.142:22-139.178.68.195:51998.service: Deactivated successfully. Dec 13 02:19:45.423012 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:19:45.423207 systemd[1]: session-5.scope: Consumed 4.367s CPU time. Dec 13 02:19:45.426356 systemd-logind[1724]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:19:45.432785 systemd-logind[1724]: Removed session 5. Dec 13 02:19:52.950567 kubelet[2842]: I1213 02:19:52.950537 2842 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:19:52.951196 env[1730]: time="2024-12-13T02:19:52.951040063Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
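
The three pod_startup_latency_tracker entries above show how podStartE2EDuration is derived: for these static control-plane pods the firstStartedPulling/lastFinishedPulling fields stay at Go's zero time (0001-01-01) because the images were already on disk, so the duration is simply observedRunningTime minus podCreationTimestamp. A minimal sketch of that arithmetic, assuming the timestamp layout printed in these entries:

    package main

    import (
        "fmt"
        "time"
    )

    // Layout matching the kubelet's rendering above, e.g.
    // "2024-12-13 02:19:42 +0000 UTC" (fractional seconds are optional).
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func main() {
        created, _ := time.Parse(layout, "2024-12-13 02:19:42 +0000 UTC")
        running, _ := time.Parse(layout, "2024-12-13 02:19:43.358489513 +0000 UTC")
        fmt.Println(running.Sub(created)) // 1.358489513s, matching podStartE2EDuration
    }
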
Dec 13 02:19:52.951527 kubelet[2842]: I1213 02:19:52.951302 2842 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:19:53.818542 kubelet[2842]: I1213 02:19:53.818506 2842 topology_manager.go:215] "Topology Admit Handler" podUID="5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d" podNamespace="kube-system" podName="kube-proxy-2tnx5" Dec 13 02:19:53.821267 kubelet[2842]: I1213 02:19:53.821239 2842 topology_manager.go:215] "Topology Admit Handler" podUID="6f7ecbca-520f-4e94-8257-a269bd155f93" podNamespace="kube-system" podName="cilium-drwmr" Dec 13 02:19:53.830402 systemd[1]: Created slice kubepods-besteffort-pod5a13cf76_3ddb_4d10_a2cb_2d6f6dc1683d.slice. Dec 13 02:19:53.844535 systemd[1]: Created slice kubepods-burstable-pod6f7ecbca_520f_4e94_8257_a269bd155f93.slice. Dec 13 02:19:53.998188 kubelet[2842]: I1213 02:19:53.998041 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-xtables-lock\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.000641 kubelet[2842]: I1213 02:19:54.000010 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-config-path\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.000765 kubelet[2842]: I1213 02:19:54.000683 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-hostproc\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.000765 kubelet[2842]: I1213 02:19:54.000722 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f7ecbca-520f-4e94-8257-a269bd155f93-clustermesh-secrets\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.000765 kubelet[2842]: I1213 02:19:54.000751 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-host-proc-sys-kernel\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.000911 kubelet[2842]: I1213 02:19:54.000780 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qd8c\" (UniqueName: \"kubernetes.io/projected/6f7ecbca-520f-4e94-8257-a269bd155f93-kube-api-access-6qd8c\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.000911 kubelet[2842]: I1213 02:19:54.000814 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d-lib-modules\") pod \"kube-proxy-2tnx5\" (UID: \"5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d\") " pod="kube-system/kube-proxy-2tnx5" Dec 13 02:19:54.000911 kubelet[2842]: I1213 02:19:54.000845 2842 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d-xtables-lock\") pod \"kube-proxy-2tnx5\" (UID: \"5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d\") " pod="kube-system/kube-proxy-2tnx5" Dec 13 02:19:54.000911 kubelet[2842]: I1213 02:19:54.000875 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-cgroup\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.000911 kubelet[2842]: I1213 02:19:54.000910 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d-kube-proxy\") pod \"kube-proxy-2tnx5\" (UID: \"5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d\") " pod="kube-system/kube-proxy-2tnx5" Dec 13 02:19:54.001122 kubelet[2842]: I1213 02:19:54.000963 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-bpf-maps\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.001122 kubelet[2842]: I1213 02:19:54.001014 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cni-path\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.001122 kubelet[2842]: I1213 02:19:54.001047 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-etc-cni-netd\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.001122 kubelet[2842]: I1213 02:19:54.001082 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f7ecbca-520f-4e94-8257-a269bd155f93-hubble-tls\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.001122 kubelet[2842]: I1213 02:19:54.001120 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw4pw\" (UniqueName: \"kubernetes.io/projected/5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d-kube-api-access-tw4pw\") pod \"kube-proxy-2tnx5\" (UID: \"5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d\") " pod="kube-system/kube-proxy-2tnx5" Dec 13 02:19:54.001345 kubelet[2842]: I1213 02:19:54.001151 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-run\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.001345 kubelet[2842]: I1213 02:19:54.001181 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-lib-modules\") pod \"cilium-drwmr\" 
(UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.001345 kubelet[2842]: I1213 02:19:54.001214 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-host-proc-sys-net\") pod \"cilium-drwmr\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") " pod="kube-system/cilium-drwmr" Dec 13 02:19:54.097771 kubelet[2842]: I1213 02:19:54.097659 2842 topology_manager.go:215] "Topology Admit Handler" podUID="7d4aa90b-a46f-4974-965e-6aca52f93915" podNamespace="kube-system" podName="cilium-operator-5cc964979-qfzdt" Dec 13 02:19:54.113976 systemd[1]: Created slice kubepods-besteffort-pod7d4aa90b_a46f_4974_965e_6aca52f93915.slice. Dec 13 02:19:54.202198 kubelet[2842]: I1213 02:19:54.202162 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d4aa90b-a46f-4974-965e-6aca52f93915-cilium-config-path\") pod \"cilium-operator-5cc964979-qfzdt\" (UID: \"7d4aa90b-a46f-4974-965e-6aca52f93915\") " pod="kube-system/cilium-operator-5cc964979-qfzdt" Dec 13 02:19:54.202484 kubelet[2842]: I1213 02:19:54.202467 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwx76\" (UniqueName: \"kubernetes.io/projected/7d4aa90b-a46f-4974-965e-6aca52f93915-kube-api-access-nwx76\") pod \"cilium-operator-5cc964979-qfzdt\" (UID: \"7d4aa90b-a46f-4974-965e-6aca52f93915\") " pod="kube-system/cilium-operator-5cc964979-qfzdt" Dec 13 02:19:54.418383 env[1730]: time="2024-12-13T02:19:54.418236858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qfzdt,Uid:7d4aa90b-a46f-4974-965e-6aca52f93915,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:54.440522 env[1730]: time="2024-12-13T02:19:54.440186334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2tnx5,Uid:5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:54.451689 env[1730]: time="2024-12-13T02:19:54.451602894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drwmr,Uid:6f7ecbca-520f-4e94-8257-a269bd155f93,Namespace:kube-system,Attempt:0,}" Dec 13 02:19:54.466531 env[1730]: time="2024-12-13T02:19:54.466442727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:54.466708 env[1730]: time="2024-12-13T02:19:54.466557955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:54.466708 env[1730]: time="2024-12-13T02:19:54.466589016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:54.467774 env[1730]: time="2024-12-13T02:19:54.467305722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f pid=2926 runtime=io.containerd.runc.v2 Dec 13 02:19:54.491398 systemd[1]: Started cri-containerd-f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f.scope. Dec 13 02:19:54.536045 env[1730]: time="2024-12-13T02:19:54.535959159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:54.536224 env[1730]: time="2024-12-13T02:19:54.536058683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:54.536224 env[1730]: time="2024-12-13T02:19:54.536087576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:54.536449 env[1730]: time="2024-12-13T02:19:54.536368546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4360defa49aec7175052efab6b02ec344f5c410e8361c730807c34344317cc4 pid=2968 runtime=io.containerd.runc.v2 Dec 13 02:19:54.547095 env[1730]: time="2024-12-13T02:19:54.546811196Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:19:54.547294 env[1730]: time="2024-12-13T02:19:54.547138350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:19:54.547294 env[1730]: time="2024-12-13T02:19:54.547226600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:19:54.552746 env[1730]: time="2024-12-13T02:19:54.552663951Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a pid=2981 runtime=io.containerd.runc.v2 Dec 13 02:19:54.588068 systemd[1]: Started cri-containerd-c4360defa49aec7175052efab6b02ec344f5c410e8361c730807c34344317cc4.scope. Dec 13 02:19:54.605850 systemd[1]: Started cri-containerd-e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a.scope. 
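
Each "starting signal loop" line above is a containerd runc v2 shim taking ownership of a freshly created pod sandbox, and systemd wraps every shim in a cri-containerd-<id>.scope unit so its cgroup lifetime shows up in this journal. A sketch of enumerating those containers with the containerd v1 Go client; the socket path and the k8s.io namespace are taken from the log, while the client library itself is an assumption about available tooling:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed containers live in the k8s.io namespace, as the
        // namespace=k8s.io fields above show.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            fmt.Println(c.ID())
        }
    }
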
Dec 13 02:19:54.625386 env[1730]: time="2024-12-13T02:19:54.625331290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-qfzdt,Uid:7d4aa90b-a46f-4974-965e-6aca52f93915,Namespace:kube-system,Attempt:0,} returns sandbox id \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\"" Dec 13 02:19:54.628736 env[1730]: time="2024-12-13T02:19:54.628550424Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:19:54.659171 env[1730]: time="2024-12-13T02:19:54.659118951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-drwmr,Uid:6f7ecbca-520f-4e94-8257-a269bd155f93,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\"" Dec 13 02:19:54.671764 env[1730]: time="2024-12-13T02:19:54.671646332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2tnx5,Uid:5a13cf76-3ddb-4d10-a2cb-2d6f6dc1683d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4360defa49aec7175052efab6b02ec344f5c410e8361c730807c34344317cc4\"" Dec 13 02:19:54.677481 env[1730]: time="2024-12-13T02:19:54.677447649Z" level=info msg="CreateContainer within sandbox \"c4360defa49aec7175052efab6b02ec344f5c410e8361c730807c34344317cc4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:19:54.718786 env[1730]: time="2024-12-13T02:19:54.718734302Z" level=info msg="CreateContainer within sandbox \"c4360defa49aec7175052efab6b02ec344f5c410e8361c730807c34344317cc4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d44f2da5bbbdee2d8b49465ea0e76065aab417e1c813f42516cad7a7de9a1a08\"" Dec 13 02:19:54.720788 env[1730]: time="2024-12-13T02:19:54.719627187Z" level=info msg="StartContainer for \"d44f2da5bbbdee2d8b49465ea0e76065aab417e1c813f42516cad7a7de9a1a08\"" Dec 13 02:19:54.744122 systemd[1]: Started cri-containerd-d44f2da5bbbdee2d8b49465ea0e76065aab417e1c813f42516cad7a7de9a1a08.scope. 
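
As d44f2da5... starts, kube-proxy begins programming iptables on the node using the 192.168.0.0/24 pod CIDR pushed through the CRI a few seconds earlier. A quick way to confirm it is serving is its health endpoint; the sketch below assumes kube-proxy's default healthz bind of 0.0.0.0:10256, which this journal does not itself confirm:

    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func main() {
        // 10256 is the upstream default for --healthz-bind-address; an
        // assumption, since the flag is not visible in this journal.
        resp, err := http.Get("http://127.0.0.1:10256/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }
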
Dec 13 02:19:54.781942 env[1730]: time="2024-12-13T02:19:54.781889666Z" level=info msg="StartContainer for \"d44f2da5bbbdee2d8b49465ea0e76065aab417e1c813f42516cad7a7de9a1a08\" returns successfully" Dec 13 02:19:55.669739 env[1730]: time="2024-12-13T02:19:55.669351548Z" level=error msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" failed" error="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T021955Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=56fa20c2f4a0ddeba900ff224c30d17be448e77746171408ba33fb81eed37b55&cf_sign=B7Yz3kDclffKdQkJn9AL1cFkWX2c%2BdvXkqFmC8Y0BOcKWRwSNdnxBAuNSxVCydYxbRgubJqcG3mXB5Bk6Matjw%2BZtSKqMsAiwjdCTMIHxtOrXEt0wMe5ziEgTOJSu82rSUcddQHmwzxF%2FRYigGhayLruLJ2Kvnzii%2Bt29LKPZ48ZqCEmJAeeL9QdQPIxzIKuEEL382F%2BBvem8tu1jqyUTDWYhLYcpoGUc7xbeSfagG%2BqaowLNHa37FEbS0usF2szNDfcbyt%2F%2BUCI%2B1%2FyXnMK74SGDxvyo0s2RKIIiq0ezq1Fk%2B7cH9wpWA13PSB0yQDfOhRI2w8IxiGRNq0quHzcVg%3D%3D&cf_expiry=1734056995&region=us-east-1&namespace=cilium&repo_name=operator-generic\": dial tcp: lookup cdn03.quay.io: no such host" Dec 13 02:19:55.670754 kubelet[2842]: E1213 02:19:55.670148 2842 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T021955Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=56fa20c2f4a0ddeba900ff224c30d17be448e77746171408ba33fb81eed37b55&cf_sign=B7Yz3kDclffKdQkJn9AL1cFkWX2c%2BdvXkqFmC8Y0BOcKWRwSNdnxBAuNSxVCydYxbRgubJqcG3mXB5Bk6Matjw%2BZtSKqMsAiwjdCTMIHxtOrXEt0wMe5ziEgTOJSu82rSUcddQHmwzxF%2FRYigGhayLruLJ2Kvnzii%2Bt29LKPZ48ZqCEmJAeeL9QdQPIxzIKuEEL382F%2BBvem8tu1jqyUTDWYhLYcpoGUc7xbeSfagG%2BqaowLNHa37FEbS0usF2szNDfcbyt%2F%2BUCI%2B1%2FyXnMK74SGDxvyo0s2RKIIiq0ezq1Fk%2B7cH9wpWA13PSB0yQDfOhRI2w8IxiGRNq0quHzcVg%3D%3D&cf_expiry=1734056995&region=us-east-1&namespace=cilium&repo_name=operator-generic\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Dec 13 02:19:55.671488 kubelet[2842]: E1213 02:19:55.670662 2842 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\": failed to copy: httpReadSeeker: failed open: failed to do request: Get
\"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T021955Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=56fa20c2f4a0ddeba900ff224c30d17be448e77746171408ba33fb81eed37b55&cf_sign=B7Yz3kDclffKdQkJn9AL1cFkWX2c%2BdvXkqFmC8Y0BOcKWRwSNdnxBAuNSxVCydYxbRgubJqcG3mXB5Bk6Matjw%2BZtSKqMsAiwjdCTMIHxtOrXEt0wMe5ziEgTOJSu82rSUcddQHmwzxF%2FRYigGhayLruLJ2Kvnzii%2Bt29LKPZ48ZqCEmJAeeL9QdQPIxzIKuEEL382F%2BBvem8tu1jqyUTDWYhLYcpoGUc7xbeSfagG%2BqaowLNHa37FEbS0usF2szNDfcbyt%2F%2BUCI%2B1%2FyXnMK74SGDxvyo0s2RKIIiq0ezq1Fk%2B7cH9wpWA13PSB0yQDfOhRI2w8IxiGRNq0quHzcVg%3D%3D&cf_expiry=1734056995&region=us-east-1&namespace=cilium&repo_name=operator-generic\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e" Dec 13 02:19:55.674429 kubelet[2842]: E1213 02:19:55.674384 2842 kuberuntime_manager.go:1262] container &Container{Name:cilium-operator,Image:quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Command:[cilium-operator-generic],Args:[--config-dir=/tmp/cilium/config-map --debug=$(CILIUM_DEBUG)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:K8S_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_K8S_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CILIUM_DEBUG,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:&ConfigMapKeySelector{LocalObjectReference:LocalObjectReference{Name:cilium-config,},Key:debug,Optional:*true,},SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cilium-config-path,ReadOnly:true,MountPath:/tmp/cilium/config-map,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-nwx76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 9234 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:3,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-operator-5cc964979-qfzdt_kube-system(7d4aa90b-a46f-4974-965e-6aca52f93915): ErrImagePull: failed to pull and unpack image "quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e": failed to copy: httpReadSeeker: failed open: failed to do request: Get
"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T021955Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=56fa20c2f4a0ddeba900ff224c30d17be448e77746171408ba33fb81eed37b55&cf_sign=B7Yz3kDclffKdQkJn9AL1cFkWX2c%2BdvXkqFmC8Y0BOcKWRwSNdnxBAuNSxVCydYxbRgubJqcG3mXB5Bk6Matjw%2BZtSKqMsAiwjdCTMIHxtOrXEt0wMe5ziEgTOJSu82rSUcddQHmwzxF%2FRYigGhayLruLJ2Kvnzii%2Bt29LKPZ48ZqCEmJAeeL9QdQPIxzIKuEEL382F%2BBvem8tu1jqyUTDWYhLYcpoGUc7xbeSfagG%2BqaowLNHa37FEbS0usF2szNDfcbyt%2F%2BUCI%2B1%2FyXnMK74SGDxvyo0s2RKIIiq0ezq1Fk%2B7cH9wpWA13PSB0yQDfOhRI2w8IxiGRNq0quHzcVg%3D%3D&cf_expiry=1734056995&region=us-east-1&namespace=cilium&repo_name=operator-generic": dial tcp: lookup cdn03.quay.io: no such host Dec 13 02:19:55.674694 kubelet[2842]: E1213 02:19:55.674585 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \\\"https://cdn03.quay.io/quayio-production-s3/sha256/ed/ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20241213%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20241213T021955Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=56fa20c2f4a0ddeba900ff224c30d17be448e77746171408ba33fb81eed37b55&cf_sign=B7Yz3kDclffKdQkJn9AL1cFkWX2c%2BdvXkqFmC8Y0BOcKWRwSNdnxBAuNSxVCydYxbRgubJqcG3mXB5Bk6Matjw%2BZtSKqMsAiwjdCTMIHxtOrXEt0wMe5ziEgTOJSu82rSUcddQHmwzxF%2FRYigGhayLruLJ2Kvnzii%2Bt29LKPZ48ZqCEmJAeeL9QdQPIxzIKuEEL382F%2BBvem8tu1jqyUTDWYhLYcpoGUc7xbeSfagG%2BqaowLNHa37FEbS0usF2szNDfcbyt%2F%2BUCI%2B1%2FyXnMK74SGDxvyo0s2RKIIiq0ezq1Fk%2B7cH9wpWA13PSB0yQDfOhRI2w8IxiGRNq0quHzcVg%3D%3D&cf_expiry=1734056995&region=us-east-1&namespace=cilium&repo_name=operator-generic\\\": dial tcp: lookup cdn03.quay.io: no such host\"" pod="kube-system/cilium-operator-5cc964979-qfzdt" podUID="7d4aa90b-a46f-4974-965e-6aca52f93915" Dec 13 02:19:55.676109 env[1730]: time="2024-12-13T02:19:55.676057057Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:19:56.340748 kubelet[2842]: E1213 02:19:56.340719 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cilium-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\\\"\"" pod="kube-system/cilium-operator-5cc964979-qfzdt" podUID="7d4aa90b-a46f-4974-965e-6aca52f93915" Dec 13 02:19:56.352485 kubelet[2842]: I1213 02:19:56.352432 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2tnx5" podStartSLOduration=3.352156214 podStartE2EDuration="3.352156214s" podCreationTimestamp="2024-12-13 02:19:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:19:55.355197877 +0000 UTC m=+13.441570755" watchObservedRunningTime="2024-12-13 02:19:56.352156214 +0000 UTC m=+14.438529090" Dec 13 02:20:07.325876 systemd[1]:
var-lib-containerd-tmpmounts-containerd\x2dmount2380917132.mount: Deactivated successfully. Dec 13 02:20:12.024434 env[1730]: time="2024-12-13T02:20:12.024382022Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:20:12.029233 env[1730]: time="2024-12-13T02:20:12.029177620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:20:12.031871 env[1730]: time="2024-12-13T02:20:12.031831040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:20:12.032439 env[1730]: time="2024-12-13T02:20:12.032400886Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:20:12.035234 env[1730]: time="2024-12-13T02:20:12.033847031Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:20:12.036560 env[1730]: time="2024-12-13T02:20:12.036529366Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:20:12.064458 env[1730]: time="2024-12-13T02:20:12.064397583Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\"" Dec 13 02:20:12.065577 env[1730]: time="2024-12-13T02:20:12.065536684Z" level=info msg="StartContainer for \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\"" Dec 13 02:20:12.097214 systemd[1]: Started cri-containerd-c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f.scope. Dec 13 02:20:12.137030 env[1730]: time="2024-12-13T02:20:12.136977362Z" level=info msg="StartContainer for \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\" returns successfully" Dec 13 02:20:12.152667 systemd[1]: cri-containerd-c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f.scope: Deactivated successfully. 
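
The operator pull failure recorded at 02:19:55 bottoms out in "dial tcp: lookup cdn03.quay.io: no such host": plain name-resolution breakage on the node rather than a registry or credential problem, which is why the cilium image pull above succeeds once DNS recovers. The failing step can be reproduced in isolation with the stdlib resolver:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        addrs, err := net.LookupHost("cdn03.quay.io")
        if err != nil {
            // For "no such host" this is a *net.DNSError with IsNotFound
            // set, the same text embedded in the kubelet errors above.
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println(addrs)
    }
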
Dec 13 02:20:12.563737 env[1730]: time="2024-12-13T02:20:12.563661944Z" level=info msg="shim disconnected" id=c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f Dec 13 02:20:12.563737 env[1730]: time="2024-12-13T02:20:12.563732303Z" level=warning msg="cleaning up after shim disconnected" id=c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f namespace=k8s.io Dec 13 02:20:12.564056 env[1730]: time="2024-12-13T02:20:12.563745583Z" level=info msg="cleaning up dead shim" Dec 13 02:20:12.594623 env[1730]: time="2024-12-13T02:20:12.594550380Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3253 runtime=io.containerd.runc.v2\n" Dec 13 02:20:13.058456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f-rootfs.mount: Deactivated successfully. Dec 13 02:20:13.502004 env[1730]: time="2024-12-13T02:20:13.501605831Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:20:13.536349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2715374419.mount: Deactivated successfully. Dec 13 02:20:13.550800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055038953.mount: Deactivated successfully. Dec 13 02:20:13.555232 env[1730]: time="2024-12-13T02:20:13.555194282Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\"" Dec 13 02:20:13.558155 env[1730]: time="2024-12-13T02:20:13.558122037Z" level=info msg="StartContainer for \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\"" Dec 13 02:20:13.581814 systemd[1]: Started cri-containerd-e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9.scope. Dec 13 02:20:13.633036 env[1730]: time="2024-12-13T02:20:13.632985837Z" level=info msg="StartContainer for \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\" returns successfully" Dec 13 02:20:13.653861 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:20:13.654399 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:20:13.656647 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:20:13.661808 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:20:13.664756 systemd[1]: cri-containerd-e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9.scope: Deactivated successfully. Dec 13 02:20:13.691127 systemd[1]: Finished systemd-sysctl.service. 
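
The "shim disconnected" / "cleaning up after shim disconnected" / "cleaning up dead shim" trio above is containerd's normal teardown once a short-lived container such as mount-cgroup exits; despite the warning level it does not indicate a crash. When sifting many of these, splitting the logfmt-style fields is usually enough. A small sketch, assuming the time=/level=/msg= layout used throughout this journal (it ignores escaped quotes inside msg):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches entries like: time="..." level=warning msg="cleaning up ..."
    var entry = regexp.MustCompile(`time="([^"]+)" level=(\w+) msg="([^"]*)"`)

    func main() {
        line := `time="2024-12-13T02:20:12.563661944Z" level=info msg="shim disconnected" id=c276c01218d5`
        if m := entry.FindStringSubmatch(line); m != nil {
            fmt.Printf("ts=%s level=%s msg=%q\n", m[1], m[2], m[3])
        }
    }
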
Dec 13 02:20:13.724120 env[1730]: time="2024-12-13T02:20:13.724061906Z" level=info msg="shim disconnected" id=e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9 Dec 13 02:20:13.724120 env[1730]: time="2024-12-13T02:20:13.724116983Z" level=warning msg="cleaning up after shim disconnected" id=e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9 namespace=k8s.io Dec 13 02:20:13.724998 env[1730]: time="2024-12-13T02:20:13.724130003Z" level=info msg="cleaning up dead shim" Dec 13 02:20:13.735609 env[1730]: time="2024-12-13T02:20:13.735564215Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3319 runtime=io.containerd.runc.v2\n" Dec 13 02:20:14.526450 env[1730]: time="2024-12-13T02:20:14.519816078Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:20:14.556625 env[1730]: time="2024-12-13T02:20:14.556575404Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\"" Dec 13 02:20:14.558104 env[1730]: time="2024-12-13T02:20:14.557479573Z" level=info msg="StartContainer for \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\"" Dec 13 02:20:14.596427 systemd[1]: Started cri-containerd-ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018.scope. Dec 13 02:20:14.646145 env[1730]: time="2024-12-13T02:20:14.646106454Z" level=info msg="StartContainer for \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\" returns successfully" Dec 13 02:20:14.653492 systemd[1]: cri-containerd-ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018.scope: Deactivated successfully. Dec 13 02:20:14.698258 env[1730]: time="2024-12-13T02:20:14.698204031Z" level=info msg="shim disconnected" id=ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018 Dec 13 02:20:14.698548 env[1730]: time="2024-12-13T02:20:14.698289746Z" level=warning msg="cleaning up after shim disconnected" id=ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018 namespace=k8s.io Dec 13 02:20:14.698548 env[1730]: time="2024-12-13T02:20:14.698304772Z" level=info msg="cleaning up dead shim" Dec 13 02:20:14.733517 env[1730]: time="2024-12-13T02:20:14.733463827Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3381 runtime=io.containerd.runc.v2\n" Dec 13 02:20:15.060246 systemd[1]: run-containerd-runc-k8s.io-ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018-runc.6rBnKP.mount: Deactivated successfully. Dec 13 02:20:15.060771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018-rootfs.mount: Deactivated successfully. 
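
The apply-sysctl-overwrites init container exists because Cilium depends on kernel parameters (rp_filter and related settings) that distribution defaults can override; note how Flatcar immediately stops and re-runs systemd-sysctl.service afterwards so the host's own policy is reconciled on top. Reading a current value is a one-line procfs access; the parameter named below is an illustrative example, not one this log shows being changed:

    package main

    import (
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        // net.ipv4.conf.all.rp_filter is one of the sysctls Cilium commonly
        // adjusts; chosen here purely for illustration.
        b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/rp_filter")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("rp_filter =", strings.TrimSpace(string(b)))
    }
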
Dec 13 02:20:15.523363 env[1730]: time="2024-12-13T02:20:15.523058467Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:20:15.553904 env[1730]: time="2024-12-13T02:20:15.553851994Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\"" Dec 13 02:20:15.556600 env[1730]: time="2024-12-13T02:20:15.556560918Z" level=info msg="StartContainer for \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\"" Dec 13 02:20:15.599762 systemd[1]: Started cri-containerd-80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8.scope. Dec 13 02:20:15.633658 systemd[1]: cri-containerd-80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8.scope: Deactivated successfully. Dec 13 02:20:15.644134 env[1730]: time="2024-12-13T02:20:15.644072659Z" level=info msg="StartContainer for \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\" returns successfully" Dec 13 02:20:15.685987 env[1730]: time="2024-12-13T02:20:15.685929929Z" level=info msg="shim disconnected" id=80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8 Dec 13 02:20:15.685987 env[1730]: time="2024-12-13T02:20:15.685988739Z" level=warning msg="cleaning up after shim disconnected" id=80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8 namespace=k8s.io Dec 13 02:20:15.685987 env[1730]: time="2024-12-13T02:20:15.686095702Z" level=info msg="cleaning up dead shim" Dec 13 02:20:15.696170 env[1730]: time="2024-12-13T02:20:15.696122066Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:20:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3437 runtime=io.containerd.runc.v2\n" Dec 13 02:20:16.059208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8-rootfs.mount: Deactivated successfully. Dec 13 02:20:16.529492 env[1730]: time="2024-12-13T02:20:16.529447179Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:20:16.566792 env[1730]: time="2024-12-13T02:20:16.566738392Z" level=info msg="CreateContainer within sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\"" Dec 13 02:20:16.569099 env[1730]: time="2024-12-13T02:20:16.567672058Z" level=info msg="StartContainer for \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\"" Dec 13 02:20:16.601058 systemd[1]: Started cri-containerd-0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7.scope. 
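
mount-bpf-fs only has work to do when /sys/fs/bpf is not already a bpffs mount, which is why it exits almost immediately here (its scope is even deactivated before the StartContainer response is logged; container exit and CRI bookkeeping race harmlessly). Checking that condition from Go means scanning /proc/mounts, a sketch:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // /proc/mounts fields: device mountpoint fstype options dump pass
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
                fmt.Println("bpffs already mounted at /sys/fs/bpf")
                return
            }
        }
        fmt.Println("bpffs not mounted")
    }
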
Dec 13 02:20:16.645428 env[1730]: time="2024-12-13T02:20:16.645345142Z" level=info msg="StartContainer for \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\" returns successfully" Dec 13 02:20:16.902734 kubelet[2842]: I1213 02:20:16.902593 2842 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:20:16.941154 kubelet[2842]: I1213 02:20:16.941120 2842 topology_manager.go:215] "Topology Admit Handler" podUID="de6ab9c1-077a-4e36-924d-e71f18537aab" podNamespace="kube-system" podName="coredns-76f75df574-b5t8g" Dec 13 02:20:16.946641 kubelet[2842]: I1213 02:20:16.946613 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kpl2\" (UniqueName: \"kubernetes.io/projected/de6ab9c1-077a-4e36-924d-e71f18537aab-kube-api-access-9kpl2\") pod \"coredns-76f75df574-b5t8g\" (UID: \"de6ab9c1-077a-4e36-924d-e71f18537aab\") " pod="kube-system/coredns-76f75df574-b5t8g" Dec 13 02:20:16.951454 kubelet[2842]: I1213 02:20:16.946811 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de6ab9c1-077a-4e36-924d-e71f18537aab-config-volume\") pod \"coredns-76f75df574-b5t8g\" (UID: \"de6ab9c1-077a-4e36-924d-e71f18537aab\") " pod="kube-system/coredns-76f75df574-b5t8g" Dec 13 02:20:16.951454 kubelet[2842]: I1213 02:20:16.947593 2842 topology_manager.go:215] "Topology Admit Handler" podUID="afcf2e54-935b-473b-8c12-cd27627384f1" podNamespace="kube-system" podName="coredns-76f75df574-xv5gh" Dec 13 02:20:16.961107 systemd[1]: Created slice kubepods-burstable-podde6ab9c1_077a_4e36_924d_e71f18537aab.slice. Dec 13 02:20:16.982897 systemd[1]: Created slice kubepods-burstable-podafcf2e54_935b_473b_8c12_cd27627384f1.slice. Dec 13 02:20:17.047547 kubelet[2842]: I1213 02:20:17.047475 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgbc\" (UniqueName: \"kubernetes.io/projected/afcf2e54-935b-473b-8c12-cd27627384f1-kube-api-access-8fgbc\") pod \"coredns-76f75df574-xv5gh\" (UID: \"afcf2e54-935b-473b-8c12-cd27627384f1\") " pod="kube-system/coredns-76f75df574-xv5gh" Dec 13 02:20:17.047920 kubelet[2842]: I1213 02:20:17.047900 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afcf2e54-935b-473b-8c12-cd27627384f1-config-volume\") pod \"coredns-76f75df574-xv5gh\" (UID: \"afcf2e54-935b-473b-8c12-cd27627384f1\") " pod="kube-system/coredns-76f75df574-xv5gh" Dec 13 02:20:17.304328 env[1730]: time="2024-12-13T02:20:17.303238369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b5t8g,Uid:de6ab9c1-077a-4e36-924d-e71f18537aab,Namespace:kube-system,Attempt:0,}" Dec 13 02:20:17.313342 env[1730]: time="2024-12-13T02:20:17.311581539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xv5gh,Uid:afcf2e54-935b-473b-8c12-cd27627384f1,Namespace:kube-system,Attempt:0,}" Dec 13 02:20:17.326232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716090245.mount: Deactivated successfully. 
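
Once cilium-agent is up, the kubelet reports "Fast updating node status as it just became ready" and the two coredns replicas that had been waiting for a CNI are admitted within the same second. A sketch of checking those pods from outside with client-go; the kubeconfig path is an assumption (this journal never names one), and k8s-app=kube-dns is the conventional coredns label:

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }
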
Dec 13 02:20:18.421038 env[1730]: time="2024-12-13T02:20:18.419934007Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:20:18.424756 env[1730]: time="2024-12-13T02:20:18.424710158Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:20:18.431455 env[1730]: time="2024-12-13T02:20:18.431411345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:20:18.432140 env[1730]: time="2024-12-13T02:20:18.432101821Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:20:18.446864 env[1730]: time="2024-12-13T02:20:18.446462963Z" level=info msg="CreateContainer within sandbox \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:20:18.483420 env[1730]: time="2024-12-13T02:20:18.482472094Z" level=info msg="CreateContainer within sandbox \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\"" Dec 13 02:20:18.483926 env[1730]: time="2024-12-13T02:20:18.483875362Z" level=info msg="StartContainer for \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\"" Dec 13 02:20:18.517554 systemd[1]: Started cri-containerd-00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984.scope. Dec 13 02:20:18.603025 env[1730]: time="2024-12-13T02:20:18.602545876Z" level=info msg="StartContainer for \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\" returns successfully" Dec 13 02:20:19.063208 systemd[1]: run-containerd-runc-k8s.io-00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984-runc.wA0399.mount: Deactivated successfully. 
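
The operator image that failed at 02:19:55 finally lands at 02:20:18: in between, the kubelet kept the pod in ImagePullBackOff and retried with an increasing delay until cdn03.quay.io resolved again. The shape of that retry loop, sketched with stdlib timers; the 10s initial delay and 5m cap mirror upstream kubelet defaults but are assumptions here, not values read from this node:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func resolves() bool {
        _, err := net.LookupHost("cdn03.quay.io")
        return err == nil
    }

    func main() {
        // Doubling back-off with a cap, in the spirit of the kubelet's
        // image pull back-off (constants are illustrative assumptions).
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for !resolves() {
            fmt.Println("back-off, retrying in", delay)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
        fmt.Println("name resolves; the pull can proceed")
    }
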
Dec 13 02:20:19.567034 kubelet[2842]: I1213 02:20:19.566999 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-drwmr" podStartSLOduration=9.194521204 podStartE2EDuration="26.566941388s" podCreationTimestamp="2024-12-13 02:19:53 +0000 UTC" firstStartedPulling="2024-12-13 02:19:54.660442231 +0000 UTC m=+12.746815099" lastFinishedPulling="2024-12-13 02:20:12.032862414 +0000 UTC m=+30.119235283" observedRunningTime="2024-12-13 02:20:17.555254538 +0000 UTC m=+35.641627415" watchObservedRunningTime="2024-12-13 02:20:19.566941388 +0000 UTC m=+37.653314298" Dec 13 02:20:19.568104 kubelet[2842]: I1213 02:20:19.568081 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-qfzdt" podStartSLOduration=1.762814395 podStartE2EDuration="25.56803721s" podCreationTimestamp="2024-12-13 02:19:54 +0000 UTC" firstStartedPulling="2024-12-13 02:19:54.627349514 +0000 UTC m=+12.713722383" lastFinishedPulling="2024-12-13 02:20:18.432572328 +0000 UTC m=+36.518945198" observedRunningTime="2024-12-13 02:20:19.567885375 +0000 UTC m=+37.654258239" watchObservedRunningTime="2024-12-13 02:20:19.56803721 +0000 UTC m=+37.654410088" Dec 13 02:20:22.221790 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:20:22.221944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:20:22.219976 systemd-networkd[1459]: cilium_host: Link UP Dec 13 02:20:22.221185 systemd-networkd[1459]: cilium_net: Link UP Dec 13 02:20:22.221944 systemd-networkd[1459]: cilium_net: Gained carrier Dec 13 02:20:22.222566 systemd-networkd[1459]: cilium_host: Gained carrier Dec 13 02:20:22.224738 (udev-worker)[3640]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:20:22.225013 (udev-worker)[3639]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:20:22.475116 systemd-networkd[1459]: cilium_vxlan: Link UP Dec 13 02:20:22.475125 systemd-networkd[1459]: cilium_vxlan: Gained carrier Dec 13 02:20:22.842422 systemd-networkd[1459]: cilium_host: Gained IPv6LL Dec 13 02:20:22.970428 systemd-networkd[1459]: cilium_net: Gained IPv6LL Dec 13 02:20:23.223303 kernel: NET: Registered PF_ALG protocol family Dec 13 02:20:24.198882 (udev-worker)[3647]: Network interface NamePolicy= disabled on kernel command line. 
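
This window is Cilium bringing up its datapath: the cilium_net/cilium_host veth pair, the cilium_vxlan overlay device, and then (below) a lxc* veth per pod endpoint, each gaining carrier and an IPv6 link-local address. Spot-checking them needs only the stdlib:

    package main

    import (
        "fmt"
        "log"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            log.Fatal(err)
        }
        for _, ifc := range ifaces {
            // Device name prefixes as they appear in this journal.
            if strings.HasPrefix(ifc.Name, "cilium_") || strings.HasPrefix(ifc.Name, "lxc") {
                fmt.Println(ifc.Name, ifc.Flags)
            }
        }
    }
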
Dec 13 02:20:24.220464 systemd-networkd[1459]: lxc_health: Link UP Dec 13 02:20:24.227793 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:20:24.227623 systemd-networkd[1459]: lxc_health: Gained carrier Dec 13 02:20:24.384420 systemd-networkd[1459]: cilium_vxlan: Gained IPv6LL Dec 13 02:20:24.517748 systemd-networkd[1459]: lxc2574535267e8: Link UP Dec 13 02:20:24.531137 kernel: eth0: renamed from tmp8f7ee Dec 13 02:20:24.544707 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2574535267e8: link becomes ready Dec 13 02:20:24.544495 systemd-networkd[1459]: lxc2574535267e8: Gained carrier Dec 13 02:20:24.669009 systemd-networkd[1459]: lxc39f2dddaba01: Link UP Dec 13 02:20:24.689300 kernel: eth0: renamed from tmp3633a Dec 13 02:20:24.695379 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc39f2dddaba01: link becomes ready Dec 13 02:20:24.695515 systemd-networkd[1459]: lxc39f2dddaba01: Gained carrier Dec 13 02:20:25.520446 systemd-networkd[1459]: lxc_health: Gained IPv6LL Dec 13 02:20:25.979214 systemd-networkd[1459]: lxc2574535267e8: Gained IPv6LL Dec 13 02:20:26.124736 systemd-networkd[1459]: lxc39f2dddaba01: Gained IPv6LL Dec 13 02:20:29.200745 systemd[1]: Started sshd@5-172.31.31.142:22-139.178.68.195:39436.service. Dec 13 02:20:29.393876 sshd[4007]: Accepted publickey for core from 139.178.68.195 port 39436 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:20:29.396594 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:20:29.404543 systemd[1]: Started session-6.scope. Dec 13 02:20:29.406365 systemd-logind[1724]: New session 6 of user core. Dec 13 02:20:29.916878 sshd[4007]: pam_unix(sshd:session): session closed for user core Dec 13 02:20:29.922153 systemd-logind[1724]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:20:29.923644 systemd[1]: sshd@5-172.31.31.142:22-139.178.68.195:39436.service: Deactivated successfully. Dec 13 02:20:29.924673 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:20:29.926190 systemd-logind[1724]: Removed session 6. Dec 13 02:20:30.128680 env[1730]: time="2024-12-13T02:20:30.128563323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:20:30.128680 env[1730]: time="2024-12-13T02:20:30.128632959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:20:30.128680 env[1730]: time="2024-12-13T02:20:30.128648552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:20:30.130604 env[1730]: time="2024-12-13T02:20:30.130535121Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3633acc562d319bb57649bbfd3d48355fdc9358c63270ac36a053fd863735bd1 pid=4031 runtime=io.containerd.runc.v2 Dec 13 02:20:30.147678 env[1730]: time="2024-12-13T02:20:30.147566799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:20:30.148019 env[1730]: time="2024-12-13T02:20:30.147984289Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:20:30.148151 env[1730]: time="2024-12-13T02:20:30.148125200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:20:30.148685 env[1730]: time="2024-12-13T02:20:30.148551267Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f7ee8de912f69e9da3dfcd41fa7f4745984461a69b54e1572aad76bff2ebb3b pid=4044 runtime=io.containerd.runc.v2 Dec 13 02:20:30.189402 systemd[1]: run-containerd-runc-k8s.io-8f7ee8de912f69e9da3dfcd41fa7f4745984461a69b54e1572aad76bff2ebb3b-runc.B5HfnD.mount: Deactivated successfully. Dec 13 02:20:30.196140 systemd[1]: Started cri-containerd-3633acc562d319bb57649bbfd3d48355fdc9358c63270ac36a053fd863735bd1.scope. Dec 13 02:20:30.225409 systemd[1]: Started cri-containerd-8f7ee8de912f69e9da3dfcd41fa7f4745984461a69b54e1572aad76bff2ebb3b.scope. Dec 13 02:20:30.392053 env[1730]: time="2024-12-13T02:20:30.392000474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b5t8g,Uid:de6ab9c1-077a-4e36-924d-e71f18537aab,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f7ee8de912f69e9da3dfcd41fa7f4745984461a69b54e1572aad76bff2ebb3b\"" Dec 13 02:20:30.402625 env[1730]: time="2024-12-13T02:20:30.402572898Z" level=info msg="CreateContainer within sandbox \"8f7ee8de912f69e9da3dfcd41fa7f4745984461a69b54e1572aad76bff2ebb3b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:20:30.449793 env[1730]: time="2024-12-13T02:20:30.449102328Z" level=info msg="CreateContainer within sandbox \"8f7ee8de912f69e9da3dfcd41fa7f4745984461a69b54e1572aad76bff2ebb3b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"18c78a60d0e50e0e80a63f6eb0b8d6df44a25688caebc75723fb4be3e01fba9f\"" Dec 13 02:20:30.451042 env[1730]: time="2024-12-13T02:20:30.450982524Z" level=info msg="StartContainer for \"18c78a60d0e50e0e80a63f6eb0b8d6df44a25688caebc75723fb4be3e01fba9f\"" Dec 13 02:20:30.455618 env[1730]: time="2024-12-13T02:20:30.455571273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xv5gh,Uid:afcf2e54-935b-473b-8c12-cd27627384f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"3633acc562d319bb57649bbfd3d48355fdc9358c63270ac36a053fd863735bd1\"" Dec 13 02:20:30.461088 env[1730]: time="2024-12-13T02:20:30.461041926Z" level=info msg="CreateContainer within sandbox \"3633acc562d319bb57649bbfd3d48355fdc9358c63270ac36a053fd863735bd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:20:30.488645 systemd[1]: Started cri-containerd-18c78a60d0e50e0e80a63f6eb0b8d6df44a25688caebc75723fb4be3e01fba9f.scope. Dec 13 02:20:30.498294 env[1730]: time="2024-12-13T02:20:30.498214485Z" level=info msg="CreateContainer within sandbox \"3633acc562d319bb57649bbfd3d48355fdc9358c63270ac36a053fd863735bd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73e90b71a2b842dd95ba1ec270ac1b8963be8c9236a7f1480c12b4a37f9c18ac\"" Dec 13 02:20:30.499925 env[1730]: time="2024-12-13T02:20:30.499876574Z" level=info msg="StartContainer for \"73e90b71a2b842dd95ba1ec270ac1b8963be8c9236a7f1480c12b4a37f9c18ac\"" Dec 13 02:20:30.536207 systemd[1]: Started cri-containerd-73e90b71a2b842dd95ba1ec270ac1b8963be8c9236a7f1480c12b4a37f9c18ac.scope. 
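
The kernel's "eth0: renamed from tmp..." lines earlier and the sandbox/container pairs here are the two coredns pods coming up inside those lxc* endpoints. Once running they answer DNS on their pod IPs; the sketch below sends a query straight at one of them, with the pod IP as a stated placeholder (the journal only shows the 192.168.0.0/24 pod CIDR) and cluster.local assumed as the cluster domain:

    package main

    import (
        "context"
        "fmt"
        "log"
        "net"
        "time"
    )

    func main() {
        // Placeholder coredns pod address inside 192.168.0.0/24; the actual
        // pod IPs are not recorded in this journal.
        const corednsAddr = "192.168.0.10:53"

        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, network, corednsAddr)
            },
        }
        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(addrs)
    }
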
Dec 13 02:20:30.591075 env[1730]: time="2024-12-13T02:20:30.591017787Z" level=info msg="StartContainer for \"18c78a60d0e50e0e80a63f6eb0b8d6df44a25688caebc75723fb4be3e01fba9f\" returns successfully"
Dec 13 02:20:30.635736 env[1730]: time="2024-12-13T02:20:30.635678586Z" level=info msg="StartContainer for \"73e90b71a2b842dd95ba1ec270ac1b8963be8c9236a7f1480c12b4a37f9c18ac\" returns successfully"
Dec 13 02:20:31.144182 systemd[1]: run-containerd-runc-k8s.io-3633acc562d319bb57649bbfd3d48355fdc9358c63270ac36a053fd863735bd1-runc.o0EJah.mount: Deactivated successfully.
Dec 13 02:20:31.651741 kubelet[2842]: I1213 02:20:31.650787 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-b5t8g" podStartSLOduration=37.64342205 podStartE2EDuration="37.64342205s" podCreationTimestamp="2024-12-13 02:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:20:31.619218488 +0000 UTC m=+49.705591360" watchObservedRunningTime="2024-12-13 02:20:31.64342205 +0000 UTC m=+49.729794928"
Dec 13 02:20:32.626881 kubelet[2842]: I1213 02:20:32.626837 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xv5gh" podStartSLOduration=38.626771566 podStartE2EDuration="38.626771566s" podCreationTimestamp="2024-12-13 02:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:20:31.686345513 +0000 UTC m=+49.772718390" watchObservedRunningTime="2024-12-13 02:20:32.626771566 +0000 UTC m=+50.713144442"
Dec 13 02:20:34.946698 systemd[1]: Started sshd@6-172.31.31.142:22-139.178.68.195:39446.service.
Dec 13 02:20:35.149216 sshd[4196]: Accepted publickey for core from 139.178.68.195 port 39446 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:20:35.152391 sshd[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:20:35.160573 systemd[1]: Started session-7.scope.
Dec 13 02:20:35.161379 systemd-logind[1724]: New session 7 of user core.
Dec 13 02:20:35.486025 sshd[4196]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:35.493206 systemd[1]: sshd@6-172.31.31.142:22-139.178.68.195:39446.service: Deactivated successfully.
Dec 13 02:20:35.494865 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 02:20:35.496369 systemd-logind[1724]: Session 7 logged out. Waiting for processes to exit.
Dec 13 02:20:35.498532 systemd-logind[1724]: Removed session 7.
Dec 13 02:20:40.511168 systemd[1]: Started sshd@7-172.31.31.142:22-139.178.68.195:49606.service.
Dec 13 02:20:40.673070 sshd[4209]: Accepted publickey for core from 139.178.68.195 port 49606 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:20:40.674848 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:20:40.681028 systemd[1]: Started session-8.scope.
Dec 13 02:20:40.682831 systemd-logind[1724]: New session 8 of user core.
Dec 13 02:20:40.927625 sshd[4209]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:40.942329 systemd[1]: sshd@7-172.31.31.142:22-139.178.68.195:49606.service: Deactivated successfully.
Dec 13 02:20:40.943463 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 02:20:40.944341 systemd-logind[1724]: Session 8 logged out. Waiting for processes to exit.
Dec 13 02:20:40.945502 systemd-logind[1724]: Removed session 8.
Dec 13 02:20:45.959041 systemd[1]: Started sshd@8-172.31.31.142:22-139.178.68.195:49614.service.
Dec 13 02:20:46.138124 sshd[4224]: Accepted publickey for core from 139.178.68.195 port 49614 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:20:46.142200 sshd[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:20:46.163364 systemd[1]: Started session-9.scope.
Dec 13 02:20:46.169303 systemd-logind[1724]: New session 9 of user core.
Dec 13 02:20:46.483323 sshd[4224]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:46.487188 systemd[1]: sshd@8-172.31.31.142:22-139.178.68.195:49614.service: Deactivated successfully.
Dec 13 02:20:46.488297 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 02:20:46.488474 systemd-logind[1724]: Session 9 logged out. Waiting for processes to exit.
Dec 13 02:20:46.489620 systemd-logind[1724]: Removed session 9.
Dec 13 02:20:51.511349 systemd[1]: Started sshd@9-172.31.31.142:22-139.178.68.195:56894.service.
Dec 13 02:20:51.678693 sshd[4237]: Accepted publickey for core from 139.178.68.195 port 56894 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:20:51.684244 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:20:51.703407 systemd-logind[1724]: New session 10 of user core.
Dec 13 02:20:51.703599 systemd[1]: Started session-10.scope.
Dec 13 02:20:51.976115 sshd[4237]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:51.979487 systemd[1]: sshd@9-172.31.31.142:22-139.178.68.195:56894.service: Deactivated successfully.
Dec 13 02:20:51.980472 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 02:20:51.981291 systemd-logind[1724]: Session 10 logged out. Waiting for processes to exit.
Dec 13 02:20:51.982199 systemd-logind[1724]: Removed session 10.
Dec 13 02:20:52.003675 systemd[1]: Started sshd@10-172.31.31.142:22-139.178.68.195:56910.service.
Dec 13 02:20:52.164101 sshd[4250]: Accepted publickey for core from 139.178.68.195 port 56910 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:20:52.166688 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:20:52.177654 systemd[1]: Started session-11.scope.
Dec 13 02:20:52.178359 systemd-logind[1724]: New session 11 of user core.
Dec 13 02:20:52.502467 sshd[4250]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:52.507119 systemd[1]: sshd@10-172.31.31.142:22-139.178.68.195:56910.service: Deactivated successfully.
Dec 13 02:20:52.508201 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 02:20:52.509037 systemd-logind[1724]: Session 11 logged out. Waiting for processes to exit.
Dec 13 02:20:52.510348 systemd-logind[1724]: Removed session 11.
Dec 13 02:20:52.528562 systemd[1]: Started sshd@11-172.31.31.142:22-139.178.68.195:56922.service.
Dec 13 02:20:52.740249 sshd[4260]: Accepted publickey for core from 139.178.68.195 port 56922 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:20:52.743270 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:20:52.753085 systemd-logind[1724]: New session 12 of user core.
Dec 13 02:20:52.754176 systemd[1]: Started session-12.scope.
Dec 13 02:20:53.043632 sshd[4260]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:53.058520 systemd[1]: sshd@11-172.31.31.142:22-139.178.68.195:56922.service: Deactivated successfully.
Dec 13 02:20:53.059749 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 02:20:53.061725 systemd-logind[1724]: Session 12 logged out. Waiting for processes to exit.
Dec 13 02:20:53.063505 systemd-logind[1724]: Removed session 12.
Dec 13 02:20:58.080934 systemd[1]: Started sshd@12-172.31.31.142:22-139.178.68.195:56540.service.
Dec 13 02:20:58.253601 sshd[4275]: Accepted publickey for core from 139.178.68.195 port 56540 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:20:58.255495 sshd[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:20:58.262445 systemd[1]: Started session-13.scope.
Dec 13 02:20:58.263700 systemd-logind[1724]: New session 13 of user core.
Dec 13 02:20:58.523753 sshd[4275]: pam_unix(sshd:session): session closed for user core
Dec 13 02:20:58.527893 systemd-logind[1724]: Session 13 logged out. Waiting for processes to exit.
Dec 13 02:20:58.528138 systemd[1]: sshd@12-172.31.31.142:22-139.178.68.195:56540.service: Deactivated successfully.
Dec 13 02:20:58.529132 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 02:20:58.530376 systemd-logind[1724]: Removed session 13.
Dec 13 02:21:03.561732 systemd[1]: Started sshd@13-172.31.31.142:22-139.178.68.195:56542.service.
Dec 13 02:21:03.750130 sshd[4287]: Accepted publickey for core from 139.178.68.195 port 56542 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:03.756147 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:03.781704 systemd[1]: Started session-14.scope.
Dec 13 02:21:03.782337 systemd-logind[1724]: New session 14 of user core.
Dec 13 02:21:04.221496 sshd[4287]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:04.273351 systemd[1]: Started sshd@14-172.31.31.142:22-139.178.68.195:56546.service.
Dec 13 02:21:04.277157 systemd[1]: sshd@13-172.31.31.142:22-139.178.68.195:56542.service: Deactivated successfully.
Dec 13 02:21:04.278248 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 02:21:04.284589 systemd-logind[1724]: Session 14 logged out. Waiting for processes to exit.
Dec 13 02:21:04.288910 systemd-logind[1724]: Removed session 14.
Dec 13 02:21:04.454927 sshd[4298]: Accepted publickey for core from 139.178.68.195 port 56546 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:04.456723 sshd[4298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:04.465563 systemd[1]: Started session-15.scope.
Dec 13 02:21:04.466369 systemd-logind[1724]: New session 15 of user core.
Dec 13 02:21:05.331024 sshd[4298]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:05.337992 systemd[1]: sshd@14-172.31.31.142:22-139.178.68.195:56546.service: Deactivated successfully.
Dec 13 02:21:05.341773 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 02:21:05.343209 systemd-logind[1724]: Session 15 logged out. Waiting for processes to exit.
Dec 13 02:21:05.345032 systemd-logind[1724]: Removed session 15.
Dec 13 02:21:05.357314 systemd[1]: Started sshd@15-172.31.31.142:22-139.178.68.195:56558.service.
Dec 13 02:21:05.552268 sshd[4309]: Accepted publickey for core from 139.178.68.195 port 56558 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:05.554819 sshd[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:05.565620 systemd-logind[1724]: New session 16 of user core.
Dec 13 02:21:05.565803 systemd[1]: Started session-16.scope.
Dec 13 02:21:07.754079 sshd[4309]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:07.759194 systemd[1]: sshd@15-172.31.31.142:22-139.178.68.195:56558.service: Deactivated successfully.
Dec 13 02:21:07.760803 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:21:07.761886 systemd-logind[1724]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:21:07.763488 systemd-logind[1724]: Removed session 16.
Dec 13 02:21:07.781846 systemd[1]: Started sshd@16-172.31.31.142:22-139.178.68.195:54888.service.
Dec 13 02:21:07.952501 sshd[4326]: Accepted publickey for core from 139.178.68.195 port 54888 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:07.955011 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:07.961948 systemd[1]: Started session-17.scope.
Dec 13 02:21:07.962649 systemd-logind[1724]: New session 17 of user core.
Dec 13 02:21:08.375723 sshd[4326]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:08.379400 systemd[1]: sshd@16-172.31.31.142:22-139.178.68.195:54888.service: Deactivated successfully.
Dec 13 02:21:08.380395 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:21:08.381176 systemd-logind[1724]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:21:08.382383 systemd-logind[1724]: Removed session 17.
Dec 13 02:21:08.401736 systemd[1]: Started sshd@17-172.31.31.142:22-139.178.68.195:54896.service.
Dec 13 02:21:08.571004 sshd[4336]: Accepted publickey for core from 139.178.68.195 port 54896 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:08.574786 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:08.593849 systemd-logind[1724]: New session 18 of user core.
Dec 13 02:21:08.594042 systemd[1]: Started session-18.scope.
Dec 13 02:21:08.871981 sshd[4336]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:08.875606 systemd[1]: sshd@17-172.31.31.142:22-139.178.68.195:54896.service: Deactivated successfully.
Dec 13 02:21:08.876555 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:21:08.878100 systemd-logind[1724]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:21:08.880034 systemd-logind[1724]: Removed session 18.
Dec 13 02:21:13.903928 systemd[1]: Started sshd@18-172.31.31.142:22-139.178.68.195:54900.service.
Dec 13 02:21:14.074501 sshd[4348]: Accepted publickey for core from 139.178.68.195 port 54900 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:14.078607 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:14.098904 systemd[1]: Started session-19.scope.
Dec 13 02:21:14.101939 systemd-logind[1724]: New session 19 of user core.
Dec 13 02:21:14.455732 sshd[4348]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:14.459801 systemd[1]: sshd@18-172.31.31.142:22-139.178.68.195:54900.service: Deactivated successfully.
Dec 13 02:21:14.460789 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:21:14.466181 systemd-logind[1724]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:21:14.473179 systemd-logind[1724]: Removed session 19.
Dec 13 02:21:19.498665 systemd[1]: Started sshd@19-172.31.31.142:22-139.178.68.195:55618.service.
Dec 13 02:21:19.667575 sshd[4363]: Accepted publickey for core from 139.178.68.195 port 55618 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:19.671884 sshd[4363]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:19.684833 systemd-logind[1724]: New session 20 of user core.
Dec 13 02:21:19.686193 systemd[1]: Started session-20.scope.
Dec 13 02:21:19.935922 sshd[4363]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:19.943318 systemd[1]: sshd@19-172.31.31.142:22-139.178.68.195:55618.service: Deactivated successfully.
Dec 13 02:21:19.944698 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 02:21:19.951421 systemd-logind[1724]: Session 20 logged out. Waiting for processes to exit.
Dec 13 02:21:19.955572 systemd-logind[1724]: Removed session 20.
Dec 13 02:21:24.964408 systemd[1]: Started sshd@20-172.31.31.142:22-139.178.68.195:55624.service.
Dec 13 02:21:25.135696 sshd[4375]: Accepted publickey for core from 139.178.68.195 port 55624 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:25.137596 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:25.166361 systemd-logind[1724]: New session 21 of user core.
Dec 13 02:21:25.166864 systemd[1]: Started session-21.scope.
Dec 13 02:21:25.430936 sshd[4375]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:25.436349 systemd[1]: sshd@20-172.31.31.142:22-139.178.68.195:55624.service: Deactivated successfully.
Dec 13 02:21:25.437645 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:21:25.438751 systemd-logind[1724]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:21:25.441257 systemd-logind[1724]: Removed session 21.
Dec 13 02:21:30.459480 systemd[1]: Started sshd@21-172.31.31.142:22-139.178.68.195:54086.service.
Dec 13 02:21:30.634647 sshd[4388]: Accepted publickey for core from 139.178.68.195 port 54086 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:30.637268 sshd[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:30.648679 systemd[1]: Started session-22.scope.
Dec 13 02:21:30.649482 systemd-logind[1724]: New session 22 of user core.
Dec 13 02:21:30.875698 sshd[4388]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:30.885583 systemd[1]: sshd@21-172.31.31.142:22-139.178.68.195:54086.service: Deactivated successfully.
Dec 13 02:21:30.887862 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:21:30.889743 systemd-logind[1724]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:21:30.910875 systemd-logind[1724]: Removed session 22.
Dec 13 02:21:30.919377 systemd[1]: Started sshd@22-172.31.31.142:22-139.178.68.195:54102.service.
Dec 13 02:21:31.094387 sshd[4400]: Accepted publickey for core from 139.178.68.195 port 54102 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:31.098679 sshd[4400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:31.106136 systemd-logind[1724]: New session 23 of user core.
Dec 13 02:21:31.107055 systemd[1]: Started session-23.scope.
Dec 13 02:21:33.566456 env[1730]: time="2024-12-13T02:21:33.566404678Z" level=info msg="StopContainer for \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\" with timeout 30 (s)"
Dec 13 02:21:33.567771 env[1730]: time="2024-12-13T02:21:33.567729249Z" level=info msg="Stop container \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\" with signal terminated"
Dec 13 02:21:33.568586 systemd[1]: run-containerd-runc-k8s.io-0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7-runc.eJcLO5.mount: Deactivated successfully.
Dec 13 02:21:33.589788 systemd[1]: cri-containerd-00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984.scope: Deactivated successfully.
Dec 13 02:21:33.640127 env[1730]: time="2024-12-13T02:21:33.640024443Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:21:33.663815 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984-rootfs.mount: Deactivated successfully.
Dec 13 02:21:33.675556 env[1730]: time="2024-12-13T02:21:33.675506941Z" level=info msg="StopContainer for \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\" with timeout 2 (s)"
Dec 13 02:21:33.677851 env[1730]: time="2024-12-13T02:21:33.677683903Z" level=info msg="Stop container \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\" with signal terminated"
Dec 13 02:21:33.695447 env[1730]: time="2024-12-13T02:21:33.695156718Z" level=info msg="shim disconnected" id=00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984
Dec 13 02:21:33.695763 env[1730]: time="2024-12-13T02:21:33.695460864Z" level=warning msg="cleaning up after shim disconnected" id=00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984 namespace=k8s.io
Dec 13 02:21:33.695763 env[1730]: time="2024-12-13T02:21:33.695478442Z" level=info msg="cleaning up dead shim"
Dec 13 02:21:33.710160 systemd-networkd[1459]: lxc_health: Link DOWN
Dec 13 02:21:33.710170 systemd-networkd[1459]: lxc_health: Lost carrier
Dec 13 02:21:33.799330 env[1730]: time="2024-12-13T02:21:33.799289906Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4453 runtime=io.containerd.runc.v2\n"
Dec 13 02:21:33.803311 env[1730]: time="2024-12-13T02:21:33.803257511Z" level=info msg="StopContainer for \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\" returns successfully"
Dec 13 02:21:33.804202 env[1730]: time="2024-12-13T02:21:33.804164487Z" level=info msg="StopPodSandbox for \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\""
Dec 13 02:21:33.804422 env[1730]: time="2024-12-13T02:21:33.804392812Z" level=info msg="Container to stop \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:33.809479 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f-shm.mount: Deactivated successfully.
Dec 13 02:21:33.875768 systemd[1]: cri-containerd-f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f.scope: Deactivated successfully.
Dec 13 02:21:33.887642 systemd[1]: cri-containerd-0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7.scope: Deactivated successfully.
Dec 13 02:21:33.887946 systemd[1]: cri-containerd-0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7.scope: Consumed 8.454s CPU time.
Dec 13 02:21:33.959374 env[1730]: time="2024-12-13T02:21:33.955669161Z" level=info msg="shim disconnected" id=0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7
Dec 13 02:21:33.960363 env[1730]: time="2024-12-13T02:21:33.959383059Z" level=warning msg="cleaning up after shim disconnected" id=0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7 namespace=k8s.io
Dec 13 02:21:33.960363 env[1730]: time="2024-12-13T02:21:33.959396250Z" level=info msg="cleaning up dead shim"
Dec 13 02:21:33.963387 env[1730]: time="2024-12-13T02:21:33.959332695Z" level=info msg="shim disconnected" id=f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f
Dec 13 02:21:33.963549 env[1730]: time="2024-12-13T02:21:33.963516926Z" level=warning msg="cleaning up after shim disconnected" id=f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f namespace=k8s.io
Dec 13 02:21:33.963622 env[1730]: time="2024-12-13T02:21:33.963609245Z" level=info msg="cleaning up dead shim"
Dec 13 02:21:33.999494 env[1730]: time="2024-12-13T02:21:33.999441611Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4503 runtime=io.containerd.runc.v2\n"
Dec 13 02:21:34.009887 env[1730]: time="2024-12-13T02:21:34.008933804Z" level=info msg="StopContainer for \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\" returns successfully"
Dec 13 02:21:34.013185 env[1730]: time="2024-12-13T02:21:34.013145485Z" level=info msg="StopPodSandbox for \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\""
Dec 13 02:21:34.013908 env[1730]: time="2024-12-13T02:21:34.013873102Z" level=info msg="Container to stop \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:34.014152 env[1730]: time="2024-12-13T02:21:34.014127798Z" level=info msg="Container to stop \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:34.014343 env[1730]: time="2024-12-13T02:21:34.014318585Z" level=info msg="Container to stop \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:34.014606 env[1730]: time="2024-12-13T02:21:34.014471413Z" level=info msg="Container to stop \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:34.014737 env[1730]: time="2024-12-13T02:21:34.014708185Z" level=info msg="Container to stop \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:21:34.014852 env[1730]: time="2024-12-13T02:21:34.013754399Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4504 runtime=io.containerd.runc.v2\n"
Dec 13 02:21:34.015312 env[1730]: time="2024-12-13T02:21:34.015262811Z" level=info msg="TearDown network for sandbox \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\" successfully"
Dec 13 02:21:34.015461 env[1730]: time="2024-12-13T02:21:34.015427467Z" level=info msg="StopPodSandbox for \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\" returns successfully"
Dec 13 02:21:34.035933 systemd[1]: cri-containerd-e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a.scope: Deactivated successfully.
Dec 13 02:21:34.102219 kubelet[2842]: I1213 02:21:34.102178 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d4aa90b-a46f-4974-965e-6aca52f93915-cilium-config-path\") pod \"7d4aa90b-a46f-4974-965e-6aca52f93915\" (UID: \"7d4aa90b-a46f-4974-965e-6aca52f93915\") "
Dec 13 02:21:34.102753 kubelet[2842]: I1213 02:21:34.102260 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwx76\" (UniqueName: \"kubernetes.io/projected/7d4aa90b-a46f-4974-965e-6aca52f93915-kube-api-access-nwx76\") pod \"7d4aa90b-a46f-4974-965e-6aca52f93915\" (UID: \"7d4aa90b-a46f-4974-965e-6aca52f93915\") "
Dec 13 02:21:34.104486 env[1730]: time="2024-12-13T02:21:34.104416366Z" level=info msg="shim disconnected" id=e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a
Dec 13 02:21:34.104721 env[1730]: time="2024-12-13T02:21:34.104495474Z" level=warning msg="cleaning up after shim disconnected" id=e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a namespace=k8s.io
Dec 13 02:21:34.104721 env[1730]: time="2024-12-13T02:21:34.104510175Z" level=info msg="cleaning up dead shim"
Dec 13 02:21:34.112747 kubelet[2842]: I1213 02:21:34.103204 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d4aa90b-a46f-4974-965e-6aca52f93915-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7d4aa90b-a46f-4974-965e-6aca52f93915" (UID: "7d4aa90b-a46f-4974-965e-6aca52f93915"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:21:34.117269 env[1730]: time="2024-12-13T02:21:34.117219981Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4547 runtime=io.containerd.runc.v2\n"
Dec 13 02:21:34.120693 env[1730]: time="2024-12-13T02:21:34.120644465Z" level=info msg="TearDown network for sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" successfully"
Dec 13 02:21:34.120693 env[1730]: time="2024-12-13T02:21:34.120685700Z" level=info msg="StopPodSandbox for \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" returns successfully"
Dec 13 02:21:34.124714 kubelet[2842]: I1213 02:21:34.124660 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d4aa90b-a46f-4974-965e-6aca52f93915-kube-api-access-nwx76" (OuterVolumeSpecName: "kube-api-access-nwx76") pod "7d4aa90b-a46f-4974-965e-6aca52f93915" (UID: "7d4aa90b-a46f-4974-965e-6aca52f93915"). InnerVolumeSpecName "kube-api-access-nwx76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.202797 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-hostproc\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.202846 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-cgroup\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.202872 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-lib-modules\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.202899 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-host-proc-sys-net\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.202948 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f7ecbca-520f-4e94-8257-a269bd155f93-clustermesh-secrets\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.202973 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-bpf-maps\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.202995 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-etc-cni-netd\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203025 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-host-proc-sys-kernel\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203055 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f7ecbca-520f-4e94-8257-a269bd155f93-hubble-tls\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203079 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-run\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203115 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-config-path\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203145 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qd8c\" (UniqueName: \"kubernetes.io/projected/6f7ecbca-520f-4e94-8257-a269bd155f93-kube-api-access-6qd8c\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203218 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cni-path\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203247 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-xtables-lock\") pod \"6f7ecbca-520f-4e94-8257-a269bd155f93\" (UID: \"6f7ecbca-520f-4e94-8257-a269bd155f93\") "
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203316 2842 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7d4aa90b-a46f-4974-965e-6aca52f93915-cilium-config-path\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203336 2842 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nwx76\" (UniqueName: \"kubernetes.io/projected/7d4aa90b-a46f-4974-965e-6aca52f93915-kube-api-access-nwx76\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203508 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203563 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-hostproc" (OuterVolumeSpecName: "hostproc") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.204380 kubelet[2842]: I1213 02:21:34.203587 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.206943 kubelet[2842]: I1213 02:21:34.203610 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.206943 kubelet[2842]: I1213 02:21:34.203631 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.207436 kubelet[2842]: I1213 02:21:34.207391 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.208250 kubelet[2842]: I1213 02:21:34.208218 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.208817 kubelet[2842]: I1213 02:21:34.208266 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.208817 kubelet[2842]: I1213 02:21:34.208305 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.211820 kubelet[2842]: I1213 02:21:34.211778 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:21:34.212004 kubelet[2842]: I1213 02:21:34.211852 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cni-path" (OuterVolumeSpecName: "cni-path") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:21:34.216638 kubelet[2842]: I1213 02:21:34.216583 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f7ecbca-520f-4e94-8257-a269bd155f93-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:21:34.217648 kubelet[2842]: I1213 02:21:34.217611 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f7ecbca-520f-4e94-8257-a269bd155f93-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:21:34.224748 kubelet[2842]: I1213 02:21:34.224678 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f7ecbca-520f-4e94-8257-a269bd155f93-kube-api-access-6qd8c" (OuterVolumeSpecName: "kube-api-access-6qd8c") pod "6f7ecbca-520f-4e94-8257-a269bd155f93" (UID: "6f7ecbca-520f-4e94-8257-a269bd155f93"). InnerVolumeSpecName "kube-api-access-6qd8c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:21:34.279741 systemd[1]: Removed slice kubepods-besteffort-pod7d4aa90b_a46f_4974_965e_6aca52f93915.slice.
Dec 13 02:21:34.299894 systemd[1]: Removed slice kubepods-burstable-pod6f7ecbca_520f_4e94_8257_a269bd155f93.slice.
Dec 13 02:21:34.300050 systemd[1]: kubepods-burstable-pod6f7ecbca_520f_4e94_8257_a269bd155f93.slice: Consumed 8.571s CPU time.
Dec 13 02:21:34.304061 kubelet[2842]: I1213 02:21:34.304039 2842 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-host-proc-sys-kernel\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304716 kubelet[2842]: I1213 02:21:34.304692 2842 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6f7ecbca-520f-4e94-8257-a269bd155f93-hubble-tls\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304716 kubelet[2842]: I1213 02:21:34.304719 2842 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-run\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304734 2842 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-config-path\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304751 2842 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-6qd8c\" (UniqueName: \"kubernetes.io/projected/6f7ecbca-520f-4e94-8257-a269bd155f93-kube-api-access-6qd8c\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304766 2842 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-xtables-lock\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304782 2842 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cni-path\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304795 2842 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-hostproc\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304811 2842 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-cilium-cgroup\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304826 2842 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-lib-modules\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304840 2842 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-bpf-maps\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304854 2842 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-host-proc-sys-net\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304873 2842 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6f7ecbca-520f-4e94-8257-a269bd155f93-clustermesh-secrets\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.304890 kubelet[2842]: I1213 02:21:34.304889 2842 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6f7ecbca-520f-4e94-8257-a269bd155f93-etc-cni-netd\") on node \"ip-172-31-31-142\" DevicePath \"\""
Dec 13 02:21:34.553435 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7-rootfs.mount: Deactivated successfully.
Dec 13 02:21:34.553564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a-rootfs.mount: Deactivated successfully.
Dec 13 02:21:34.553650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a-shm.mount: Deactivated successfully.
Dec 13 02:21:34.553729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f-rootfs.mount: Deactivated successfully.
Dec 13 02:21:34.553803 systemd[1]: var-lib-kubelet-pods-7d4aa90b\x2da46f\x2d4974\x2d965e\x2d6aca52f93915-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnwx76.mount: Deactivated successfully.
Dec 13 02:21:34.553898 systemd[1]: var-lib-kubelet-pods-6f7ecbca\x2d520f\x2d4e94\x2d8257\x2da269bd155f93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6qd8c.mount: Deactivated successfully.
Dec 13 02:21:34.554031 systemd[1]: var-lib-kubelet-pods-6f7ecbca\x2d520f\x2d4e94\x2d8257\x2da269bd155f93-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:21:34.554111 systemd[1]: var-lib-kubelet-pods-6f7ecbca\x2d520f\x2d4e94\x2d8257\x2da269bd155f93-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:21:34.813293 kubelet[2842]: I1213 02:21:34.812911 2842 scope.go:117] "RemoveContainer" containerID="0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7"
Dec 13 02:21:34.840196 env[1730]: time="2024-12-13T02:21:34.840136970Z" level=info msg="RemoveContainer for \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\""
Dec 13 02:21:34.854504 env[1730]: time="2024-12-13T02:21:34.854452952Z" level=info msg="RemoveContainer for \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\" returns successfully"
Dec 13 02:21:34.883469 kubelet[2842]: I1213 02:21:34.883424 2842 scope.go:117] "RemoveContainer" containerID="80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8"
Dec 13 02:21:34.904427 env[1730]: time="2024-12-13T02:21:34.904382604Z" level=info msg="RemoveContainer for \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\""
Dec 13 02:21:34.938403 env[1730]: time="2024-12-13T02:21:34.938355630Z" level=info msg="RemoveContainer for \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\" returns successfully"
Dec 13 02:21:34.939022 kubelet[2842]: I1213 02:21:34.938995 2842 scope.go:117] "RemoveContainer" containerID="ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018"
Dec 13 02:21:34.943048 env[1730]: time="2024-12-13T02:21:34.942581347Z" level=info msg="RemoveContainer for \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\""
Dec 13 02:21:34.952366 env[1730]: time="2024-12-13T02:21:34.952311155Z" level=info msg="RemoveContainer for \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\" returns successfully"
Dec 13 02:21:34.953164 kubelet[2842]: I1213 02:21:34.953138 2842 scope.go:117] "RemoveContainer" containerID="e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9"
Dec 13 02:21:34.955908 env[1730]: time="2024-12-13T02:21:34.955698086Z" level=info msg="RemoveContainer for \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\""
Dec 13 02:21:34.968412 env[1730]: time="2024-12-13T02:21:34.968266152Z" level=info msg="RemoveContainer for \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\" returns successfully"
Dec 13 02:21:34.968741 kubelet[2842]: I1213 02:21:34.968718 2842 scope.go:117] "RemoveContainer" containerID="c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f"
Dec 13 02:21:34.981743 env[1730]: time="2024-12-13T02:21:34.981555901Z" level=info msg="RemoveContainer for \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\""
Dec 13 02:21:34.988292 env[1730]: time="2024-12-13T02:21:34.988226470Z" level=info msg="RemoveContainer for \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\" returns successfully"
Dec 13 02:21:34.988837 kubelet[2842]: I1213 02:21:34.988806 2842 scope.go:117] "RemoveContainer" containerID="0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7"
Dec 13 02:21:34.989398 env[1730]: time="2024-12-13T02:21:34.989308301Z" level=error msg="ContainerStatus for \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\": not found"
Dec 13 02:21:35.001919 kubelet[2842]: E1213 02:21:35.001324 2842 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\": not found" containerID="0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7"
Dec 13 02:21:35.001919 kubelet[2842]: I1213 02:21:35.001692 2842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7"} err="failed to get container status \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\": rpc error: code = NotFound desc = an error occurred when try to find container \"0cd517e1d5546f3acc511cfadd06ea2bede3727c6b3258ad7f243071c51f3ea7\": not found"
Dec 13 02:21:35.001919 kubelet[2842]: I1213 02:21:35.001726 2842 scope.go:117] "RemoveContainer" containerID="80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8"
Dec 13 02:21:35.002217 env[1730]: time="2024-12-13T02:21:35.002112016Z" level=error msg="ContainerStatus for \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\": not found"
Dec 13 02:21:35.002496 kubelet[2842]: E1213 02:21:35.002469 2842 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\": not found" containerID="80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8"
Dec 13 02:21:35.003732 kubelet[2842]: I1213 02:21:35.002512 2842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8"} err="failed to get container status \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\": rpc error: code = NotFound desc = an error occurred when try to find container \"80c2b8aa96dd795eea859262772813f17a1f72455804255c38cacc3ab1c57ff8\": not found"
Dec 13 02:21:35.003732 kubelet[2842]: I1213 02:21:35.003440 2842 scope.go:117] "RemoveContainer" containerID="ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018"
Dec 13 02:21:35.006375 env[1730]: time="2024-12-13T02:21:35.004893078Z" level=error msg="ContainerStatus for \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\": not found"
Dec 13 02:21:35.006375 env[1730]: time="2024-12-13T02:21:35.005573775Z" level=error msg="ContainerStatus for \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\": not found"
Dec 13 02:21:35.006584 kubelet[2842]: E1213 02:21:35.005237 2842 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\": not found" containerID="ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018"
Dec 13 02:21:35.006584 kubelet[2842]: I1213 02:21:35.005290 2842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018"} err="failed to get container status \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\": rpc error: code = NotFound desc = an error occurred when try to find container \"ab6dacbffba5085b3fd8c4c0ad926979797f82eb602290cfc766895b11d42018\": not found"
Dec 13 02:21:35.006584 kubelet[2842]: I1213 02:21:35.005308 2842 scope.go:117] "RemoveContainer" containerID="e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9"
Dec 13 02:21:35.006584 kubelet[2842]: E1213 02:21:35.005794 2842 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\": not found" containerID="e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9"
Dec 13 02:21:35.006584 kubelet[2842]: I1213 02:21:35.005832 2842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9"} err="failed to get container status \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e24b63777e989b5b1c7b5a925b14186bd748dc431a07d87b14eefb22395fb1d9\": not found"
Dec 13 02:21:35.006584 kubelet[2842]: I1213 02:21:35.005846 2842 scope.go:117] "RemoveContainer" containerID="c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f"
Dec 13 02:21:35.006849 env[1730]: time="2024-12-13T02:21:35.006263587Z" level=error msg="ContainerStatus for \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\": not found"
Dec 13 02:21:35.006951 kubelet[2842]: E1213 02:21:35.006935 2842 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\": not found" containerID="c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f"
Dec 13 02:21:35.007029 kubelet[2842]: I1213 02:21:35.006989 2842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f"} err="failed to get container status \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c276c01218d5dde7edf996dbf2d3094606f819d71c0333d2bdef9e129176f59f\": not found"
Dec 13 02:21:35.007029 kubelet[2842]: I1213 02:21:35.007006 2842 scope.go:117] "RemoveContainer" containerID="00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984"
Dec 13 02:21:35.021741 env[1730]: time="2024-12-13T02:21:35.021559746Z" level=info msg="RemoveContainer for \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\""
Dec 13 02:21:35.033823 env[1730]: time="2024-12-13T02:21:35.033666741Z" level=info msg="RemoveContainer for \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\" returns successfully"
Dec 13 02:21:35.034311 kubelet[2842]: I1213 02:21:35.034193 2842 scope.go:117] "RemoveContainer" containerID="00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984"
Dec 13 02:21:35.036861 env[1730]: time="2024-12-13T02:21:35.034940855Z" level=error msg="ContainerStatus for \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\": not found"
Dec 13 02:21:35.037235 kubelet[2842]: E1213 02:21:35.037100 2842 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\": not found" containerID="00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984"
Dec 13 02:21:35.037235 kubelet[2842]: I1213 02:21:35.037170 2842 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984"} err="failed to get container status \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\": rpc error: code = NotFound desc = an error occurred when try to find container \"00451ce288b1da6895193d1a30b591d6caa57bb031c17144d76632bcd5693984\": not found"
Dec 13 02:21:35.475939 sshd[4400]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:35.482512 systemd[1]: sshd@22-172.31.31.142:22-139.178.68.195:54102.service: Deactivated successfully.
Dec 13 02:21:35.484871 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 02:21:35.486811 systemd-logind[1724]: Session 23 logged out. Waiting for processes to exit.
Dec 13 02:21:35.488176 systemd-logind[1724]: Removed session 23.
Dec 13 02:21:35.514823 systemd[1]: Started sshd@23-172.31.31.142:22-139.178.68.195:54118.service.
Dec 13 02:21:35.697602 sshd[4566]: Accepted publickey for core from 139.178.68.195 port 54118 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:21:35.699216 sshd[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:21:35.706293 systemd[1]: Started session-24.scope.
Dec 13 02:21:35.707169 systemd-logind[1724]: New session 24 of user core.
Dec 13 02:21:36.247383 kubelet[2842]: I1213 02:21:36.247259 2842 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6f7ecbca-520f-4e94-8257-a269bd155f93" path="/var/lib/kubelet/pods/6f7ecbca-520f-4e94-8257-a269bd155f93/volumes"
Dec 13 02:21:36.250563 kubelet[2842]: I1213 02:21:36.250521 2842 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7d4aa90b-a46f-4974-965e-6aca52f93915" path="/var/lib/kubelet/pods/7d4aa90b-a46f-4974-965e-6aca52f93915/volumes"
Dec 13 02:21:36.410659 sshd[4566]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:36.414939 systemd[1]: sshd@23-172.31.31.142:22-139.178.68.195:54118.service: Deactivated successfully.
Dec 13 02:21:36.415942 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 02:21:36.417185 systemd-logind[1724]: Session 24 logged out. Waiting for processes to exit.
Dec 13 02:21:36.418567 systemd-logind[1724]: Removed session 24.
Dec 13 02:21:36.437489 systemd[1]: Started sshd@24-172.31.31.142:22-139.178.68.195:47578.service.
Dec 13 02:21:36.525681 kubelet[2842]: I1213 02:21:36.525644 2842 topology_manager.go:215] "Topology Admit Handler" podUID="15a856a5-1808-4281-aec5-e4b46d4370e7" podNamespace="kube-system" podName="cilium-vclpd" Dec 13 02:21:36.525929 kubelet[2842]: E1213 02:21:36.525743 2842 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f7ecbca-520f-4e94-8257-a269bd155f93" containerName="cilium-agent" Dec 13 02:21:36.525929 kubelet[2842]: E1213 02:21:36.525757 2842 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7d4aa90b-a46f-4974-965e-6aca52f93915" containerName="cilium-operator" Dec 13 02:21:36.525929 kubelet[2842]: E1213 02:21:36.525768 2842 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f7ecbca-520f-4e94-8257-a269bd155f93" containerName="mount-cgroup" Dec 13 02:21:36.525929 kubelet[2842]: E1213 02:21:36.525778 2842 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f7ecbca-520f-4e94-8257-a269bd155f93" containerName="clean-cilium-state" Dec 13 02:21:36.525929 kubelet[2842]: E1213 02:21:36.525787 2842 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f7ecbca-520f-4e94-8257-a269bd155f93" containerName="apply-sysctl-overwrites" Dec 13 02:21:36.525929 kubelet[2842]: E1213 02:21:36.525796 2842 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6f7ecbca-520f-4e94-8257-a269bd155f93" containerName="mount-bpf-fs" Dec 13 02:21:36.525929 kubelet[2842]: I1213 02:21:36.525904 2842 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f7ecbca-520f-4e94-8257-a269bd155f93" containerName="cilium-agent" Dec 13 02:21:36.525929 kubelet[2842]: I1213 02:21:36.525916 2842 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d4aa90b-a46f-4974-965e-6aca52f93915" containerName="cilium-operator" Dec 13 02:21:36.559266 systemd[1]: Created slice kubepods-burstable-pod15a856a5_1808_4281_aec5_e4b46d4370e7.slice. Dec 13 02:21:36.614724 sshd[4576]: Accepted publickey for core from 139.178.68.195 port 47578 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:21:36.617395 sshd[4576]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:21:36.625682 systemd-logind[1724]: New session 25 of user core. Dec 13 02:21:36.627379 systemd[1]: Started session-25.scope. 
Dec 13 02:21:36.734903 kubelet[2842]: I1213 02:21:36.734865 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-hostproc\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.735105 kubelet[2842]: I1213 02:21:36.735092 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-config-path\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.735232 kubelet[2842]: I1213 02:21:36.735223 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-ipsec-secrets\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.735918 kubelet[2842]: I1213 02:21:36.735900 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmn22\" (UniqueName: \"kubernetes.io/projected/15a856a5-1808-4281-aec5-e4b46d4370e7-kube-api-access-jmn22\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.736105 kubelet[2842]: I1213 02:21:36.736095 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-bpf-maps\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.736854 kubelet[2842]: I1213 02:21:36.736834 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-lib-modules\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.736991 kubelet[2842]: I1213 02:21:36.736976 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-xtables-lock\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.737113 kubelet[2842]: I1213 02:21:36.737103 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-cgroup\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.737207 kubelet[2842]: I1213 02:21:36.737199 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cni-path\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.739463 kubelet[2842]: I1213 02:21:36.739370 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/15a856a5-1808-4281-aec5-e4b46d4370e7-clustermesh-secrets\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.740103 kubelet[2842]: I1213 02:21:36.740085 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-host-proc-sys-net\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.740254 kubelet[2842]: I1213 02:21:36.740243 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-run\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.741221 kubelet[2842]: I1213 02:21:36.741202 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-etc-cni-netd\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.743032 kubelet[2842]: I1213 02:21:36.743014 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15a856a5-1808-4281-aec5-e4b46d4370e7-hubble-tls\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:36.743770 kubelet[2842]: I1213 02:21:36.743755 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-host-proc-sys-kernel\") pod \"cilium-vclpd\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " pod="kube-system/cilium-vclpd" Dec 13 02:21:37.072505 sshd[4576]: pam_unix(sshd:session): session closed for user core Dec 13 02:21:37.077111 systemd[1]: sshd@24-172.31.31.142:22-139.178.68.195:47578.service: Deactivated successfully. Dec 13 02:21:37.078212 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:21:37.080331 systemd-logind[1724]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:21:37.082081 systemd-logind[1724]: Removed session 25. Dec 13 02:21:37.111200 systemd[1]: Started sshd@25-172.31.31.142:22-139.178.68.195:47592.service. Dec 13 02:21:37.150348 env[1730]: time="2024-12-13T02:21:37.149843223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vclpd,Uid:15a856a5-1808-4281-aec5-e4b46d4370e7,Namespace:kube-system,Attempt:0,}" Dec 13 02:21:37.184778 env[1730]: time="2024-12-13T02:21:37.184707334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:21:37.184955 env[1730]: time="2024-12-13T02:21:37.184933075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:21:37.185046 env[1730]: time="2024-12-13T02:21:37.185028593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:21:37.185626 env[1730]: time="2024-12-13T02:21:37.185368976Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0 pid=4599 runtime=io.containerd.runc.v2 Dec 13 02:21:37.218505 systemd[1]: Started cri-containerd-dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0.scope. Dec 13 02:21:37.256614 env[1730]: time="2024-12-13T02:21:37.256569467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vclpd,Uid:15a856a5-1808-4281-aec5-e4b46d4370e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\"" Dec 13 02:21:37.264323 env[1730]: time="2024-12-13T02:21:37.264216910Z" level=info msg="CreateContainer within sandbox \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:21:37.291623 env[1730]: time="2024-12-13T02:21:37.291574457Z" level=info msg="CreateContainer within sandbox \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8\"" Dec 13 02:21:37.292742 env[1730]: time="2024-12-13T02:21:37.292704858Z" level=info msg="StartContainer for \"ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8\"" Dec 13 02:21:37.303767 sshd[4591]: Accepted publickey for core from 139.178.68.195 port 47592 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:21:37.304956 sshd[4591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:21:37.317812 systemd[1]: Started session-26.scope. Dec 13 02:21:37.319481 systemd-logind[1724]: New session 26 of user core. Dec 13 02:21:37.347306 systemd[1]: Started cri-containerd-ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8.scope. Dec 13 02:21:37.380132 systemd[1]: cri-containerd-ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8.scope: Deactivated successfully. 
Dec 13 02:21:37.444463 env[1730]: time="2024-12-13T02:21:37.431732915Z" level=info msg="shim disconnected" id=ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8 Dec 13 02:21:37.444463 env[1730]: time="2024-12-13T02:21:37.431805650Z" level=warning msg="cleaning up after shim disconnected" id=ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8 namespace=k8s.io Dec 13 02:21:37.444463 env[1730]: time="2024-12-13T02:21:37.431819785Z" level=info msg="cleaning up dead shim" Dec 13 02:21:37.472497 env[1730]: time="2024-12-13T02:21:37.471851280Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4663 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T02:21:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 02:21:37.473447 env[1730]: time="2024-12-13T02:21:37.473309063Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Dec 13 02:21:37.477128 env[1730]: time="2024-12-13T02:21:37.475949154Z" level=error msg="Failed to pipe stdout of container \"ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8\"" error="reading from a closed fifo" Dec 13 02:21:37.477758 env[1730]: time="2024-12-13T02:21:37.476034438Z" level=error msg="Failed to pipe stderr of container \"ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8\"" error="reading from a closed fifo" Dec 13 02:21:37.480766 env[1730]: time="2024-12-13T02:21:37.480698619Z" level=error msg="StartContainer for \"ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 02:21:37.481180 kubelet[2842]: E1213 02:21:37.481098 2842 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8" Dec 13 02:21:37.487625 kubelet[2842]: E1213 02:21:37.481254 2842 kuberuntime_manager.go:1262] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 02:21:37.487625 kubelet[2842]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 02:21:37.487625 kubelet[2842]: rm /hostbin/cilium-mount Dec 13 02:21:37.487625 kubelet[2842]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-jmn22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-vclpd_kube-system(15a856a5-1808-4281-aec5-e4b46d4370e7): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 02:21:37.488948 kubelet[2842]: E1213 02:21:37.488876 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-vclpd" podUID="15a856a5-1808-4281-aec5-e4b46d4370e7" Dec 13 02:21:37.500715 kubelet[2842]: E1213 02:21:37.500645 2842 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:21:37.836604 env[1730]: time="2024-12-13T02:21:37.836555611Z" level=info msg="StopPodSandbox for \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\"" Dec 13 02:21:37.836790 env[1730]: time="2024-12-13T02:21:37.836640996Z" level=info msg="Container to stop \"ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:21:37.860721 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0-shm.mount: Deactivated successfully. Dec 13 02:21:37.883061 systemd[1]: cri-containerd-dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0.scope: Deactivated successfully. Dec 13 02:21:37.916314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0-rootfs.mount: Deactivated successfully. 
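The StartContainer failure above is runc aborting during container init: the mount-cgroup spec carries SELinuxOptions (Type: spc_t, Level: s0), so the runtime labels the session keyring by writing an SELinux context to /proc/self/attr/keycreate, and on this node that write returns EINVAL. A minimal sketch of the failing step; the full context string is an assumption (only the type and level appear in the spec dump above), and on kernels without SELinux the file may simply be absent rather than rejecting the write:

```go
// Sketch of the keyring-labeling step that fails above. runc (via
// go-selinux) writes the process's SELinux context to
// /proc/self/attr/keycreate before exec'ing the container process.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed full context; the spec above pins only Type=spc_t, Level=s0,
	// and the runtime fills in default user/role fields.
	label := []byte("system_u:system_r:spc_t:s0")
	if err := os.WriteFile("/proc/self/attr/keycreate", label, 0o644); err != nil {
		// On this host: "invalid argument" (EINVAL), the same error that
		// surfaces as the RunContainerError in the kubelet log above.
		fmt.Println("keycreate labeling failed:", err)
		return
	}
	fmt.Println("keyring labeled")
}
```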
Dec 13 02:21:37.936391 env[1730]: time="2024-12-13T02:21:37.936325169Z" level=info msg="shim disconnected" id=dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0 Dec 13 02:21:37.936391 env[1730]: time="2024-12-13T02:21:37.936387365Z" level=warning msg="cleaning up after shim disconnected" id=dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0 namespace=k8s.io Dec 13 02:21:37.936751 env[1730]: time="2024-12-13T02:21:37.936400091Z" level=info msg="cleaning up dead shim" Dec 13 02:21:37.945876 env[1730]: time="2024-12-13T02:21:37.945818138Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4700 runtime=io.containerd.runc.v2\n" Dec 13 02:21:37.946245 env[1730]: time="2024-12-13T02:21:37.946206421Z" level=info msg="TearDown network for sandbox \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\" successfully" Dec 13 02:21:37.946245 env[1730]: time="2024-12-13T02:21:37.946241635Z" level=info msg="StopPodSandbox for \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\" returns successfully" Dec 13 02:21:38.067891 kubelet[2842]: I1213 02:21:38.067850 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15a856a5-1808-4281-aec5-e4b46d4370e7-hubble-tls\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.067891 kubelet[2842]: I1213 02:21:38.067900 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-hostproc\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.067930 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-xtables-lock\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.067951 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-run\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.067979 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-ipsec-secrets\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.068001 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-bpf-maps\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.068025 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15a856a5-1808-4281-aec5-e4b46d4370e7-clustermesh-secrets\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: 
\"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.068052 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-host-proc-sys-kernel\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.068080 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-config-path\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.068105 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cni-path\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068155 kubelet[2842]: I1213 02:21:38.068151 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-lib-modules\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068666 kubelet[2842]: I1213 02:21:38.068181 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-host-proc-sys-net\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068666 kubelet[2842]: I1213 02:21:38.068216 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmn22\" (UniqueName: \"kubernetes.io/projected/15a856a5-1808-4281-aec5-e4b46d4370e7-kube-api-access-jmn22\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068666 kubelet[2842]: I1213 02:21:38.068254 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-cgroup\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068666 kubelet[2842]: I1213 02:21:38.068295 2842 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-etc-cni-netd\") pod \"15a856a5-1808-4281-aec5-e4b46d4370e7\" (UID: \"15a856a5-1808-4281-aec5-e4b46d4370e7\") " Dec 13 02:21:38.068666 kubelet[2842]: I1213 02:21:38.068366 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.068938 kubelet[2842]: I1213 02:21:38.068834 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.068938 kubelet[2842]: I1213 02:21:38.068880 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-hostproc" (OuterVolumeSpecName: "hostproc") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.068938 kubelet[2842]: I1213 02:21:38.068905 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.068938 kubelet[2842]: I1213 02:21:38.068927 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.069721 kubelet[2842]: I1213 02:21:38.069695 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.069976 kubelet[2842]: I1213 02:21:38.069947 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cni-path" (OuterVolumeSpecName: "cni-path") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.070114 kubelet[2842]: I1213 02:21:38.070097 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.070399 kubelet[2842]: I1213 02:21:38.070378 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.078317 systemd[1]: var-lib-kubelet-pods-15a856a5\x2d1808\x2d4281\x2daec5\x2de4b46d4370e7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:21:38.081628 systemd[1]: var-lib-kubelet-pods-15a856a5\x2d1808\x2d4281\x2daec5\x2de4b46d4370e7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:21:38.082068 kubelet[2842]: I1213 02:21:38.082031 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:21:38.083239 kubelet[2842]: I1213 02:21:38.083207 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:21:38.084819 kubelet[2842]: I1213 02:21:38.084786 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a856a5-1808-4281-aec5-e4b46d4370e7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:21:38.091998 kubelet[2842]: I1213 02:21:38.087431 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:21:38.090365 systemd[1]: var-lib-kubelet-pods-15a856a5\x2d1808\x2d4281\x2daec5\x2de4b46d4370e7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:21:38.092888 kubelet[2842]: I1213 02:21:38.092769 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a856a5-1808-4281-aec5-e4b46d4370e7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:21:38.096086 kubelet[2842]: I1213 02:21:38.095993 2842 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a856a5-1808-4281-aec5-e4b46d4370e7-kube-api-access-jmn22" (OuterVolumeSpecName: "kube-api-access-jmn22") pod "15a856a5-1808-4281-aec5-e4b46d4370e7" (UID: "15a856a5-1808-4281-aec5-e4b46d4370e7"). InnerVolumeSpecName "kube-api-access-jmn22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:21:38.169359 kubelet[2842]: I1213 02:21:38.169317 2842 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-etc-cni-netd\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169359 kubelet[2842]: I1213 02:21:38.169359 2842 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-hostproc\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169359 kubelet[2842]: I1213 02:21:38.169376 2842 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15a856a5-1808-4281-aec5-e4b46d4370e7-hubble-tls\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169390 2842 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-run\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169404 2842 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-ipsec-secrets\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169418 2842 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-bpf-maps\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169430 2842 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-xtables-lock\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169471 2842 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-host-proc-sys-kernel\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169484 2842 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-config-path\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169495 2842 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cni-path\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169582 2842 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15a856a5-1808-4281-aec5-e4b46d4370e7-clustermesh-secrets\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169598 2842 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-host-proc-sys-net\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169612 2842 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jmn22\" (UniqueName: 
\"kubernetes.io/projected/15a856a5-1808-4281-aec5-e4b46d4370e7-kube-api-access-jmn22\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169625 2842 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-lib-modules\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.169757 kubelet[2842]: I1213 02:21:38.169639 2842 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15a856a5-1808-4281-aec5-e4b46d4370e7-cilium-cgroup\") on node \"ip-172-31-31-142\" DevicePath \"\"" Dec 13 02:21:38.282297 systemd[1]: Removed slice kubepods-burstable-pod15a856a5_1808_4281_aec5_e4b46d4370e7.slice. Dec 13 02:21:38.846561 kubelet[2842]: I1213 02:21:38.846527 2842 scope.go:117] "RemoveContainer" containerID="ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8" Dec 13 02:21:38.857986 env[1730]: time="2024-12-13T02:21:38.857922729Z" level=info msg="RemoveContainer for \"ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8\"" Dec 13 02:21:38.879541 systemd[1]: var-lib-kubelet-pods-15a856a5\x2d1808\x2d4281\x2daec5\x2de4b46d4370e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djmn22.mount: Deactivated successfully. Dec 13 02:21:38.880897 env[1730]: time="2024-12-13T02:21:38.880852600Z" level=info msg="RemoveContainer for \"ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8\" returns successfully" Dec 13 02:21:38.988847 kubelet[2842]: I1213 02:21:38.987932 2842 topology_manager.go:215] "Topology Admit Handler" podUID="f75efe75-88d9-45da-9245-9f7667f83ff5" podNamespace="kube-system" podName="cilium-f9n9k" Dec 13 02:21:38.990057 kubelet[2842]: E1213 02:21:38.989731 2842 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15a856a5-1808-4281-aec5-e4b46d4370e7" containerName="mount-cgroup" Dec 13 02:21:38.990422 kubelet[2842]: I1213 02:21:38.990232 2842 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a856a5-1808-4281-aec5-e4b46d4370e7" containerName="mount-cgroup" Dec 13 02:21:39.027599 systemd[1]: Created slice kubepods-burstable-podf75efe75_88d9_45da_9245_9f7667f83ff5.slice. 
Dec 13 02:21:39.182542 kubelet[2842]: I1213 02:21:39.182418 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2279d\" (UniqueName: \"kubernetes.io/projected/f75efe75-88d9-45da-9245-9f7667f83ff5-kube-api-access-2279d\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182542 kubelet[2842]: I1213 02:21:39.182475 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-hostproc\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182542 kubelet[2842]: I1213 02:21:39.182506 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-host-proc-sys-net\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182542 kubelet[2842]: I1213 02:21:39.182533 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-cilium-run\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182559 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-bpf-maps\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182590 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f75efe75-88d9-45da-9245-9f7667f83ff5-cilium-ipsec-secrets\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182617 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f75efe75-88d9-45da-9245-9f7667f83ff5-hubble-tls\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182642 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-xtables-lock\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182669 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f75efe75-88d9-45da-9245-9f7667f83ff5-clustermesh-secrets\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182696 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/f75efe75-88d9-45da-9245-9f7667f83ff5-cilium-config-path\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182729 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-cni-path\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182759 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-lib-modules\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182790 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-host-proc-sys-kernel\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182821 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-cilium-cgroup\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.182859 kubelet[2842]: I1213 02:21:39.182851 2842 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f75efe75-88d9-45da-9245-9f7667f83ff5-etc-cni-netd\") pod \"cilium-f9n9k\" (UID: \"f75efe75-88d9-45da-9245-9f7667f83ff5\") " pod="kube-system/cilium-f9n9k" Dec 13 02:21:39.371058 env[1730]: time="2024-12-13T02:21:39.371001881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f9n9k,Uid:f75efe75-88d9-45da-9245-9f7667f83ff5,Namespace:kube-system,Attempt:0,}" Dec 13 02:21:39.406306 env[1730]: time="2024-12-13T02:21:39.406193461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:21:39.406581 env[1730]: time="2024-12-13T02:21:39.406253659Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:21:39.406581 env[1730]: time="2024-12-13T02:21:39.406549344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:21:39.407230 env[1730]: time="2024-12-13T02:21:39.407102235Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1 pid=4729 runtime=io.containerd.runc.v2 Dec 13 02:21:39.435452 systemd[1]: Started cri-containerd-feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1.scope. 
Dec 13 02:21:39.469300 env[1730]: time="2024-12-13T02:21:39.468725315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f9n9k,Uid:f75efe75-88d9-45da-9245-9f7667f83ff5,Namespace:kube-system,Attempt:0,} returns sandbox id \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\"" Dec 13 02:21:39.474625 env[1730]: time="2024-12-13T02:21:39.474589919Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:21:39.498231 env[1730]: time="2024-12-13T02:21:39.498180842Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7\"" Dec 13 02:21:39.501349 env[1730]: time="2024-12-13T02:21:39.499436749Z" level=info msg="StartContainer for \"924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7\"" Dec 13 02:21:39.530855 systemd[1]: Started cri-containerd-924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7.scope. Dec 13 02:21:39.583305 env[1730]: time="2024-12-13T02:21:39.580821885Z" level=info msg="StartContainer for \"924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7\" returns successfully" Dec 13 02:21:39.617762 systemd[1]: cri-containerd-924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7.scope: Deactivated successfully. Dec 13 02:21:39.694335 env[1730]: time="2024-12-13T02:21:39.694181110Z" level=info msg="shim disconnected" id=924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7 Dec 13 02:21:39.694335 env[1730]: time="2024-12-13T02:21:39.694239967Z" level=warning msg="cleaning up after shim disconnected" id=924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7 namespace=k8s.io Dec 13 02:21:39.694335 env[1730]: time="2024-12-13T02:21:39.694253804Z" level=info msg="cleaning up dead shim" Dec 13 02:21:39.719787 env[1730]: time="2024-12-13T02:21:39.719733717Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4814 runtime=io.containerd.runc.v2\n" Dec 13 02:21:39.860392 env[1730]: time="2024-12-13T02:21:39.860342809Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:21:39.936613 env[1730]: time="2024-12-13T02:21:39.936561434Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6\"" Dec 13 02:21:39.937737 env[1730]: time="2024-12-13T02:21:39.937698445Z" level=info msg="StartContainer for \"1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6\"" Dec 13 02:21:39.984838 systemd[1]: Started cri-containerd-1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6.scope. Dec 13 02:21:40.040724 env[1730]: time="2024-12-13T02:21:40.040669082Z" level=info msg="StartContainer for \"1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6\" returns successfully" Dec 13 02:21:40.055126 systemd[1]: cri-containerd-1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6.scope: Deactivated successfully. 
Dec 13 02:21:40.101353 env[1730]: time="2024-12-13T02:21:40.101294579Z" level=info msg="shim disconnected" id=1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6 Dec 13 02:21:40.101353 env[1730]: time="2024-12-13T02:21:40.101349504Z" level=warning msg="cleaning up after shim disconnected" id=1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6 namespace=k8s.io Dec 13 02:21:40.101353 env[1730]: time="2024-12-13T02:21:40.101361532Z" level=info msg="cleaning up dead shim" Dec 13 02:21:40.112433 env[1730]: time="2024-12-13T02:21:40.112386005Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4876 runtime=io.containerd.runc.v2\n" Dec 13 02:21:40.245189 kubelet[2842]: E1213 02:21:40.243478 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-b5t8g" podUID="de6ab9c1-077a-4e36-924d-e71f18537aab" Dec 13 02:21:40.247130 kubelet[2842]: I1213 02:21:40.247101 2842 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="15a856a5-1808-4281-aec5-e4b46d4370e7" path="/var/lib/kubelet/pods/15a856a5-1808-4281-aec5-e4b46d4370e7/volumes" Dec 13 02:21:40.586725 kubelet[2842]: W1213 02:21:40.586670 2842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15a856a5_1808_4281_aec5_e4b46d4370e7.slice/cri-containerd-ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8.scope WatchSource:0}: container "ab69957aee06f67d911a5f2280e4460b0492c3c42348a5e65d40c6c66a2427b8" in namespace "k8s.io": not found Dec 13 02:21:40.872567 env[1730]: time="2024-12-13T02:21:40.872456925Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:21:40.876271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6-rootfs.mount: Deactivated successfully. Dec 13 02:21:40.942515 env[1730]: time="2024-12-13T02:21:40.942350315Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07\"" Dec 13 02:21:40.947330 env[1730]: time="2024-12-13T02:21:40.945879577Z" level=info msg="StartContainer for \"5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07\"" Dec 13 02:21:40.994615 systemd[1]: Started cri-containerd-5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07.scope. Dec 13 02:21:41.093159 env[1730]: time="2024-12-13T02:21:41.093085989Z" level=info msg="StartContainer for \"5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07\" returns successfully" Dec 13 02:21:41.104117 systemd[1]: cri-containerd-5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07.scope: Deactivated successfully. 
Dec 13 02:21:41.151977 env[1730]: time="2024-12-13T02:21:41.151809567Z" level=info msg="shim disconnected" id=5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07 Dec 13 02:21:41.152453 env[1730]: time="2024-12-13T02:21:41.152419860Z" level=warning msg="cleaning up after shim disconnected" id=5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07 namespace=k8s.io Dec 13 02:21:41.153229 env[1730]: time="2024-12-13T02:21:41.153202984Z" level=info msg="cleaning up dead shim" Dec 13 02:21:41.163632 env[1730]: time="2024-12-13T02:21:41.163571353Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4931 runtime=io.containerd.runc.v2\n" Dec 13 02:21:41.880736 systemd[1]: run-containerd-runc-k8s.io-5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07-runc.cIXigw.mount: Deactivated successfully. Dec 13 02:21:41.881014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07-rootfs.mount: Deactivated successfully. Dec 13 02:21:41.893566 env[1730]: time="2024-12-13T02:21:41.893518280Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:21:41.928949 env[1730]: time="2024-12-13T02:21:41.928892607Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39\"" Dec 13 02:21:41.929906 env[1730]: time="2024-12-13T02:21:41.929843430Z" level=info msg="StartContainer for \"7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39\"" Dec 13 02:21:41.984042 systemd[1]: Started cri-containerd-7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39.scope. Dec 13 02:21:42.026206 systemd[1]: cri-containerd-7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39.scope: Deactivated successfully. 
Dec 13 02:21:42.033929 env[1730]: time="2024-12-13T02:21:42.033509445Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75efe75_88d9_45da_9245_9f7667f83ff5.slice/cri-containerd-7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39.scope/memory.events\": no such file or directory" Dec 13 02:21:42.037158 env[1730]: time="2024-12-13T02:21:42.037082076Z" level=info msg="StartContainer for \"7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39\" returns successfully" Dec 13 02:21:42.080444 env[1730]: time="2024-12-13T02:21:42.080339911Z" level=info msg="shim disconnected" id=7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39 Dec 13 02:21:42.080444 env[1730]: time="2024-12-13T02:21:42.080445441Z" level=warning msg="cleaning up after shim disconnected" id=7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39 namespace=k8s.io Dec 13 02:21:42.081133 env[1730]: time="2024-12-13T02:21:42.080463497Z" level=info msg="cleaning up dead shim" Dec 13 02:21:42.098671 env[1730]: time="2024-12-13T02:21:42.098619417Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:21:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4990 runtime=io.containerd.runc.v2\n" Dec 13 02:21:42.202605 env[1730]: time="2024-12-13T02:21:42.202490928Z" level=info msg="StopPodSandbox for \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\"" Dec 13 02:21:42.202986 env[1730]: time="2024-12-13T02:21:42.202928650Z" level=info msg="TearDown network for sandbox \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\" successfully" Dec 13 02:21:42.203148 env[1730]: time="2024-12-13T02:21:42.203126377Z" level=info msg="StopPodSandbox for \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\" returns successfully" Dec 13 02:21:42.203963 env[1730]: time="2024-12-13T02:21:42.203936739Z" level=info msg="RemovePodSandbox for \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\"" Dec 13 02:21:42.204070 env[1730]: time="2024-12-13T02:21:42.203972306Z" level=info msg="Forcibly stopping sandbox \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\"" Dec 13 02:21:42.204123 env[1730]: time="2024-12-13T02:21:42.204065011Z" level=info msg="TearDown network for sandbox \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\" successfully" Dec 13 02:21:42.212036 env[1730]: time="2024-12-13T02:21:42.211983888Z" level=info msg="RemovePodSandbox \"f309a0c791f0115ca1814ab36b5a6f977569ac147cff38457f3d0c86bb970f2f\" returns successfully" Dec 13 02:21:42.213544 env[1730]: time="2024-12-13T02:21:42.213014379Z" level=info msg="StopPodSandbox for \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\"" Dec 13 02:21:42.213697 env[1730]: time="2024-12-13T02:21:42.213626868Z" level=info msg="TearDown network for sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" successfully" Dec 13 02:21:42.213697 env[1730]: time="2024-12-13T02:21:42.213677141Z" level=info msg="StopPodSandbox for \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" returns successfully" Dec 13 02:21:42.214130 env[1730]: time="2024-12-13T02:21:42.214102263Z" level=info msg="RemovePodSandbox for \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\"" Dec 13 02:21:42.214311 env[1730]: time="2024-12-13T02:21:42.214133591Z" level=info msg="Forcibly stopping 
sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\"" Dec 13 02:21:42.214311 env[1730]: time="2024-12-13T02:21:42.214273067Z" level=info msg="TearDown network for sandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" successfully" Dec 13 02:21:42.220360 env[1730]: time="2024-12-13T02:21:42.220271617Z" level=info msg="RemovePodSandbox \"e0e643092d0a94e94cdf54a2f5a276b7e9d4bf50c1255fa20a587bf23e262f8a\" returns successfully" Dec 13 02:21:42.221508 env[1730]: time="2024-12-13T02:21:42.221465488Z" level=info msg="StopPodSandbox for \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\"" Dec 13 02:21:42.221658 env[1730]: time="2024-12-13T02:21:42.221600004Z" level=info msg="TearDown network for sandbox \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\" successfully" Dec 13 02:21:42.221729 env[1730]: time="2024-12-13T02:21:42.221655083Z" level=info msg="StopPodSandbox for \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\" returns successfully" Dec 13 02:21:42.222349 env[1730]: time="2024-12-13T02:21:42.222318011Z" level=info msg="RemovePodSandbox for \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\"" Dec 13 02:21:42.222462 env[1730]: time="2024-12-13T02:21:42.222351513Z" level=info msg="Forcibly stopping sandbox \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\"" Dec 13 02:21:42.222462 env[1730]: time="2024-12-13T02:21:42.222446203Z" level=info msg="TearDown network for sandbox \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\" successfully" Dec 13 02:21:42.229793 env[1730]: time="2024-12-13T02:21:42.229741496Z" level=info msg="RemovePodSandbox \"dd61163cc07f1a2d3df19e44d3bb01ee3e7495597ce55e115672dd240e135df0\" returns successfully" Dec 13 02:21:42.241825 kubelet[2842]: E1213 02:21:42.241694 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-b5t8g" podUID="de6ab9c1-077a-4e36-924d-e71f18537aab" Dec 13 02:21:42.502246 kubelet[2842]: E1213 02:21:42.501923 2842 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:21:42.875863 systemd[1]: run-containerd-runc-k8s.io-7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39-runc.WisgSM.mount: Deactivated successfully. Dec 13 02:21:42.876008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39-rootfs.mount: Deactivated successfully. Dec 13 02:21:42.905291 env[1730]: time="2024-12-13T02:21:42.904706503Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:21:42.941024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount477363907.mount: Deactivated successfully. 
Dec 13 02:21:42.971630 env[1730]: time="2024-12-13T02:21:42.971570539Z" level=info msg="CreateContainer within sandbox \"feeae4fba753f19837ce8e69c71d2e7a682f6d3086c8c868afb8697bc74f80f1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"37ad5cbef6a79a305fec7bad7e48db8635b6ae937bc5f124e95e5d11c2e1988b\""
Dec 13 02:21:42.973586 env[1730]: time="2024-12-13T02:21:42.973545929Z" level=info msg="StartContainer for \"37ad5cbef6a79a305fec7bad7e48db8635b6ae937bc5f124e95e5d11c2e1988b\""
Dec 13 02:21:43.057366 systemd[1]: Started cri-containerd-37ad5cbef6a79a305fec7bad7e48db8635b6ae937bc5f124e95e5d11c2e1988b.scope.
Dec 13 02:21:43.112893 env[1730]: time="2024-12-13T02:21:43.111167811Z" level=info msg="StartContainer for \"37ad5cbef6a79a305fec7bad7e48db8635b6ae937bc5f124e95e5d11c2e1988b\" returns successfully"
Dec 13 02:21:43.716047 kubelet[2842]: W1213 02:21:43.716006 2842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75efe75_88d9_45da_9245_9f7667f83ff5.slice/cri-containerd-924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7.scope WatchSource:0}: task 924b9759cb0b93e8d6b252559e8ec75a74a30479b29f3db390a7a8e6e86bb8f7 not found: not found
Dec 13 02:21:44.247683 kubelet[2842]: E1213 02:21:44.247642 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-b5t8g" podUID="de6ab9c1-077a-4e36-924d-e71f18537aab"
Dec 13 02:21:44.329294 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:21:45.388020 kubelet[2842]: I1213 02:21:45.387980 2842 setters.go:568] "Node became not ready" node="ip-172-31-31-142" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:21:45Z","lastTransitionTime":"2024-12-13T02:21:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:21:46.101541 systemd[1]: run-containerd-runc-k8s.io-37ad5cbef6a79a305fec7bad7e48db8635b6ae937bc5f124e95e5d11c2e1988b-runc.Tjg0qZ.mount: Deactivated successfully.
Dec 13 02:21:46.242185 kubelet[2842]: E1213 02:21:46.241816 2842 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-76f75df574-b5t8g" podUID="de6ab9c1-077a-4e36-924d-e71f18537aab"
Dec 13 02:21:46.842012 kubelet[2842]: W1213 02:21:46.841964 2842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75efe75_88d9_45da_9245_9f7667f83ff5.slice/cri-containerd-1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6.scope WatchSource:0}: task 1ac999bd5f004c23ed409f22d4aef34dead4e6c4155bdfda2eda419f6b43cca6 not found: not found
Dec 13 02:21:47.927705 (udev-worker)[5562]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:21:47.931926 (udev-worker)[5563]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:21:47.958488 systemd-networkd[1459]: lxc_health: Link UP
Dec 13 02:21:47.971869 systemd-networkd[1459]: lxc_health: Gained carrier
Dec 13 02:21:47.972396 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:21:49.416228 kubelet[2842]: I1213 02:21:49.416134 2842 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-f9n9k" podStartSLOduration=11.416055632 podStartE2EDuration="11.416055632s" podCreationTimestamp="2024-12-13 02:21:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:21:43.946107782 +0000 UTC m=+122.032480675" watchObservedRunningTime="2024-12-13 02:21:49.416055632 +0000 UTC m=+127.502428500"
Dec 13 02:21:49.867429 systemd-networkd[1459]: lxc_health: Gained IPv6LL
Dec 13 02:21:49.954400 kubelet[2842]: W1213 02:21:49.954355 2842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75efe75_88d9_45da_9245_9f7667f83ff5.slice/cri-containerd-5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07.scope WatchSource:0}: task 5b7e0386abbbe54e1edd3c62b403128e3d6d0a243567bb815a261f369e8faf07 not found: not found
Dec 13 02:21:50.710645 systemd[1]: run-containerd-runc-k8s.io-37ad5cbef6a79a305fec7bad7e48db8635b6ae937bc5f124e95e5d11c2e1988b-runc.dZCLbM.mount: Deactivated successfully.
Dec 13 02:21:53.073541 kubelet[2842]: W1213 02:21:53.073489 2842 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75efe75_88d9_45da_9245_9f7667f83ff5.slice/cri-containerd-7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39.scope WatchSource:0}: task 7e251efa54009d897a59b5bb0b57554c68eb417aaae4ed384a6a0c435bce4d39 not found: not found
Dec 13 02:21:55.684052 sshd[4591]: pam_unix(sshd:session): session closed for user core
Dec 13 02:21:55.689101 systemd-logind[1724]: Session 26 logged out. Waiting for processes to exit.
Dec 13 02:21:55.691093 systemd[1]: sshd@25-172.31.31.142:22-139.178.68.195:47592.service: Deactivated successfully.
Dec 13 02:21:55.692247 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 02:21:55.693906 systemd-logind[1724]: Removed session 26.
Dec 13 02:22:10.827037 systemd[1]: cri-containerd-712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2.scope: Deactivated successfully.
Dec 13 02:22:10.827394 systemd[1]: cri-containerd-712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2.scope: Consumed 3.702s CPU time.
Dec 13 02:22:10.892261 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2-rootfs.mount: Deactivated successfully.
Dec 13 02:22:10.924094 env[1730]: time="2024-12-13T02:22:10.924034852Z" level=info msg="shim disconnected" id=712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2
Dec 13 02:22:10.925068 env[1730]: time="2024-12-13T02:22:10.924097089Z" level=warning msg="cleaning up after shim disconnected" id=712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2 namespace=k8s.io
Dec 13 02:22:10.925068 env[1730]: time="2024-12-13T02:22:10.924110639Z" level=info msg="cleaning up dead shim"
Dec 13 02:22:10.935374 env[1730]: time="2024-12-13T02:22:10.935167235Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:22:10Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5695 runtime=io.containerd.runc.v2\n"
Dec 13 02:22:10.973388 kubelet[2842]: I1213 02:22:10.973307 2842 scope.go:117] "RemoveContainer" containerID="712be35552494236f69af76139b16b028cb1b568415b80c8b9277606c0e5fde2"
Dec 13 02:22:10.977480 env[1730]: time="2024-12-13T02:22:10.977440315Z" level=info msg="CreateContainer within sandbox \"02eaa13575b52123c0e4ebdab43824d642798b100740c923f339ec396029165d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 02:22:11.002838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942638600.mount: Deactivated successfully.
Dec 13 02:22:11.019729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174172107.mount: Deactivated successfully.
Dec 13 02:22:11.021009 env[1730]: time="2024-12-13T02:22:11.020920952Z" level=info msg="CreateContainer within sandbox \"02eaa13575b52123c0e4ebdab43824d642798b100740c923f339ec396029165d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a368419444a4f1217369a9880e81506f8e794623be7e51895ff2520e877197e1\""
Dec 13 02:22:11.021536 env[1730]: time="2024-12-13T02:22:11.021509771Z" level=info msg="StartContainer for \"a368419444a4f1217369a9880e81506f8e794623be7e51895ff2520e877197e1\""
Dec 13 02:22:11.051152 systemd[1]: Started cri-containerd-a368419444a4f1217369a9880e81506f8e794623be7e51895ff2520e877197e1.scope.
Dec 13 02:22:11.172226 env[1730]: time="2024-12-13T02:22:11.170988632Z" level=info msg="StartContainer for \"a368419444a4f1217369a9880e81506f8e794623be7e51895ff2520e877197e1\" returns successfully"
Dec 13 02:22:14.686730 systemd[1]: cri-containerd-c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05.scope: Deactivated successfully.
Dec 13 02:22:14.687212 systemd[1]: cri-containerd-c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05.scope: Consumed 1.796s CPU time.
Dec 13 02:22:14.718473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05-rootfs.mount: Deactivated successfully.
Dec 13 02:22:14.750229 env[1730]: time="2024-12-13T02:22:14.750174669Z" level=info msg="shim disconnected" id=c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05
Dec 13 02:22:14.750229 env[1730]: time="2024-12-13T02:22:14.750230094Z" level=warning msg="cleaning up after shim disconnected" id=c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05 namespace=k8s.io
Dec 13 02:22:14.751105 env[1730]: time="2024-12-13T02:22:14.750243044Z" level=info msg="cleaning up dead shim"
Dec 13 02:22:14.760036 env[1730]: time="2024-12-13T02:22:14.759987562Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:22:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5755 runtime=io.containerd.runc.v2\n"
Dec 13 02:22:14.994195 kubelet[2842]: I1213 02:22:14.993651 2842 scope.go:117] "RemoveContainer" containerID="c0d7aced378fb719996360e5d8034dbf80c8b5b88ee30de223ff4e83ef884b05"
Dec 13 02:22:15.032873 env[1730]: time="2024-12-13T02:22:15.032038586Z" level=info msg="CreateContainer within sandbox \"86f56d030a9885f5056a928485b904e3922fd62e33450abb9e25bd9e071f0abe\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 02:22:15.081777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175234137.mount: Deactivated successfully.
Dec 13 02:22:15.104742 env[1730]: time="2024-12-13T02:22:15.104683873Z" level=info msg="CreateContainer within sandbox \"86f56d030a9885f5056a928485b904e3922fd62e33450abb9e25bd9e071f0abe\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"29327b2511a55c8e3eb74426b27bead96a31a4fdca6300634b21e5480848cde1\""
Dec 13 02:22:15.106905 env[1730]: time="2024-12-13T02:22:15.106865968Z" level=info msg="StartContainer for \"29327b2511a55c8e3eb74426b27bead96a31a4fdca6300634b21e5480848cde1\""
Dec 13 02:22:15.170429 systemd[1]: Started cri-containerd-29327b2511a55c8e3eb74426b27bead96a31a4fdca6300634b21e5480848cde1.scope.
Dec 13 02:22:15.204858 kubelet[2842]: E1213 02:22:15.204824 2842 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-31-142)"
Dec 13 02:22:15.273300 env[1730]: time="2024-12-13T02:22:15.271751820Z" level=info msg="StartContainer for \"29327b2511a55c8e3eb74426b27bead96a31a4fdca6300634b21e5480848cde1\" returns successfully"
Dec 13 02:22:25.206153 kubelet[2842]: E1213 02:22:25.205898 2842 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.142:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-142?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"