Dec 13 02:00:42.913984 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 02:00:42.914006 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:00:42.914016 kernel: BIOS-provided physical RAM map: Dec 13 02:00:42.914023 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 02:00:42.914029 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 02:00:42.914036 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 02:00:42.914044 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 02:00:42.914051 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 02:00:42.914059 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 02:00:42.914066 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 02:00:42.914073 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 02:00:42.914080 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 02:00:42.914086 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 02:00:42.914094 kernel: NX (Execute Disable) protection: active Dec 13 02:00:42.914103 kernel: SMBIOS 2.8 present. Dec 13 02:00:42.914111 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 02:00:42.914126 kernel: Hypervisor detected: KVM Dec 13 02:00:42.914138 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 02:00:42.914146 kernel: kvm-clock: cpu 0, msr 2d19b001, primary cpu clock Dec 13 02:00:42.914154 kernel: kvm-clock: using sched offset of 2715203787 cycles Dec 13 02:00:42.914163 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 02:00:42.914172 kernel: tsc: Detected 2794.748 MHz processor Dec 13 02:00:42.914181 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 02:00:42.914192 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 02:00:42.914201 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 02:00:42.914210 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 02:00:42.914219 kernel: Using GB pages for direct mapping Dec 13 02:00:42.914227 kernel: ACPI: Early table checksum verification disabled Dec 13 02:00:42.914236 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 02:00:42.914245 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:00:42.914253 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:00:42.914262 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:00:42.914272 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 02:00:42.914281 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:00:42.914289 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:00:42.914298 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:00:42.914307 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 02:00:42.914316 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 02:00:42.914324 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 02:00:42.914333 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 02:00:42.914346 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 02:00:42.914355 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 02:00:42.914364 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 02:00:42.914377 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 02:00:42.914385 kernel: No NUMA configuration found Dec 13 02:00:42.914393 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 02:00:42.914403 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 02:00:42.914411 kernel: Zone ranges: Dec 13 02:00:42.914419 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 02:00:42.914427 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 02:00:42.914434 kernel: Normal empty Dec 13 02:00:42.914442 kernel: Movable zone start for each node Dec 13 02:00:42.914450 kernel: Early memory node ranges Dec 13 02:00:42.914458 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 02:00:42.914466 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 02:00:42.914475 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 02:00:42.914483 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 02:00:42.914491 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 02:00:42.914499 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 02:00:42.914507 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 02:00:42.914514 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 02:00:42.914522 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 02:00:42.914530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 02:00:42.914538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 02:00:42.914546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 02:00:42.914555 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 02:00:42.914563 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 02:00:42.914571 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 02:00:42.914579 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 02:00:42.914587 kernel: TSC deadline timer available Dec 13 02:00:42.914594 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 02:00:42.914602 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 02:00:42.914610 kernel: kvm-guest: setup PV sched yield Dec 13 02:00:42.914618 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 02:00:42.914627 kernel: Booting paravirtualized kernel on KVM Dec 13 02:00:42.914636 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 02:00:42.914644 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 02:00:42.914652 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Dec 13 02:00:42.914660 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 02:00:42.914667 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 02:00:42.914675 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 02:00:42.914683 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Dec 13 02:00:42.914691 kernel: kvm-guest: PV spinlocks enabled Dec 13 02:00:42.914700 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 02:00:42.914708 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Dec 13 02:00:42.914716 kernel: Policy zone: DMA32 Dec 13 02:00:42.914725 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:00:42.914734 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 02:00:42.914742 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 02:00:42.914750 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 02:00:42.914758 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 02:00:42.914767 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 134796K reserved, 0K cma-reserved) Dec 13 02:00:42.914775 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 02:00:42.914783 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 02:00:42.914791 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 02:00:42.914799 kernel: rcu: Hierarchical RCU implementation. Dec 13 02:00:42.914807 kernel: rcu: RCU event tracing is enabled. Dec 13 02:00:42.914816 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 02:00:42.914824 kernel: Rude variant of Tasks RCU enabled. Dec 13 02:00:42.914832 kernel: Tracing variant of Tasks RCU enabled. Dec 13 02:00:42.914841 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 02:00:42.914849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 02:00:42.914857 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 02:00:42.914865 kernel: random: crng init done Dec 13 02:00:42.914884 kernel: Console: colour VGA+ 80x25 Dec 13 02:00:42.914892 kernel: printk: console [ttyS0] enabled Dec 13 02:00:42.914900 kernel: ACPI: Core revision 20210730 Dec 13 02:00:42.914919 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 02:00:42.914927 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 02:00:42.914937 kernel: x2apic enabled Dec 13 02:00:42.914945 kernel: Switched APIC routing to physical x2apic. Dec 13 02:00:42.914952 kernel: kvm-guest: setup PV IPIs Dec 13 02:00:42.914960 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 02:00:42.914968 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 02:00:42.914976 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 02:00:42.914985 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 02:00:42.914992 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 02:00:42.915001 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 02:00:42.915015 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 02:00:42.915023 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 02:00:42.915031 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 02:00:42.915041 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 02:00:42.915049 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 02:00:42.915058 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 02:00:42.915066 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 02:00:42.915075 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 02:00:42.915083 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 02:00:42.915093 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 02:00:42.915101 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 02:00:42.915109 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 02:00:42.915118 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 02:00:42.915126 kernel: Freeing SMP alternatives memory: 32K Dec 13 02:00:42.915134 kernel: pid_max: default: 32768 minimum: 301 Dec 13 02:00:42.915143 kernel: LSM: Security Framework initializing Dec 13 02:00:42.915152 kernel: SELinux: Initializing. Dec 13 02:00:42.915160 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 02:00:42.915169 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 02:00:42.915177 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 02:00:42.915185 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 02:00:42.915194 kernel: ... version: 0 Dec 13 02:00:42.915202 kernel: ... bit width: 48 Dec 13 02:00:42.915210 kernel: ... generic registers: 6 Dec 13 02:00:42.915218 kernel: ... value mask: 0000ffffffffffff Dec 13 02:00:42.915228 kernel: ... max period: 00007fffffffffff Dec 13 02:00:42.915236 kernel: ... fixed-purpose events: 0 Dec 13 02:00:42.915244 kernel: ... event mask: 000000000000003f Dec 13 02:00:42.915253 kernel: signal: max sigframe size: 1776 Dec 13 02:00:42.915261 kernel: rcu: Hierarchical SRCU implementation. Dec 13 02:00:42.915269 kernel: smp: Bringing up secondary CPUs ... Dec 13 02:00:42.915277 kernel: x86: Booting SMP configuration: Dec 13 02:00:42.915285 kernel: .... 
node #0, CPUs: #1 Dec 13 02:00:42.915294 kernel: kvm-clock: cpu 1, msr 2d19b041, secondary cpu clock Dec 13 02:00:42.915302 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 02:00:42.915311 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Dec 13 02:00:42.915320 kernel: #2 Dec 13 02:00:42.915328 kernel: kvm-clock: cpu 2, msr 2d19b081, secondary cpu clock Dec 13 02:00:42.915336 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 02:00:42.915345 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Dec 13 02:00:42.915353 kernel: #3 Dec 13 02:00:42.915361 kernel: kvm-clock: cpu 3, msr 2d19b0c1, secondary cpu clock Dec 13 02:00:42.915369 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 02:00:42.915377 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Dec 13 02:00:42.915387 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 02:00:42.915395 kernel: smpboot: Max logical packages: 1 Dec 13 02:00:42.915403 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 02:00:42.915411 kernel: devtmpfs: initialized Dec 13 02:00:42.915420 kernel: x86/mm: Memory block size: 128MB Dec 13 02:00:42.915428 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 02:00:42.915437 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 02:00:42.915445 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 02:00:42.915453 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 02:00:42.915463 kernel: audit: initializing netlink subsys (disabled) Dec 13 02:00:42.915471 kernel: audit: type=2000 audit(1734055242.733:1): state=initialized audit_enabled=0 res=1 Dec 13 02:00:42.915479 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 02:00:42.915487 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 02:00:42.915496 kernel: cpuidle: using governor menu Dec 13 02:00:42.915504 kernel: ACPI: bus type PCI registered Dec 13 02:00:42.915512 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 02:00:42.915520 kernel: dca service started, version 1.12.1 Dec 13 02:00:42.915529 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 02:00:42.915539 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 02:00:42.915547 kernel: PCI: Using configuration type 1 for base access Dec 13 02:00:42.915555 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 02:00:42.915564 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 02:00:42.915572 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 02:00:42.915580 kernel: ACPI: Added _OSI(Module Device) Dec 13 02:00:42.915588 kernel: ACPI: Added _OSI(Processor Device) Dec 13 02:00:42.915597 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 02:00:42.915605 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 02:00:42.915615 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 02:00:42.915623 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 02:00:42.915632 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 02:00:42.915640 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 02:00:42.915648 kernel: ACPI: Interpreter enabled Dec 13 02:00:42.915657 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 02:00:42.915665 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 02:00:42.915673 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 02:00:42.915681 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 02:00:42.915691 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 02:00:42.915836 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 02:00:42.915945 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 02:00:42.916024 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 02:00:42.916034 kernel: PCI host bridge to bus 0000:00 Dec 13 02:00:42.916115 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 02:00:42.916186 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 02:00:42.916259 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 02:00:42.916330 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 02:00:42.916398 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 02:00:42.916472 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 02:00:42.916541 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 02:00:42.916632 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 02:00:42.916722 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 02:00:42.916801 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 02:00:42.916893 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 02:00:42.918041 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 02:00:42.918144 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 02:00:42.918243 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 02:00:42.918335 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 02:00:42.918434 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 02:00:42.918528 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 02:00:42.918625 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 02:00:42.918715 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 02:00:42.918807 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 02:00:42.918962 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 02:00:42.919068 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 02:00:42.919163 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 02:00:42.919252 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 02:00:42.919340 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 02:00:42.919431 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 02:00:42.919527 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 02:00:42.919619 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 02:00:42.919718 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 02:00:42.919811 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 02:00:42.919931 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 02:00:42.920030 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 02:00:42.920133 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 02:00:42.920149 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 02:00:42.920160 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 02:00:42.920170 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 02:00:42.920183 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 02:00:42.920192 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 02:00:42.920202 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 02:00:42.920211 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 02:00:42.920221 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 02:00:42.920231 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 02:00:42.920240 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 02:00:42.920250 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 02:00:42.920260 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 02:00:42.920272 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 02:00:42.920281 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 02:00:42.920290 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 02:00:42.920312 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 02:00:42.920322 kernel: iommu: Default domain type: Translated Dec 13 02:00:42.920332 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 02:00:42.920444 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 02:00:42.920567 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 02:00:42.920691 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 02:00:42.920705 kernel: vgaarb: loaded Dec 13 02:00:42.920715 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 02:00:42.920725 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 02:00:42.920747 kernel: PTP clock support registered Dec 13 02:00:42.920757 kernel: PCI: Using ACPI for IRQ routing Dec 13 02:00:42.920766 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 02:00:42.920776 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 02:00:42.920786 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 02:00:42.920797 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 02:00:42.920807 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 02:00:42.920817 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 02:00:42.920827 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 02:00:42.920837 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 02:00:42.920847 kernel: pnp: PnP ACPI init Dec 13 02:00:42.920986 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 02:00:42.921002 kernel: pnp: PnP ACPI: found 6 devices Dec 13 02:00:42.921016 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 02:00:42.921026 kernel: NET: Registered PF_INET protocol family Dec 13 02:00:42.921048 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 02:00:42.921058 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 02:00:42.921068 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 02:00:42.921078 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 02:00:42.921088 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 02:00:42.921097 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 02:00:42.921107 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 02:00:42.921119 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 02:00:42.921128 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 02:00:42.921138 kernel: NET: Registered PF_XDP protocol family Dec 13 02:00:42.921236 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 02:00:42.921331 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 02:00:42.921427 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 02:00:42.921522 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 02:00:42.921617 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 02:00:42.921724 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 02:00:42.921741 kernel: PCI: CLS 0 bytes, default 64 Dec 13 02:00:42.921762 kernel: Initialise system trusted keyrings Dec 13 02:00:42.921772 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 02:00:42.921782 kernel: Key type asymmetric registered Dec 13 02:00:42.921792 kernel: Asymmetric key parser 'x509' registered Dec 13 02:00:42.921812 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 02:00:42.921822 kernel: io scheduler mq-deadline registered Dec 13 02:00:42.921832 kernel: io scheduler kyber registered Dec 13 02:00:42.921841 kernel: io scheduler bfq registered Dec 13 02:00:42.921853 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 02:00:42.921887 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 02:00:42.921897 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 
02:00:42.921926 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 02:00:42.921947 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 02:00:42.921957 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 02:00:42.921967 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 02:00:42.921977 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 02:00:42.921998 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 02:00:42.922117 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 02:00:42.922140 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 02:00:42.922239 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 02:00:42.922349 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T02:00:42 UTC (1734055242) Dec 13 02:00:42.922459 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 02:00:42.922472 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:00:42.922482 kernel: Segment Routing with IPv6 Dec 13 02:00:42.922492 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:00:42.922516 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:00:42.922526 kernel: Key type dns_resolver registered Dec 13 02:00:42.922535 kernel: IPI shorthand broadcast: enabled Dec 13 02:00:42.922545 kernel: sched_clock: Marking stable (447288833, 110424447)->(656770409, -99057129) Dec 13 02:00:42.922555 kernel: registered taskstats version 1 Dec 13 02:00:42.922564 kernel: Loading compiled-in X.509 certificates Dec 13 02:00:42.922574 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:00:42.922584 kernel: Key type .fscrypt registered Dec 13 02:00:42.922593 kernel: Key type fscrypt-provisioning registered Dec 13 02:00:42.922605 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 02:00:42.922615 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:00:42.922625 kernel: ima: No architecture policies found Dec 13 02:00:42.922635 kernel: clk: Disabling unused clocks Dec 13 02:00:42.922644 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 02:00:42.922654 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:00:42.922664 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:00:42.922673 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:00:42.922684 kernel: Run /init as init process Dec 13 02:00:42.922694 kernel: with arguments: Dec 13 02:00:42.922703 kernel: /init Dec 13 02:00:42.922713 kernel: with environment: Dec 13 02:00:42.922722 kernel: HOME=/ Dec 13 02:00:42.922731 kernel: TERM=linux Dec 13 02:00:42.922741 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:00:42.922754 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:00:42.922767 systemd[1]: Detected virtualization kvm. Dec 13 02:00:42.922778 systemd[1]: Detected architecture x86-64. Dec 13 02:00:42.922788 systemd[1]: Running in initrd. Dec 13 02:00:42.922798 systemd[1]: No hostname configured, using default hostname. Dec 13 02:00:42.922808 systemd[1]: Hostname set to . 
Dec 13 02:00:42.922819 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:00:42.922829 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:00:42.922840 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:00:42.922851 systemd[1]: Reached target cryptsetup.target. Dec 13 02:00:42.922864 systemd[1]: Reached target paths.target. Dec 13 02:00:42.922893 systemd[1]: Reached target slices.target. Dec 13 02:00:42.922915 systemd[1]: Reached target swap.target. Dec 13 02:00:42.922926 systemd[1]: Reached target timers.target. Dec 13 02:00:42.922937 systemd[1]: Listening on iscsid.socket. Dec 13 02:00:42.922950 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:00:42.922960 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:00:42.922971 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:00:42.922982 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:00:42.922992 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:00:42.923003 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:00:42.923014 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:00:42.923026 systemd[1]: Reached target sockets.target. Dec 13 02:00:42.923037 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:00:42.923049 systemd[1]: Finished network-cleanup.service. Dec 13 02:00:42.923059 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:00:42.923070 systemd[1]: Starting systemd-journald.service... Dec 13 02:00:42.923081 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:00:42.923091 systemd[1]: Starting systemd-resolved.service... Dec 13 02:00:42.923102 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:00:42.923113 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:00:42.923124 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:00:42.923135 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:00:42.923147 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:00:42.923158 kernel: audit: type=1130 audit(1734055242.913:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.923172 systemd-journald[197]: Journal started Dec 13 02:00:42.923226 systemd-journald[197]: Runtime Journal (/run/log/journal/e71e779a90f74c83a28eadcbac13592e) is 6.0M, max 48.5M, 42.5M free. Dec 13 02:00:42.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.907689 systemd-modules-load[198]: Inserted module 'overlay' Dec 13 02:00:42.959248 systemd[1]: Started systemd-journald.service. Dec 13 02:00:42.959276 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:00:42.959290 kernel: Bridge firewalling registered Dec 13 02:00:42.931847 systemd-resolved[199]: Positive Trust Anchors: Dec 13 02:00:42.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:00:42.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.931856 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:00:42.967798 kernel: audit: type=1130 audit(1734055242.958:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.967818 kernel: audit: type=1130 audit(1734055242.964:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.931921 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:00:42.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.934588 systemd-resolved[199]: Defaulting to hostname 'linux'. Dec 13 02:00:42.979355 kernel: audit: type=1130 audit(1734055242.967:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.949737 systemd-modules-load[198]: Inserted module 'br_netfilter' Dec 13 02:00:42.981647 kernel: SCSI subsystem initialized Dec 13 02:00:42.959452 systemd[1]: Started systemd-resolved.service. Dec 13 02:00:42.965200 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:00:42.968244 systemd[1]: Reached target nss-lookup.target. Dec 13 02:00:42.979425 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:00:42.993534 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:00:42.993585 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:00:42.993600 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:00:42.996440 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:00:43.002195 kernel: audit: type=1130 audit(1734055242.996:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:42.998074 systemd[1]: Starting dracut-cmdline.service... 
Dec 13 02:00:43.001461 systemd-modules-load[198]: Inserted module 'dm_multipath' Dec 13 02:00:43.008432 kernel: audit: type=1130 audit(1734055243.003:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.008501 dracut-cmdline[216]: dracut-dracut-053 Dec 13 02:00:43.002366 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:00:43.007778 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:00:43.013306 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:00:43.016053 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:00:43.022348 kernel: audit: type=1130 audit(1734055243.017:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.075932 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:00:43.091932 kernel: iscsi: registered transport (tcp) Dec 13 02:00:43.112936 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:00:43.112956 kernel: QLogic iSCSI HBA Driver Dec 13 02:00:43.141169 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:00:43.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.143528 systemd[1]: Starting dracut-pre-udev.service... Dec 13 02:00:43.147171 kernel: audit: type=1130 audit(1734055243.142:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.187939 kernel: raid6: avx2x4 gen() 30710 MB/s Dec 13 02:00:43.204935 kernel: raid6: avx2x4 xor() 8079 MB/s Dec 13 02:00:43.221933 kernel: raid6: avx2x2 gen() 30937 MB/s Dec 13 02:00:43.238934 kernel: raid6: avx2x2 xor() 19120 MB/s Dec 13 02:00:43.255933 kernel: raid6: avx2x1 gen() 26381 MB/s Dec 13 02:00:43.272934 kernel: raid6: avx2x1 xor() 15247 MB/s Dec 13 02:00:43.289935 kernel: raid6: sse2x4 gen() 14736 MB/s Dec 13 02:00:43.306945 kernel: raid6: sse2x4 xor() 7343 MB/s Dec 13 02:00:43.323942 kernel: raid6: sse2x2 gen() 16020 MB/s Dec 13 02:00:43.340943 kernel: raid6: sse2x2 xor() 9707 MB/s Dec 13 02:00:43.357941 kernel: raid6: sse2x1 gen() 12391 MB/s Dec 13 02:00:43.375343 kernel: raid6: sse2x1 xor() 7765 MB/s Dec 13 02:00:43.375367 kernel: raid6: using algorithm avx2x2 gen() 30937 MB/s Dec 13 02:00:43.375380 kernel: raid6: .... 
xor() 19120 MB/s, rmw enabled Dec 13 02:00:43.376072 kernel: raid6: using avx2x2 recovery algorithm Dec 13 02:00:43.387938 kernel: xor: automatically using best checksumming function avx Dec 13 02:00:43.487951 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:00:43.495796 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:00:43.501238 kernel: audit: type=1130 audit(1734055243.495:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.500000 audit: BPF prog-id=7 op=LOAD Dec 13 02:00:43.500000 audit: BPF prog-id=8 op=LOAD Dec 13 02:00:43.501693 systemd[1]: Starting systemd-udevd.service... Dec 13 02:00:43.518587 systemd-udevd[402]: Using default interface naming scheme 'v252'. Dec 13 02:00:43.523609 systemd[1]: Started systemd-udevd.service. Dec 13 02:00:43.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.525788 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:00:43.545307 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Dec 13 02:00:43.594690 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:00:43.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.597972 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:00:43.644111 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:00:43.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:43.673688 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 02:00:43.697382 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:00:43.697401 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 02:00:43.697412 kernel: GPT:9289727 != 19775487 Dec 13 02:00:43.697423 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:00:43.697434 kernel: GPT:9289727 != 19775487 Dec 13 02:00:43.697444 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:00:43.697455 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:00:43.697465 kernel: libata version 3.00 loaded. Dec 13 02:00:43.697477 kernel: AVX2 version of gcm_enc/dec engaged. 
Dec 13 02:00:43.698930 kernel: AES CTR mode by8 optimization enabled Dec 13 02:00:43.702581 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 02:00:43.728356 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 02:00:43.728380 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 02:00:43.728517 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 02:00:43.728624 kernel: scsi host0: ahci Dec 13 02:00:43.728752 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (454) Dec 13 02:00:43.728766 kernel: scsi host1: ahci Dec 13 02:00:43.728939 kernel: scsi host2: ahci Dec 13 02:00:43.729071 kernel: scsi host3: ahci Dec 13 02:00:43.729189 kernel: scsi host4: ahci Dec 13 02:00:43.729307 kernel: scsi host5: ahci Dec 13 02:00:43.729428 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31 Dec 13 02:00:43.729442 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31 Dec 13 02:00:43.729457 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31 Dec 13 02:00:43.729469 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31 Dec 13 02:00:43.729480 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31 Dec 13 02:00:43.729491 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31 Dec 13 02:00:43.722034 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:00:43.770188 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:00:43.770572 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:00:43.780530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:00:43.789084 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:00:43.792029 systemd[1]: Starting disk-uuid.service... Dec 13 02:00:43.971485 disk-uuid[535]: Primary Header is updated. Dec 13 02:00:43.971485 disk-uuid[535]: Secondary Entries is updated. Dec 13 02:00:43.971485 disk-uuid[535]: Secondary Header is updated. Dec 13 02:00:43.976926 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:00:43.981951 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:00:43.985936 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:00:44.038744 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 02:00:44.038798 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 02:00:44.038808 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 02:00:44.042811 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 02:00:44.042963 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 02:00:44.042975 kernel: ata3.00: applying bridge limits Dec 13 02:00:44.043957 kernel: ata3.00: configured for UDMA/100 Dec 13 02:00:44.043998 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 02:00:44.046059 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 02:00:44.046114 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 02:00:44.081063 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 02:00:44.098675 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 02:00:44.098699 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 02:00:44.989956 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 02:00:44.994139 disk-uuid[536]: The operation has completed successfully. 
Dec 13 02:00:45.020203 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:00:45.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.020286 systemd[1]: Finished disk-uuid.service. Dec 13 02:00:45.024951 systemd[1]: Starting verity-setup.service... Dec 13 02:00:45.037942 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 02:00:45.058377 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:00:45.060565 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:00:45.064240 systemd[1]: Finished verity-setup.service. Dec 13 02:00:45.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.118934 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:00:45.119046 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:00:45.119597 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:00:45.120467 systemd[1]: Starting ignition-setup.service... Dec 13 02:00:45.123074 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:00:45.132276 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:00:45.132304 kernel: BTRFS info (device vda6): using free space tree Dec 13 02:00:45.132319 kernel: BTRFS info (device vda6): has skinny extents Dec 13 02:00:45.139368 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 02:00:45.147855 systemd[1]: Finished ignition-setup.service. Dec 13 02:00:45.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.150228 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 02:00:45.187264 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:00:45.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.189000 audit: BPF prog-id=9 op=LOAD Dec 13 02:00:45.189580 systemd[1]: Starting systemd-networkd.service... 
Dec 13 02:00:45.191457 ignition[652]: Ignition 2.14.0 Dec 13 02:00:45.191468 ignition[652]: Stage: fetch-offline Dec 13 02:00:45.191529 ignition[652]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:00:45.191541 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:00:45.192365 ignition[652]: parsed url from cmdline: "" Dec 13 02:00:45.192370 ignition[652]: no config URL provided Dec 13 02:00:45.192377 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:00:45.192386 ignition[652]: no config at "/usr/lib/ignition/user.ign" Dec 13 02:00:45.193101 ignition[652]: op(1): [started] loading QEMU firmware config module Dec 13 02:00:45.193108 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 02:00:45.201276 ignition[652]: op(1): [finished] loading QEMU firmware config module Dec 13 02:00:45.202843 ignition[652]: parsing config with SHA512: 9f99cb7ac07bf81b93eb63c1636c32cc09a39174bc2502a4ad005583e32083b65a9f803eea7ab031e50f80c87fb8f2b05b319535b3db23aa38bcdb89c2c8d8a0 Dec 13 02:00:45.206224 unknown[652]: fetched base config from "system" Dec 13 02:00:45.206234 unknown[652]: fetched user config from "qemu" Dec 13 02:00:45.210157 ignition[652]: fetch-offline: fetch-offline passed Dec 13 02:00:45.210255 ignition[652]: Ignition finished successfully Dec 13 02:00:45.211566 systemd-networkd[728]: lo: Link UP Dec 13 02:00:45.211571 systemd-networkd[728]: lo: Gained carrier Dec 13 02:00:45.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.212197 systemd-networkd[728]: Enumeration completed Dec 13 02:00:45.212295 systemd[1]: Started systemd-networkd.service. Dec 13 02:00:45.212668 systemd-networkd[728]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:00:45.212841 systemd[1]: Reached target network.target. Dec 13 02:00:45.214142 systemd-networkd[728]: eth0: Link UP Dec 13 02:00:45.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.214145 systemd-networkd[728]: eth0: Gained carrier Dec 13 02:00:45.216289 systemd[1]: Starting iscsiuio.service... Dec 13 02:00:45.220970 systemd[1]: Started iscsiuio.service. Dec 13 02:00:45.222005 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:00:45.223439 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 02:00:45.230420 iscsid[736]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:00:45.230420 iscsid[736]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 02:00:45.230420 iscsid[736]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
Dec 13 02:00:45.230420 iscsid[736]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:00:45.230420 iscsid[736]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:00:45.230420 iscsid[736]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:00:45.230420 iscsid[736]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:00:45.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.224309 systemd[1]: Starting ignition-kargs.service... Dec 13 02:00:45.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.236950 ignition[735]: Ignition 2.14.0 Dec 13 02:00:45.226535 systemd[1]: Starting iscsid.service... Dec 13 02:00:45.236957 ignition[735]: Stage: kargs Dec 13 02:00:45.230838 systemd[1]: Started iscsid.service. Dec 13 02:00:45.237066 ignition[735]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:00:45.233475 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:00:45.237079 ignition[735]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:00:45.239919 systemd-networkd[728]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:00:45.238035 ignition[735]: kargs: kargs passed Dec 13 02:00:45.240605 systemd[1]: Finished ignition-kargs.service. Dec 13 02:00:45.238080 ignition[735]: Ignition finished successfully Dec 13 02:00:45.246344 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:00:45.248538 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:00:45.250067 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:00:45.251459 systemd[1]: Reached target remote-fs.target. Dec 13 02:00:45.262251 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:00:45.264244 systemd[1]: Starting ignition-disks.service... Dec 13 02:00:45.268507 systemd[1]: Finished dracut-pre-mount.service. Dec 13 02:00:45.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.271347 ignition[752]: Ignition 2.14.0 Dec 13 02:00:45.271355 ignition[752]: Stage: disks Dec 13 02:00:45.271438 ignition[752]: no configs at "/usr/lib/ignition/base.d" Dec 13 02:00:45.271447 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:00:45.272275 ignition[752]: disks: disks passed Dec 13 02:00:45.272316 ignition[752]: Ignition finished successfully Dec 13 02:00:45.275422 systemd[1]: Finished ignition-disks.service. Dec 13 02:00:45.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.277645 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:00:45.279344 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:00:45.280897 systemd[1]: Reached target local-fs.target. 
Dec 13 02:00:45.282344 systemd[1]: Reached target sysinit.target. Dec 13 02:00:45.283782 systemd[1]: Reached target basic.target. Dec 13 02:00:45.285899 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:00:45.308084 systemd-fsck[765]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:00:45.454952 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:00:45.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.458269 systemd[1]: Mounting sysroot.mount... Dec 13 02:00:45.476922 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:00:45.477393 systemd[1]: Mounted sysroot.mount. Dec 13 02:00:45.477866 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:00:45.479970 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:00:45.481037 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:00:45.481077 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:00:45.481101 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:00:45.482720 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:00:45.484725 systemd[1]: Starting initrd-setup-root.service... Dec 13 02:00:45.490381 initrd-setup-root[775]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:00:45.492865 initrd-setup-root[783]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:00:45.495449 initrd-setup-root[791]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:00:45.497825 initrd-setup-root[799]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:00:45.519238 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:00:45.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.521666 systemd[1]: Starting ignition-mount.service... Dec 13 02:00:45.523827 systemd[1]: Starting sysroot-boot.service... Dec 13 02:00:45.526651 bash[816]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 02:00:45.535344 ignition[818]: INFO : Ignition 2.14.0 Dec 13 02:00:45.535344 ignition[818]: INFO : Stage: mount Dec 13 02:00:45.537168 ignition[818]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:00:45.537168 ignition[818]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:00:45.537168 ignition[818]: INFO : mount: mount passed Dec 13 02:00:45.537168 ignition[818]: INFO : Ignition finished successfully Dec 13 02:00:45.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:45.537458 systemd[1]: Finished ignition-mount.service. Dec 13 02:00:45.546131 systemd[1]: Finished sysroot-boot.service. Dec 13 02:00:45.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.070138 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 02:00:46.080775 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (828) Dec 13 02:00:46.080850 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:00:46.080864 kernel: BTRFS info (device vda6): using free space tree Dec 13 02:00:46.081820 kernel: BTRFS info (device vda6): has skinny extents Dec 13 02:00:46.086086 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:00:46.088030 systemd[1]: Starting ignition-files.service... Dec 13 02:00:46.104404 ignition[848]: INFO : Ignition 2.14.0 Dec 13 02:00:46.104404 ignition[848]: INFO : Stage: files Dec 13 02:00:46.106534 ignition[848]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:00:46.106534 ignition[848]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:00:46.106534 ignition[848]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:00:46.106534 ignition[848]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:00:46.106534 ignition[848]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:00:46.113841 ignition[848]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:00:46.113841 ignition[848]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:00:46.113841 ignition[848]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:00:46.113841 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:00:46.109285 unknown[848]: wrote ssh authorized keys file for user: core Dec 13 02:00:46.357073 systemd-networkd[728]: eth0: Gained IPv6LL Dec 13 02:00:46.487478 ignition[848]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Dec 13 02:00:46.751094 ignition[848]: INFO : 
files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:00:46.751094 ignition[848]: INFO : files: op(8): [started] processing unit "containerd.service" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(8): [finished] processing unit "containerd.service" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 02:00:46.755929 ignition[848]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 02:00:46.793151 ignition[848]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 02:00:46.795213 ignition[848]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 02:00:46.797057 ignition[848]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:00:46.799140 ignition[848]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:00:46.799140 ignition[848]: INFO : files: files passed Dec 13 02:00:46.801743 ignition[848]: INFO : Ignition finished successfully Dec 13 02:00:46.802156 systemd[1]: Finished ignition-files.service. Dec 13 02:00:46.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.803172 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:00:46.804587 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:00:46.805063 systemd[1]: Starting ignition-quench.service... Dec 13 02:00:46.810374 initrd-setup-root-after-ignition[874]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 02:00:46.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.807517 systemd[1]: ignition-quench.service: Deactivated successfully. 
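Editor's note: the Ignition "files" stage above writes /etc/flatcar-cgroupv1, /home/core/install.sh, /etc/flatcar/update.conf, the kubernetes sysext image plus its /etc/extensions symlink, a containerd drop-in, a coreos-metadata unit (preset disabled), and SSH keys for the user "core". As a hedged sketch of the kind of config that produces such operations, the Python below assembles an Ignition-style JSON document. The field names follow my recollection of the Ignition v3 spec and should be checked against the real schema; all contents and the SSH key are placeholders, not what this machine actually received.

import json

# Hypothetical Ignition-style config roughly matching the operations logged above.
# Schema keys are assumptions from memory of the v3 spec; sources are placeholders,
# except the sysext download URL, which is taken from the GET line in this log.
config = {
    "ignition": {"version": "3.3.0"},
    "passwd": {
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}]
    },
    "storage": {
        "files": [
            {"path": "/etc/flatcar-cgroupv1", "contents": {"source": "data:,"}},
            {"path": "/home/core/install.sh", "mode": 0o755, "contents": {"source": "data:,"}},
            {"path": "/etc/flatcar/update.conf", "contents": {"source": "data:,"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
             "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf", "contents": "# placeholder drop-in\n"}]},
            {"name": "coreos-metadata.service", "enabled": False, "contents": "# placeholder unit\n"},
        ],
    },
}

if __name__ == "__main__":
    print(json.dumps(config, indent=2))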
Dec 13 02:00:46.812876 initrd-setup-root-after-ignition[876]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:00:46.807582 systemd[1]: Finished ignition-quench.service. Dec 13 02:00:46.814411 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:00:46.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.817141 systemd[1]: Reached target ignition-complete.target. Dec 13 02:00:46.819496 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:00:46.832840 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:00:46.832927 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:00:46.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.835192 systemd[1]: Reached target initrd-fs.target. Dec 13 02:00:46.836957 systemd[1]: Reached target initrd.target. Dec 13 02:00:46.838686 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:00:46.839265 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:00:46.849904 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:00:46.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.850697 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:00:46.859386 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:00:46.859698 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:00:46.861374 systemd[1]: Stopped target timers.target. Dec 13 02:00:46.863270 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:00:46.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.863359 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:00:46.864801 systemd[1]: Stopped target initrd.target. Dec 13 02:00:46.866463 systemd[1]: Stopped target basic.target. Dec 13 02:00:46.867549 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:00:46.868937 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:00:46.870399 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:00:46.872002 systemd[1]: Stopped target remote-fs.target. Dec 13 02:00:46.873604 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:00:46.875304 systemd[1]: Stopped target sysinit.target. Dec 13 02:00:46.876615 systemd[1]: Stopped target local-fs.target. Dec 13 02:00:46.878168 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:00:46.880264 systemd[1]: Stopped target swap.target. Dec 13 02:00:46.881668 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Dec 13 02:00:46.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.881756 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:00:46.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.883393 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:00:46.884793 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:00:46.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.884876 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:00:46.885449 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:00:46.885530 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:00:46.887665 systemd[1]: Stopped target paths.target. Dec 13 02:00:46.889428 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:00:46.894023 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:00:46.894489 systemd[1]: Stopped target slices.target. Dec 13 02:00:46.896334 systemd[1]: Stopped target sockets.target. Dec 13 02:00:46.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.897864 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:00:46.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.897972 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:00:46.899284 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:00:46.899365 systemd[1]: Stopped ignition-files.service. Dec 13 02:00:46.901586 systemd[1]: Stopping ignition-mount.service... Dec 13 02:00:46.906461 iscsid[736]: iscsid shutting down. Dec 13 02:00:46.905706 systemd[1]: Stopping iscsid.service... Dec 13 02:00:46.907685 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:00:46.909444 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:00:46.911074 ignition[889]: INFO : Ignition 2.14.0 Dec 13 02:00:46.911074 ignition[889]: INFO : Stage: umount Dec 13 02:00:46.911074 ignition[889]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 02:00:46.911074 ignition[889]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 02:00:46.911074 ignition[889]: INFO : umount: umount passed Dec 13 02:00:46.911074 ignition[889]: INFO : Ignition finished successfully Dec 13 02:00:46.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.916868 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:00:46.918323 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:00:46.919412 systemd[1]: Stopped systemd-udev-trigger.service. 
Dec 13 02:00:46.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.921328 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:00:46.922413 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:00:46.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.925585 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:00:46.926543 systemd[1]: Stopped iscsid.service. Dec 13 02:00:46.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.929070 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:00:46.930523 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:00:46.931544 systemd[1]: Stopped ignition-mount.service. Dec 13 02:00:46.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.933456 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:00:46.934377 systemd[1]: Closed iscsid.socket. Dec 13 02:00:46.935754 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:00:46.935806 systemd[1]: Stopped ignition-disks.service. Dec 13 02:00:46.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.938226 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:00:46.938257 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:00:46.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.940679 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:00:46.940710 systemd[1]: Stopped ignition-setup.service. Dec 13 02:00:46.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.943215 systemd[1]: Stopping iscsiuio.service... Dec 13 02:00:46.944793 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:00:46.945773 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:00:46.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.947518 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:00:46.948452 systemd[1]: Stopped iscsiuio.service. 
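Editor's note: most of the audit records in this log are SERVICE_START/SERVICE_STOP events emitted for PID 1's unit transitions. A small illustrative parser, with a regex tailored to the exact format seen in these lines rather than a general audit parser:

import re

# Matches the audit lines in this log, e.g.
#   audit[1]: SERVICE_STOP pid=1 ... msg='unit=iscsid comm="systemd" ... res=success'
AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?msg='unit=(\S+) .*?res=(\w+)")

def parse_audit(line: str):
    m = AUDIT_RE.search(line)
    return m.groups() if m else None  # (event, unit, result) or None

if __name__ == "__main__":
    sample = ("Dec 13 02:00:46.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 "
              "ses=4294967295 subj=kernel msg='unit=iscsid comm=\"systemd\" "
              "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")
    print(parse_audit(sample))  # ('SERVICE_STOP', 'iscsid', 'success')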
Dec 13 02:00:46.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.950595 systemd[1]: Stopped target network.target. Dec 13 02:00:46.952293 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:00:46.952328 systemd[1]: Closed iscsiuio.socket. Dec 13 02:00:46.954653 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:00:46.956445 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:00:46.959979 systemd-networkd[728]: eth0: DHCPv6 lease lost Dec 13 02:00:46.961317 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:00:46.962492 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:00:46.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.964919 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:00:46.964953 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:00:46.967000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:00:46.969217 systemd[1]: Stopping network-cleanup.service... Dec 13 02:00:46.971186 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:00:46.972373 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:00:46.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.974491 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:00:46.974546 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:00:46.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.977846 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:00:46.977902 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:00:46.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.981459 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:00:46.983048 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:00:46.983581 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:00:46.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.983683 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:00:46.988000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:00:46.990075 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:00:46.990186 systemd[1]: Stopped network-cleanup.service. Dec 13 02:00:46.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:00:46.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.992565 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:00:46.992693 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:00:46.995159 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:00:47.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.995206 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:00:46.996815 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:00:47.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:47.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.996849 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:00:46.998751 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:00:47.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:46.998811 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:00:47.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:47.001182 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:00:47.001232 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:00:47.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:47.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:47.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:47.003365 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:00:47.003434 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:00:47.006472 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:00:47.007802 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:00:47.007861 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:00:47.010563 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:00:47.010675 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:00:47.012268 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:00:47.012314 systemd[1]: Stopped initrd-setup-root.service. 
Dec 13 02:00:47.014732 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:00:47.028000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:00:47.030000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:00:47.031000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:00:47.014834 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:00:47.016655 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:00:47.032000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:00:47.032000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:00:47.018078 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:00:47.026103 systemd[1]: Switching root. Dec 13 02:00:47.051779 systemd-journald[197]: Journal stopped Dec 13 02:00:49.811318 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Dec 13 02:00:49.811362 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:00:49.811376 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 02:00:49.811388 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:00:49.811400 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:00:49.811416 kernel: SELinux: policy capability open_perms=1 Dec 13 02:00:49.811427 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:00:49.811437 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:00:49.811448 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:00:49.811457 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:00:49.812112 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:00:49.812127 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:00:49.812138 systemd[1]: Successfully loaded SELinux policy in 47.798ms. Dec 13 02:00:49.812154 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.261ms. Dec 13 02:00:49.812169 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:00:49.812179 systemd[1]: Detected virtualization kvm. Dec 13 02:00:49.812190 systemd[1]: Detected architecture x86-64. Dec 13 02:00:49.812200 systemd[1]: Detected first boot. Dec 13 02:00:49.812210 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:00:49.812220 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
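Editor's note: the systemd 252 banner above encodes compile-time options as a +/- feature string. A quick way to split it into enabled and disabled sets (string copied from the line above); the -BPF_FRAMEWORK entry is likely why the journal warns a little later that BPF/cgroup firewalling is unsupported for systemd-journald.service.

# Feature string as logged above; "default-hierarchy=unified" is ignored by the filter.
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT "
            "default-hierarchy=unified")

tokens = FEATURES.split()
enabled = {t[1:] for t in tokens if t.startswith("+")}
disabled = {t[1:] for t in tokens if t.startswith("-")}

if __name__ == "__main__":
    print(len(enabled), "enabled:", sorted(enabled))
    print(len(disabled), "disabled:", sorted(disabled))
    print("BPF_FRAMEWORK built in?", "BPF_FRAMEWORK" in enabled)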
Dec 13 02:00:49.812230 kernel: kauditd_printk_skb: 71 callbacks suppressed Dec 13 02:00:49.812242 kernel: audit: type=1400 audit(1734055247.410:82): avc: denied { associate } for pid=939 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:00:49.812253 kernel: audit: type=1300 audit(1734055247.410:82): arch=c000003e syscall=188 success=yes exit=0 a0=c000157682 a1=c0000daae0 a2=c0000e2a00 a3=32 items=0 ppid=922 pid=939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:49.812264 kernel: audit: type=1327 audit(1734055247.410:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:00:49.812274 kernel: audit: type=1400 audit(1734055247.411:83): avc: denied { associate } for pid=939 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 02:00:49.812284 kernel: audit: type=1300 audit(1734055247.411:83): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000157759 a2=1ed a3=0 items=2 ppid=922 pid=939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:49.812294 kernel: audit: type=1307 audit(1734055247.411:83): cwd="/" Dec 13 02:00:49.812304 kernel: audit: type=1302 audit(1734055247.411:83): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:49.812314 kernel: audit: type=1302 audit(1734055247.411:83): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:49.812324 kernel: audit: type=1327 audit(1734055247.411:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:00:49.812334 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:00:49.812345 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:00:49.812355 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:00:49.812367 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:00:49.812379 systemd[1]: Queued start job for default target multi-user.target. 
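Editor's note: the torcx-generator audit records above carry the process command line as a hex-encoded PROCTITLE field with NUL-separated arguments, and the field is length-capped, which is why the last argument is cut off in the log itself. Decoding the value shown above, as an illustration:

# PROCTITLE value copied from the audit records above: hex-encoded argv with
# NUL separators; the final argument is truncated in the log (presumably
# "/run/systemd/generator.late").
PROCTITLE = "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"

argv = [part.decode() for part in bytes.fromhex(PROCTITLE).split(b"\x00")]

if __name__ == "__main__":
    for arg in argv:
        print(arg)
    # /usr/lib/systemd/system-generators/torcx-generator
    # /run/systemd/generator
    # /run/systemd/generator.early
    # /run/systemd/generator.la   <- truncated tail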
Dec 13 02:00:49.812389 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 02:00:49.812400 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:00:49.812410 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:00:49.812420 systemd[1]: Created slice system-getty.slice. Dec 13 02:00:49.812430 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:00:49.812441 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:00:49.812452 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:00:49.812463 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:00:49.812473 systemd[1]: Created slice user.slice. Dec 13 02:00:49.812485 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:00:49.812495 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:00:49.812507 systemd[1]: Set up automount boot.automount. Dec 13 02:00:49.812517 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:00:49.812528 systemd[1]: Reached target integritysetup.target. Dec 13 02:00:49.812538 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:00:49.812549 systemd[1]: Reached target remote-fs.target. Dec 13 02:00:49.812559 systemd[1]: Reached target slices.target. Dec 13 02:00:49.812570 systemd[1]: Reached target swap.target. Dec 13 02:00:49.812581 systemd[1]: Reached target torcx.target. Dec 13 02:00:49.812592 systemd[1]: Reached target veritysetup.target. Dec 13 02:00:49.812603 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:00:49.812614 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:00:49.812624 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:00:49.812635 kernel: audit: type=1400 audit(1734055249.715:84): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:00:49.812646 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:00:49.812656 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:00:49.812666 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:00:49.812676 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:00:49.812696 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:00:49.812707 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:00:49.812718 systemd[1]: Mounting dev-hugepages.mount... Dec 13 02:00:49.812729 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:00:49.812740 systemd[1]: Mounting media.mount... Dec 13 02:00:49.812751 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:49.812761 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:00:49.812771 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:00:49.812781 systemd[1]: Mounting tmp.mount... Dec 13 02:00:49.812792 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:00:49.812806 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:00:49.812819 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:00:49.812832 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:00:49.812845 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:00:49.812858 systemd[1]: Starting modprobe@drm.service... Dec 13 02:00:49.812871 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:00:49.812884 systemd[1]: Starting modprobe@fuse.service... 
Dec 13 02:00:49.812900 systemd[1]: Starting modprobe@loop.service... Dec 13 02:00:49.812943 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:00:49.812960 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 02:00:49.812973 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 02:00:49.812987 systemd[1]: Starting systemd-journald.service... Dec 13 02:00:49.813000 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:00:49.813013 kernel: fuse: init (API version 7.34) Dec 13 02:00:49.813025 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:00:49.813036 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:00:49.813046 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:00:49.813057 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:49.813068 kernel: loop: module loaded Dec 13 02:00:49.813078 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:00:49.813088 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:00:49.813099 systemd[1]: Mounted media.mount. Dec 13 02:00:49.813110 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:00:49.813121 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:00:49.813131 systemd[1]: Mounted tmp.mount. Dec 13 02:00:49.813141 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:00:49.813154 systemd-journald[1023]: Journal started Dec 13 02:00:49.813193 systemd-journald[1023]: Runtime Journal (/run/log/journal/e71e779a90f74c83a28eadcbac13592e) is 6.0M, max 48.5M, 42.5M free. Dec 13 02:00:49.715000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:00:49.715000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:00:49.809000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:00:49.809000 audit[1023]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd83cbe0d0 a2=4000 a3=7ffd83cbe16c items=0 ppid=1 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:49.809000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:00:49.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.816940 systemd[1]: Started systemd-journald.service. Dec 13 02:00:49.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.817808 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:00:49.818271 systemd[1]: Finished modprobe@configfs.service. 
Dec 13 02:00:49.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.819391 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:00:49.819603 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:00:49.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.819000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.820659 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:00:49.820870 systemd[1]: Finished modprobe@drm.service. Dec 13 02:00:49.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.821889 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:00:49.822111 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:00:49.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.823218 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:00:49.823443 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:00:49.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.824467 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:00:49.824677 systemd[1]: Finished modprobe@loop.service. Dec 13 02:00:49.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:00:49.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.826341 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:00:49.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.827658 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:00:49.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.829024 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:00:49.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.830494 systemd[1]: Reached target network-pre.target. Dec 13 02:00:49.832863 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:00:49.834992 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:00:49.835752 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:00:49.837433 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:00:49.839565 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:00:49.840442 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:00:49.841670 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:00:49.842677 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:00:49.843882 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:00:49.848948 systemd-journald[1023]: Time spent on flushing to /var/log/journal/e71e779a90f74c83a28eadcbac13592e is 20.681ms for 1020 entries. Dec 13 02:00:49.848948 systemd-journald[1023]: System Journal (/var/log/journal/e71e779a90f74c83a28eadcbac13592e) is 8.0M, max 195.6M, 187.6M free. Dec 13 02:00:49.889206 systemd-journald[1023]: Received client request to flush runtime journal. Dec 13 02:00:49.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:00:49.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.848076 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:00:49.851082 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:00:49.891743 udevadm[1067]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:00:49.853197 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:00:49.854650 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:00:49.856197 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:00:49.858603 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:00:49.864143 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:00:49.871011 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:00:49.873064 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:00:49.890178 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:00:49.893054 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:00:49.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:49.895040 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:00:49.912179 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:00:49.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.281046 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:00:50.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.283137 systemd[1]: Starting systemd-udevd.service... Dec 13 02:00:50.298624 systemd-udevd[1083]: Using default interface naming scheme 'v252'. Dec 13 02:00:50.311984 systemd[1]: Started systemd-udevd.service. Dec 13 02:00:50.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.314433 systemd[1]: Starting systemd-networkd.service... Dec 13 02:00:50.326508 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:00:50.347708 systemd[1]: Found device dev-ttyS0.device. Dec 13 02:00:50.366588 systemd[1]: Started systemd-userdbd.service. Dec 13 02:00:50.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.377102 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
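Editor's note: systemd escapes "/" as "-" and a literal "-" as "\x2d" in device unit names, which is why the OEM partition shows up above as dev-disk-by\x2dlabel-OEM.device. Below is a simplified decoder for that naming; it is illustrative only, and the systemd-escape(1) utility is the authoritative implementation.

import re

def device_unit_to_path(unit: str) -> str:
    # Reverse of systemd's unit-name escaping for device units:
    # "-" separates path components and "\xNN" encodes a literal byte.
    name = unit[:-len(".device")] if unit.endswith(".device") else unit
    components = [
        re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), comp)
        for comp in name.split("-")
    ]
    return "/" + "/".join(components)

if __name__ == "__main__":
    print(device_unit_to_path(r"dev-disk-by\x2dlabel-OEM.device"))  # /dev/disk/by-label/OEM
    print(device_unit_to_path("dev-ttyS0.device"))                  # /dev/ttyS0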
Dec 13 02:00:50.382941 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:00:50.388931 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:00:50.394000 audit[1101]: AVC avc: denied { confidentiality } for pid=1101 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:00:50.394000 audit[1101]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e3885ce0d0 a1=337fc a2=7f1d91537bc5 a3=5 items=110 ppid=1083 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:50.394000 audit: CWD cwd="/" Dec 13 02:00:50.394000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=1 name=(null) inode=12889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=2 name=(null) inode=12889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=3 name=(null) inode=12890 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=4 name=(null) inode=12889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=5 name=(null) inode=12891 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=6 name=(null) inode=12889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=7 name=(null) inode=12892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=8 name=(null) inode=12892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=9 name=(null) inode=12893 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=10 name=(null) inode=12892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=11 name=(null) inode=12894 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=12 name=(null) inode=12892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=13 name=(null) inode=12895 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=14 name=(null) inode=12892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=15 name=(null) inode=12896 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=16 name=(null) inode=12892 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=17 name=(null) inode=12897 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=18 name=(null) inode=12889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=19 name=(null) inode=12898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=20 name=(null) inode=12898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=21 name=(null) inode=12899 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=22 name=(null) inode=12898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=23 name=(null) inode=12900 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=24 name=(null) inode=12898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=25 name=(null) inode=12901 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=26 name=(null) inode=12898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=27 name=(null) inode=12902 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=28 name=(null) inode=12898 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:00:50.394000 audit: PATH item=29 name=(null) inode=12903 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=30 name=(null) inode=12889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=31 name=(null) inode=12904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=32 name=(null) inode=12904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=33 name=(null) inode=12905 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=34 name=(null) inode=12904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=35 name=(null) inode=12906 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=36 name=(null) inode=12904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=37 name=(null) inode=12907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=38 name=(null) inode=12904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=39 name=(null) inode=12908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=40 name=(null) inode=12904 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=41 name=(null) inode=12909 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=42 name=(null) inode=12889 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=43 name=(null) inode=12910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=44 name=(null) inode=12910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=45 name=(null) inode=12911 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=46 name=(null) inode=12910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=47 name=(null) inode=12912 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=48 name=(null) inode=12910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=49 name=(null) inode=12913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=50 name=(null) inode=12910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=51 name=(null) inode=12914 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=52 name=(null) inode=12910 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=53 name=(null) inode=12915 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=55 name=(null) inode=12916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=56 name=(null) inode=12916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=57 name=(null) inode=12917 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=58 name=(null) inode=12916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=59 name=(null) inode=12918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=60 name=(null) inode=12916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=61 name=(null) inode=12919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=62 name=(null) inode=12919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=63 name=(null) inode=12920 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=64 name=(null) inode=12919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=65 name=(null) inode=12921 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=66 name=(null) inode=12919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=67 name=(null) inode=12922 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=68 name=(null) inode=12919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=69 name=(null) inode=12923 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=70 name=(null) inode=12919 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=71 name=(null) inode=12924 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=72 name=(null) inode=12916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=73 name=(null) inode=12925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=74 name=(null) inode=12925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=75 name=(null) inode=12926 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=76 name=(null) inode=12925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=77 name=(null) inode=12927 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
02:00:50.394000 audit: PATH item=78 name=(null) inode=12925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=79 name=(null) inode=12928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=80 name=(null) inode=12925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=81 name=(null) inode=12929 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=82 name=(null) inode=12925 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=83 name=(null) inode=12930 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=84 name=(null) inode=12916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=85 name=(null) inode=12931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=86 name=(null) inode=12931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=87 name=(null) inode=12932 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=88 name=(null) inode=12931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=89 name=(null) inode=12933 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=90 name=(null) inode=12931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=91 name=(null) inode=12934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=92 name=(null) inode=12931 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=93 name=(null) inode=12935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=94 name=(null) inode=12931 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=95 name=(null) inode=12936 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=96 name=(null) inode=12916 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=97 name=(null) inode=12937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=98 name=(null) inode=12937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=99 name=(null) inode=12938 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=100 name=(null) inode=12937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=101 name=(null) inode=12939 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=102 name=(null) inode=12937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=103 name=(null) inode=12940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=104 name=(null) inode=12937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=105 name=(null) inode=12941 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=106 name=(null) inode=12937 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=107 name=(null) inode=12942 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PATH item=109 name=(null) inode=14778 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:00:50.394000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:00:50.424895 systemd-networkd[1092]: lo: Link UP Dec 13 02:00:50.424955 
systemd-networkd[1092]: lo: Gained carrier Dec 13 02:00:50.425397 systemd-networkd[1092]: Enumeration completed Dec 13 02:00:50.425509 systemd-networkd[1092]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:00:50.425514 systemd[1]: Started systemd-networkd.service. Dec 13 02:00:50.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.427554 systemd-networkd[1092]: eth0: Link UP Dec 13 02:00:50.427564 systemd-networkd[1092]: eth0: Gained carrier Dec 13 02:00:50.431935 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 02:00:50.438534 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 02:00:50.453062 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:00:50.453081 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 02:00:50.453212 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 02:00:50.444112 systemd-networkd[1092]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 02:00:50.488934 kernel: kvm: Nested Virtualization enabled Dec 13 02:00:50.489094 kernel: SVM: kvm: Nested Paging enabled Dec 13 02:00:50.489128 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 02:00:50.489157 kernel: SVM: Virtual GIF supported Dec 13 02:00:50.508943 kernel: EDAC MC: Ver: 3.0.0 Dec 13 02:00:50.534337 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:00:50.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.536608 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:00:50.543542 lvm[1119]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:00:50.570182 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:00:50.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.571250 systemd[1]: Reached target cryptsetup.target. Dec 13 02:00:50.573118 systemd[1]: Starting lvm2-activation.service... Dec 13 02:00:50.576548 lvm[1121]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:00:50.606590 systemd[1]: Finished lvm2-activation.service. Dec 13 02:00:50.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.607579 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:00:50.608514 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:00:50.608530 systemd[1]: Reached target local-fs.target. Dec 13 02:00:50.609392 systemd[1]: Reached target machines.target. Dec 13 02:00:50.611402 systemd[1]: Starting ldconfig.service... Dec 13 02:00:50.612486 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 02:00:50.612529 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:00:50.613697 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:00:50.615450 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:00:50.617791 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:00:50.620268 systemd[1]: Starting systemd-sysext.service... Dec 13 02:00:50.622453 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1124 (bootctl) Dec 13 02:00:50.623835 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:00:50.627396 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:00:50.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.632094 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:00:50.636406 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:00:50.636691 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:00:50.648001 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:00:50.657429 systemd-fsck[1133]: fsck.fat 4.2 (2021-01-31) Dec 13 02:00:50.657429 systemd-fsck[1133]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 02:00:50.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.659015 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:00:50.662158 systemd[1]: Mounting boot.mount... Dec 13 02:00:50.675349 systemd[1]: Mounted boot.mount. Dec 13 02:00:50.885948 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:00:50.892172 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:00:50.892790 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:00:50.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.895056 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:00:50.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.907940 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:00:50.912313 (sd-sysext)[1145]: Using extensions 'kubernetes'. Dec 13 02:00:50.912722 (sd-sysext)[1145]: Merged extensions into '/usr'. Dec 13 02:00:50.929567 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:50.931286 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:00:50.932553 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:00:50.933876 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:00:50.936146 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 02:00:50.938767 systemd[1]: Starting modprobe@loop.service... Dec 13 02:00:50.940199 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:00:50.940514 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:00:50.940791 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:50.944589 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:00:50.946441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:00:50.946626 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:00:50.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.948028 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:00:50.948185 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:00:50.948880 ldconfig[1123]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:00:50.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.949571 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:00:50.949736 systemd[1]: Finished modprobe@loop.service. Dec 13 02:00:50.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.951231 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:00:50.951341 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:00:50.952423 systemd[1]: Finished systemd-sysext.service. Dec 13 02:00:50.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.955240 systemd[1]: Starting ensure-sysext.service... Dec 13 02:00:50.957248 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:00:50.958773 systemd[1]: Finished ldconfig.service. 
Dec 13 02:00:50.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:50.964095 systemd[1]: Reloading. Dec 13 02:00:50.970191 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:00:50.971421 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:00:50.973260 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:00:51.002858 /usr/lib/systemd/system-generators/torcx-generator[1180]: time="2024-12-13T02:00:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:00:51.003216 /usr/lib/systemd/system-generators/torcx-generator[1180]: time="2024-12-13T02:00:51Z" level=info msg="torcx already run" Dec 13 02:00:51.091286 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:00:51.091312 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:00:51.114330 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:00:51.179899 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:00:51.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:51.183238 systemd[1]: Starting audit-rules.service... Dec 13 02:00:51.185158 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:00:51.186923 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:00:51.189168 systemd[1]: Starting systemd-resolved.service... Dec 13 02:00:51.191616 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:00:51.193799 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:00:51.195218 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:00:51.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:51.196000 audit[1241]: SYSTEM_BOOT pid=1241 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:00:51.201337 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:00:51.202466 systemd[1]: Finished systemd-update-utmp.service. 
Dec 13 02:00:51.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:51.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:51.209000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:00:51.209000 audit[1252]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff92c64ac0 a2=420 a3=0 items=0 ppid=1229 pid=1252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:51.209000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:00:51.206593 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:00:51.213431 augenrules[1252]: No rules Dec 13 02:00:51.208894 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:51.209100 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.210301 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:00:51.212089 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:00:51.213569 systemd[1]: Starting modprobe@loop.service... Dec 13 02:00:51.214293 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.214379 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:00:51.215399 systemd[1]: Starting systemd-update-done.service... Dec 13 02:00:51.216223 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:00:51.216301 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:51.217237 systemd[1]: Finished audit-rules.service. Dec 13 02:00:51.218273 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:00:51.218389 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:00:51.219710 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:00:51.219874 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:00:51.221121 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:00:51.221252 systemd[1]: Finished modprobe@loop.service. Dec 13 02:00:51.222454 systemd[1]: Finished systemd-update-done.service. Dec 13 02:00:51.223850 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:00:51.223958 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.225438 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 02:00:51.225627 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.226749 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:00:51.228347 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:00:51.231277 systemd[1]: Starting modprobe@loop.service... Dec 13 02:00:51.232065 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.232151 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:00:51.232230 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:00:51.232289 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:51.233071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:00:51.233197 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:00:51.234381 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:00:51.234500 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:00:51.235704 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:00:51.235823 systemd[1]: Finished modprobe@loop.service. Dec 13 02:00:51.236871 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:00:51.236959 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.239379 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:51.239574 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.240477 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:00:51.242133 systemd[1]: Starting modprobe@drm.service... Dec 13 02:00:51.243610 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:00:51.245241 systemd[1]: Starting modprobe@loop.service... Dec 13 02:00:51.246064 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.246151 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:00:51.247270 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:00:51.248367 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:00:51.248451 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:00:51.249463 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:00:51.249587 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:00:51.250885 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:00:51.251012 systemd[1]: Finished modprobe@drm.service. Dec 13 02:00:51.252213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:00:51.252330 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 02:00:51.253612 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:00:51.253812 systemd[1]: Finished modprobe@loop.service. Dec 13 02:00:51.254964 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:00:51.255059 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:00:51.256564 systemd[1]: Finished ensure-sysext.service. Dec 13 02:00:51.268017 systemd-resolved[1237]: Positive Trust Anchors: Dec 13 02:00:51.268028 systemd-resolved[1237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:00:51.268053 systemd-resolved[1237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:00:51.270244 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:00:51.271542 systemd[1]: Reached target time-set.target. Dec 13 02:00:52.041370 systemd-timesyncd[1240]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 02:00:52.041414 systemd-timesyncd[1240]: Initial clock synchronization to Fri 2024-12-13 02:00:52.041309 UTC. Dec 13 02:00:52.044303 systemd-resolved[1237]: Defaulting to hostname 'linux'. Dec 13 02:00:52.045650 systemd[1]: Started systemd-resolved.service. Dec 13 02:00:52.046566 systemd[1]: Reached target network.target. Dec 13 02:00:52.047371 systemd[1]: Reached target nss-lookup.target. Dec 13 02:00:52.048271 systemd[1]: Reached target sysinit.target. Dec 13 02:00:52.049138 systemd[1]: Started motdgen.path. Dec 13 02:00:52.049876 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:00:52.051101 systemd[1]: Started logrotate.timer. Dec 13 02:00:52.051925 systemd[1]: Started mdadm.timer. Dec 13 02:00:52.052632 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:00:52.053520 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:00:52.053540 systemd[1]: Reached target paths.target. Dec 13 02:00:52.054308 systemd[1]: Reached target timers.target. Dec 13 02:00:52.055355 systemd[1]: Listening on dbus.socket. Dec 13 02:00:52.057169 systemd[1]: Starting docker.socket... Dec 13 02:00:52.058625 systemd[1]: Listening on sshd.socket. Dec 13 02:00:52.059604 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:00:52.059840 systemd[1]: Listening on docker.socket. Dec 13 02:00:52.060677 systemd[1]: Reached target sockets.target. Dec 13 02:00:52.061489 systemd[1]: Reached target basic.target. Dec 13 02:00:52.062359 systemd[1]: System is tainted: cgroupsv1 Dec 13 02:00:52.062392 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:00:52.062409 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:00:52.063144 systemd[1]: Starting containerd.service... Dec 13 02:00:52.064592 systemd[1]: Starting dbus.service... 
Dec 13 02:00:52.066165 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:00:52.067834 systemd[1]: Starting extend-filesystems.service... Dec 13 02:00:52.068777 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:00:52.069544 systemd[1]: Starting motdgen.service... Dec 13 02:00:52.069969 jq[1292]: false Dec 13 02:00:52.071132 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:00:52.072881 systemd[1]: Starting sshd-keygen.service... Dec 13 02:00:52.075160 systemd[1]: Starting systemd-logind.service... Dec 13 02:00:52.076030 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:00:52.076072 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:00:52.076955 systemd[1]: Starting update-engine.service... Dec 13 02:00:52.078847 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:00:52.080860 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:00:52.081047 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:00:52.082635 jq[1309]: true Dec 13 02:00:52.081369 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:00:52.081544 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:00:52.086393 jq[1313]: true Dec 13 02:00:52.091175 dbus-daemon[1291]: [system] SELinux support is enabled Dec 13 02:00:52.091318 systemd[1]: Started dbus.service. Dec 13 02:00:52.093905 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:00:52.093927 systemd[1]: Reached target system-config.target. Dec 13 02:00:52.094883 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:00:52.094897 systemd[1]: Reached target user-config.target. Dec 13 02:00:52.096772 extend-filesystems[1293]: Found loop1 Dec 13 02:00:52.107057 extend-filesystems[1293]: Found sr0 Dec 13 02:00:52.107057 extend-filesystems[1293]: Found vda Dec 13 02:00:52.107057 extend-filesystems[1293]: Found vda1 Dec 13 02:00:52.107057 extend-filesystems[1293]: Found vda2 Dec 13 02:00:52.107057 extend-filesystems[1293]: Found vda3 Dec 13 02:00:52.107057 extend-filesystems[1293]: Found usr Dec 13 02:00:52.107057 extend-filesystems[1293]: Found vda4 Dec 13 02:00:52.107057 extend-filesystems[1293]: Found vda6 Dec 13 02:00:52.107057 extend-filesystems[1293]: Found vda7 Dec 13 02:00:52.107057 extend-filesystems[1293]: Found vda9 Dec 13 02:00:52.107057 extend-filesystems[1293]: Checking size of /dev/vda9 Dec 13 02:00:52.097823 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:00:52.098015 systemd[1]: Finished motdgen.service. 
Dec 13 02:00:52.120470 update_engine[1308]: I1213 02:00:52.120337 1308 main.cc:92] Flatcar Update Engine starting Dec 13 02:00:52.125539 extend-filesystems[1293]: Resized partition /dev/vda9 Dec 13 02:00:52.127352 update_engine[1308]: I1213 02:00:52.125955 1308 update_check_scheduler.cc:74] Next update check in 8m47s Dec 13 02:00:52.126032 systemd-logind[1304]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:00:52.126047 systemd-logind[1304]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:00:52.126281 systemd-logind[1304]: New seat seat0. Dec 13 02:00:52.126602 systemd[1]: Started update-engine.service. Dec 13 02:00:52.129560 bash[1337]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:00:52.129935 systemd[1]: Started locksmithd.service. Dec 13 02:00:52.133798 extend-filesystems[1346]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:00:52.145340 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 02:00:52.138144 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:00:52.142405 systemd[1]: Started systemd-logind.service. Dec 13 02:00:52.159216 env[1314]: time="2024-12-13T02:00:52.159177101Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:00:52.166743 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 02:00:52.185505 env[1314]: time="2024-12-13T02:00:52.185453739Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:00:52.187389 env[1314]: time="2024-12-13T02:00:52.187360595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:00:52.187767 extend-filesystems[1346]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 02:00:52.187767 extend-filesystems[1346]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 02:00:52.187767 extend-filesystems[1346]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 02:00:52.192039 extend-filesystems[1293]: Resized filesystem in /dev/vda9 Dec 13 02:00:52.191975 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.190369788Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.190392591Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.190619827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.190634304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.190645275Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.190653991Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.190721217Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.190913347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.191033904Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:00:52.194315 env[1314]: time="2024-12-13T02:00:52.191045796Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:00:52.192202 systemd[1]: Finished extend-filesystems.service. Dec 13 02:00:52.194664 env[1314]: time="2024-12-13T02:00:52.191083316Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:00:52.194664 env[1314]: time="2024-12-13T02:00:52.191092754Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197352333Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197387228Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197400142Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197429207Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197451128Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197464884Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197476806Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197488999Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197501192Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197514727Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197526740Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:00:52.197607 env[1314]: time="2024-12-13T02:00:52.197539043Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:00:52.197876 env[1314]: time="2024-12-13T02:00:52.197621718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:00:52.197876 env[1314]: time="2024-12-13T02:00:52.197685367Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:00:52.198046 env[1314]: time="2024-12-13T02:00:52.198025285Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:00:52.198088 env[1314]: time="2024-12-13T02:00:52.198057054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198088 env[1314]: time="2024-12-13T02:00:52.198071071Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:00:52.198128 env[1314]: time="2024-12-13T02:00:52.198112258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198128 env[1314]: time="2024-12-13T02:00:52.198125252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198165 env[1314]: time="2024-12-13T02:00:52.198135992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198165 env[1314]: time="2024-12-13T02:00:52.198148336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198165 env[1314]: time="2024-12-13T02:00:52.198161430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198227 env[1314]: time="2024-12-13T02:00:52.198172281Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198227 env[1314]: time="2024-12-13T02:00:52.198182309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198227 env[1314]: time="2024-12-13T02:00:52.198192889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198227 env[1314]: time="2024-12-13T02:00:52.198204691Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:00:52.198336 env[1314]: time="2024-12-13T02:00:52.198319527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198361 env[1314]: time="2024-12-13T02:00:52.198337631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198361 env[1314]: time="2024-12-13T02:00:52.198352048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198399 env[1314]: time="2024-12-13T02:00:52.198362988Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Dec 13 02:00:52.198399 env[1314]: time="2024-12-13T02:00:52.198376283Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:00:52.198399 env[1314]: time="2024-12-13T02:00:52.198386091Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:00:52.198475 env[1314]: time="2024-12-13T02:00:52.198402602Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:00:52.198475 env[1314]: time="2024-12-13T02:00:52.198435244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 02:00:52.198659 env[1314]: time="2024-12-13T02:00:52.198611464Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:00:52.199186 env[1314]: time="2024-12-13T02:00:52.198660787Z" level=info msg="Connect containerd service" Dec 13 02:00:52.199186 env[1314]: time="2024-12-13T02:00:52.198690172Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:00:52.199268 env[1314]: time="2024-12-13T02:00:52.199247267Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:00:52.199382 
env[1314]: time="2024-12-13T02:00:52.199350981Z" level=info msg="Start subscribing containerd event" Dec 13 02:00:52.199423 env[1314]: time="2024-12-13T02:00:52.199391988Z" level=info msg="Start recovering state" Dec 13 02:00:52.199456 env[1314]: time="2024-12-13T02:00:52.199434197Z" level=info msg="Start event monitor" Dec 13 02:00:52.199478 env[1314]: time="2024-12-13T02:00:52.199458583Z" level=info msg="Start snapshots syncer" Dec 13 02:00:52.199478 env[1314]: time="2024-12-13T02:00:52.199466898Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:00:52.199478 env[1314]: time="2024-12-13T02:00:52.199473831Z" level=info msg="Start streaming server" Dec 13 02:00:52.199772 env[1314]: time="2024-12-13T02:00:52.199755429Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:00:52.199815 env[1314]: time="2024-12-13T02:00:52.199802798Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:00:52.199896 env[1314]: time="2024-12-13T02:00:52.199876737Z" level=info msg="containerd successfully booted in 0.041268s" Dec 13 02:00:52.199935 systemd[1]: Started containerd.service. Dec 13 02:00:52.201219 locksmithd[1347]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:00:52.849093 sshd_keygen[1324]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:00:52.868524 systemd[1]: Finished sshd-keygen.service. Dec 13 02:00:52.871004 systemd[1]: Starting issuegen.service... Dec 13 02:00:52.876083 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:00:52.876304 systemd[1]: Finished issuegen.service. Dec 13 02:00:52.878585 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:00:52.884387 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:00:52.886980 systemd[1]: Started getty@tty1.service. Dec 13 02:00:52.889126 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:00:52.890269 systemd[1]: Reached target getty.target. Dec 13 02:00:53.013886 systemd-networkd[1092]: eth0: Gained IPv6LL Dec 13 02:00:53.015754 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:00:53.017248 systemd[1]: Reached target network-online.target. Dec 13 02:00:53.020313 systemd[1]: Starting kubelet.service... Dec 13 02:00:53.565657 systemd[1]: Started kubelet.service. Dec 13 02:00:53.567631 systemd[1]: Reached target multi-user.target. Dec 13 02:00:53.570818 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:00:53.577784 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:00:53.578085 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:00:53.581718 systemd[1]: Startup finished in 5.049s (kernel) + 5.702s (userspace) = 10.752s. Dec 13 02:00:54.053728 kubelet[1386]: E1213 02:00:54.053629 1386 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:00:54.055273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:00:54.055414 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:01:01.957106 systemd[1]: Created slice system-sshd.slice. Dec 13 02:01:01.958088 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:60898.service. 
Dec 13 02:01:01.992609 sshd[1397]: Accepted publickey for core from 10.0.0.1 port 60898 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:01:01.993926 sshd[1397]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:02.000499 systemd[1]: Created slice user-500.slice. Dec 13 02:01:02.001316 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:01:02.002579 systemd-logind[1304]: New session 1 of user core. Dec 13 02:01:02.008410 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:01:02.009433 systemd[1]: Starting user@500.service... Dec 13 02:01:02.012183 (systemd)[1402]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:02.075659 systemd[1402]: Queued start job for default target default.target. Dec 13 02:01:02.075836 systemd[1402]: Reached target paths.target. Dec 13 02:01:02.075850 systemd[1402]: Reached target sockets.target. Dec 13 02:01:02.075861 systemd[1402]: Reached target timers.target. Dec 13 02:01:02.075871 systemd[1402]: Reached target basic.target. Dec 13 02:01:02.075904 systemd[1402]: Reached target default.target. Dec 13 02:01:02.075923 systemd[1402]: Startup finished in 59ms. Dec 13 02:01:02.075992 systemd[1]: Started user@500.service. Dec 13 02:01:02.076807 systemd[1]: Started session-1.scope. Dec 13 02:01:02.126783 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:60912.service. Dec 13 02:01:02.159898 sshd[1411]: Accepted publickey for core from 10.0.0.1 port 60912 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:01:02.160856 sshd[1411]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:02.164522 systemd-logind[1304]: New session 2 of user core. Dec 13 02:01:02.165335 systemd[1]: Started session-2.scope. Dec 13 02:01:02.218645 sshd[1411]: pam_unix(sshd:session): session closed for user core Dec 13 02:01:02.221099 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:60928.service. Dec 13 02:01:02.221581 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:60912.service: Deactivated successfully. Dec 13 02:01:02.222672 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:01:02.223131 systemd-logind[1304]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:01:02.224106 systemd-logind[1304]: Removed session 2. Dec 13 02:01:02.251500 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 60928 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:01:02.252305 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:02.255201 systemd-logind[1304]: New session 3 of user core. Dec 13 02:01:02.256018 systemd[1]: Started session-3.scope. Dec 13 02:01:02.304500 sshd[1416]: pam_unix(sshd:session): session closed for user core Dec 13 02:01:02.306515 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:60940.service. Dec 13 02:01:02.306897 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:60928.service: Deactivated successfully. Dec 13 02:01:02.307667 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:01:02.307697 systemd-logind[1304]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:01:02.308365 systemd-logind[1304]: Removed session 3. 
Dec 13 02:01:02.337731 sshd[1423]: Accepted publickey for core from 10.0.0.1 port 60940 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:01:02.338672 sshd[1423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:02.341530 systemd-logind[1304]: New session 4 of user core. Dec 13 02:01:02.342080 systemd[1]: Started session-4.scope. Dec 13 02:01:02.394419 sshd[1423]: pam_unix(sshd:session): session closed for user core Dec 13 02:01:02.396788 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:60944.service. Dec 13 02:01:02.397150 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:60940.service: Deactivated successfully. Dec 13 02:01:02.397984 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:01:02.397999 systemd-logind[1304]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:01:02.398800 systemd-logind[1304]: Removed session 4. Dec 13 02:01:02.428221 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 60944 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:01:02.429139 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:01:02.432505 systemd-logind[1304]: New session 5 of user core. Dec 13 02:01:02.433425 systemd[1]: Started session-5.scope. Dec 13 02:01:02.488783 sudo[1436]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:01:02.489032 sudo[1436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:01:02.500126 systemd[1]: Starting coreos-metadata.service... Dec 13 02:01:02.507437 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 02:01:02.507706 systemd[1]: Finished coreos-metadata.service. Dec 13 02:01:02.915322 systemd[1]: Stopped kubelet.service. Dec 13 02:01:02.917282 systemd[1]: Starting kubelet.service... Dec 13 02:01:02.931318 systemd[1]: Reloading. Dec 13 02:01:02.991488 /usr/lib/systemd/system-generators/torcx-generator[1504]: time="2024-12-13T02:01:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:01:02.991526 /usr/lib/systemd/system-generators/torcx-generator[1504]: time="2024-12-13T02:01:02Z" level=info msg="torcx already run" Dec 13 02:01:03.452978 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:01:03.452994 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:01:03.470022 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:01:03.547481 systemd[1]: Started kubelet.service. Dec 13 02:01:03.549792 systemd[1]: Stopping kubelet.service... Dec 13 02:01:03.550011 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:01:03.550233 systemd[1]: Stopped kubelet.service. Dec 13 02:01:03.551571 systemd[1]: Starting kubelet.service... Dec 13 02:01:03.622543 systemd[1]: Started kubelet.service. 
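The reload above prints migration warnings for locksmithd.service (CPUShares= should become CPUWeight=, MemoryLimit= should become MemoryMax=) and notes docker.socket's legacy /var/run path. A small, hypothetical helper like the one below scans captured journal text for that exact warning pattern and lists the directives still needing an update; the regex targets only the message format visible in this log and is not a systemd tool.

```python
#!/usr/bin/env python3
"""Collect systemd 'please use X= instead' warnings from captured journal
text, as emitted during the reload above. Hypothetical helper sketch."""
import re
import sys

# Matches e.g. ".../locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead."
WARNING = re.compile(
    r"(?P<unit>[^\s:]+\.(?:service|socket)):\d+: Unit uses (?P<old>\w+=); "
    r"please use (?P<new>\w+=) instead"
)

def migrations(journal_text: str):
    """Yield (unit path, old directive, new directive) tuples found in the text."""
    for match in WARNING.finditer(journal_text):
        yield match.group("unit"), match.group("old"), match.group("new")

if __name__ == "__main__":
    for unit, old, new in migrations(sys.stdin.read()):
        print(f"{unit}: replace {old} with {new}")
```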
Dec 13 02:01:03.659146 kubelet[1567]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:01:03.659146 kubelet[1567]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:01:03.659146 kubelet[1567]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:01:03.659552 kubelet[1567]: I1213 02:01:03.659165 1567 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:01:04.013327 kubelet[1567]: I1213 02:01:04.013288 1567 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:01:04.013327 kubelet[1567]: I1213 02:01:04.013314 1567 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:01:04.013556 kubelet[1567]: I1213 02:01:04.013534 1567 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:01:04.038627 kubelet[1567]: I1213 02:01:04.038578 1567 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:01:04.054288 kubelet[1567]: I1213 02:01:04.054259 1567 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:01:04.055426 kubelet[1567]: I1213 02:01:04.055397 1567 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:01:04.055578 kubelet[1567]: I1213 02:01:04.055549 1567 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:01:04.055947 kubelet[1567]: I1213 02:01:04.055926 1567 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:01:04.055947 kubelet[1567]: I1213 
02:01:04.055942 1567 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:01:04.056054 kubelet[1567]: I1213 02:01:04.056033 1567 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:01:04.056142 kubelet[1567]: I1213 02:01:04.056124 1567 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:01:04.056142 kubelet[1567]: I1213 02:01:04.056141 1567 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:01:04.056188 kubelet[1567]: I1213 02:01:04.056165 1567 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:01:04.056188 kubelet[1567]: I1213 02:01:04.056181 1567 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:01:04.057153 kubelet[1567]: E1213 02:01:04.056268 1567 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:04.057393 kubelet[1567]: E1213 02:01:04.057357 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:04.058152 kubelet[1567]: I1213 02:01:04.058124 1567 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:01:04.060488 kubelet[1567]: I1213 02:01:04.060450 1567 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:01:04.060622 kubelet[1567]: W1213 02:01:04.060526 1567 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:01:04.061662 kubelet[1567]: I1213 02:01:04.061633 1567 server.go:1256] "Started kubelet" Dec 13 02:01:04.061746 kubelet[1567]: I1213 02:01:04.061725 1567 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:01:04.062390 kubelet[1567]: I1213 02:01:04.062199 1567 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:01:04.062541 kubelet[1567]: I1213 02:01:04.062522 1567 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:01:04.063276 kubelet[1567]: I1213 02:01:04.063234 1567 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:01:04.064031 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Dec 13 02:01:04.064176 kubelet[1567]: I1213 02:01:04.064148 1567 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:01:04.067287 kubelet[1567]: E1213 02:01:04.067272 1567 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:01:04.067502 kubelet[1567]: E1213 02:01:04.067491 1567 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" Dec 13 02:01:04.067601 kubelet[1567]: I1213 02:01:04.067586 1567 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:01:04.067756 kubelet[1567]: I1213 02:01:04.067739 1567 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:01:04.067846 kubelet[1567]: I1213 02:01:04.067833 1567 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:01:04.068396 kubelet[1567]: I1213 02:01:04.068385 1567 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:01:04.068524 kubelet[1567]: I1213 02:01:04.068506 1567 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:01:04.069692 kubelet[1567]: I1213 02:01:04.069674 1567 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:01:04.089414 kubelet[1567]: E1213 02:01:04.089390 1567 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.71\" not found" node="10.0.0.71" Dec 13 02:01:04.089732 kubelet[1567]: I1213 02:01:04.089697 1567 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:01:04.089732 kubelet[1567]: I1213 02:01:04.089730 1567 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:01:04.089803 kubelet[1567]: I1213 02:01:04.089757 1567 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:01:04.168871 kubelet[1567]: I1213 02:01:04.168825 1567 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.71" Dec 13 02:01:04.395234 kubelet[1567]: I1213 02:01:04.395179 1567 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.71" Dec 13 02:01:04.529123 kubelet[1567]: I1213 02:01:04.529063 1567 policy_none.go:49] "None policy: Start" Dec 13 02:01:04.529891 kubelet[1567]: I1213 02:01:04.529870 1567 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:01:04.529940 kubelet[1567]: I1213 02:01:04.529898 1567 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:01:04.537351 kubelet[1567]: I1213 02:01:04.537311 1567 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:01:04.537627 kubelet[1567]: I1213 02:01:04.537598 1567 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:01:04.544356 kubelet[1567]: I1213 02:01:04.544226 1567 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 02:01:04.544587 env[1314]: time="2024-12-13T02:01:04.544481918Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:01:04.545178 kubelet[1567]: I1213 02:01:04.545147 1567 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 02:01:04.571515 kubelet[1567]: I1213 02:01:04.571484 1567 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:01:04.572392 kubelet[1567]: I1213 02:01:04.572365 1567 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:01:04.572446 kubelet[1567]: I1213 02:01:04.572421 1567 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:01:04.572480 kubelet[1567]: I1213 02:01:04.572451 1567 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:01:04.572532 kubelet[1567]: E1213 02:01:04.572520 1567 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 02:01:05.014961 kubelet[1567]: I1213 02:01:05.014924 1567 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 02:01:05.015314 kubelet[1567]: W1213 02:01:05.015042 1567 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.Service ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:01:05.015314 kubelet[1567]: W1213 02:01:05.015118 1567 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:01:05.015314 kubelet[1567]: W1213 02:01:05.015133 1567 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 02:01:05.057567 kubelet[1567]: I1213 02:01:05.057531 1567 apiserver.go:52] "Watching apiserver" Dec 13 02:01:05.057567 kubelet[1567]: E1213 02:01:05.057565 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:05.060062 kubelet[1567]: I1213 02:01:05.060042 1567 topology_manager.go:215] "Topology Admit Handler" podUID="0ad5cf1b-152b-48ea-a10e-c6ba3015e769" podNamespace="kube-system" podName="cilium-lf6hj" Dec 13 02:01:05.060193 kubelet[1567]: I1213 02:01:05.060176 1567 topology_manager.go:215] "Topology Admit Handler" podUID="615c8b5d-3ec1-457b-ae5d-25fee9742405" podNamespace="kube-system" podName="kube-proxy-d4l9k" Dec 13 02:01:05.068159 kubelet[1567]: I1213 02:01:05.068134 1567 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:01:05.132569 kubelet[1567]: I1213 02:01:05.132538 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-run\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132629 kubelet[1567]: I1213 02:01:05.132607 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dxcz\" (UniqueName: \"kubernetes.io/projected/615c8b5d-3ec1-457b-ae5d-25fee9742405-kube-api-access-4dxcz\") pod \"kube-proxy-d4l9k\" (UID: \"615c8b5d-3ec1-457b-ae5d-25fee9742405\") " pod="kube-system/kube-proxy-d4l9k" Dec 13 02:01:05.132669 kubelet[1567]: I1213 02:01:05.132636 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-cgroup\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " 
pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132692 kubelet[1567]: I1213 02:01:05.132683 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-etc-cni-netd\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132736 kubelet[1567]: I1213 02:01:05.132722 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-host-proc-sys-kernel\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132759 kubelet[1567]: I1213 02:01:05.132748 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/615c8b5d-3ec1-457b-ae5d-25fee9742405-lib-modules\") pod \"kube-proxy-d4l9k\" (UID: \"615c8b5d-3ec1-457b-ae5d-25fee9742405\") " pod="kube-system/kube-proxy-d4l9k" Dec 13 02:01:05.132780 kubelet[1567]: I1213 02:01:05.132771 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-hubble-tls\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132806 kubelet[1567]: I1213 02:01:05.132795 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmfrv\" (UniqueName: \"kubernetes.io/projected/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-kube-api-access-mmfrv\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132828 kubelet[1567]: I1213 02:01:05.132818 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/615c8b5d-3ec1-457b-ae5d-25fee9742405-xtables-lock\") pod \"kube-proxy-d4l9k\" (UID: \"615c8b5d-3ec1-457b-ae5d-25fee9742405\") " pod="kube-system/kube-proxy-d4l9k" Dec 13 02:01:05.132857 kubelet[1567]: I1213 02:01:05.132846 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-bpf-maps\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132891 kubelet[1567]: I1213 02:01:05.132872 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-hostproc\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132914 kubelet[1567]: I1213 02:01:05.132901 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cni-path\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132939 kubelet[1567]: I1213 02:01:05.132920 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-host-proc-sys-net\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.132960 kubelet[1567]: I1213 02:01:05.132936 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/615c8b5d-3ec1-457b-ae5d-25fee9742405-kube-proxy\") pod \"kube-proxy-d4l9k\" (UID: \"615c8b5d-3ec1-457b-ae5d-25fee9742405\") " pod="kube-system/kube-proxy-d4l9k" Dec 13 02:01:05.132981 kubelet[1567]: I1213 02:01:05.132966 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-lib-modules\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.133002 kubelet[1567]: I1213 02:01:05.132982 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-xtables-lock\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.133002 kubelet[1567]: I1213 02:01:05.133000 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-clustermesh-secrets\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.133043 kubelet[1567]: I1213 02:01:05.133034 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-config-path\") pod \"cilium-lf6hj\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " pod="kube-system/cilium-lf6hj" Dec 13 02:01:05.138362 sudo[1436]: pam_unix(sudo:session): session closed for user root Dec 13 02:01:05.139645 sshd[1430]: pam_unix(sshd:session): session closed for user core Dec 13 02:01:05.141663 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:60944.service: Deactivated successfully. Dec 13 02:01:05.142625 systemd-logind[1304]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:01:05.142663 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:01:05.143409 systemd-logind[1304]: Removed session 5. 
Dec 13 02:01:05.366612 kubelet[1567]: E1213 02:01:05.366560 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:05.366612 kubelet[1567]: E1213 02:01:05.366573 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:05.367229 env[1314]: time="2024-12-13T02:01:05.367190019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4l9k,Uid:615c8b5d-3ec1-457b-ae5d-25fee9742405,Namespace:kube-system,Attempt:0,}" Dec 13 02:01:05.367297 env[1314]: time="2024-12-13T02:01:05.367269589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lf6hj,Uid:0ad5cf1b-152b-48ea-a10e-c6ba3015e769,Namespace:kube-system,Attempt:0,}" Dec 13 02:01:06.057724 kubelet[1567]: E1213 02:01:06.057662 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:06.166761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2698250230.mount: Deactivated successfully. Dec 13 02:01:06.175664 env[1314]: time="2024-12-13T02:01:06.175608583Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:06.176555 env[1314]: time="2024-12-13T02:01:06.176533578Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:06.179197 env[1314]: time="2024-12-13T02:01:06.179168048Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:06.180305 env[1314]: time="2024-12-13T02:01:06.180274954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:06.182016 env[1314]: time="2024-12-13T02:01:06.181993797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:06.183237 env[1314]: time="2024-12-13T02:01:06.183209127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:06.184513 env[1314]: time="2024-12-13T02:01:06.184488185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:06.185611 env[1314]: time="2024-12-13T02:01:06.185592336Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:06.202545 env[1314]: time="2024-12-13T02:01:06.202478574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:01:06.202545 env[1314]: time="2024-12-13T02:01:06.202511806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:01:06.202545 env[1314]: time="2024-12-13T02:01:06.202521334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:01:06.203236 env[1314]: time="2024-12-13T02:01:06.202847806Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35 pid=1628 runtime=io.containerd.runc.v2 Dec 13 02:01:06.203475 env[1314]: time="2024-12-13T02:01:06.203438534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:01:06.203475 env[1314]: time="2024-12-13T02:01:06.203463881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:01:06.203560 env[1314]: time="2024-12-13T02:01:06.203473099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:01:06.203593 env[1314]: time="2024-12-13T02:01:06.203565432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/077ca2071839a3626deb91463637bdd5d5653b0c7e20b41baa32f0ae4b9bcd25 pid=1631 runtime=io.containerd.runc.v2 Dec 13 02:01:06.234882 env[1314]: time="2024-12-13T02:01:06.234823552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lf6hj,Uid:0ad5cf1b-152b-48ea-a10e-c6ba3015e769,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\"" Dec 13 02:01:06.235385 env[1314]: time="2024-12-13T02:01:06.235354528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4l9k,Uid:615c8b5d-3ec1-457b-ae5d-25fee9742405,Namespace:kube-system,Attempt:0,} returns sandbox id \"077ca2071839a3626deb91463637bdd5d5653b0c7e20b41baa32f0ae4b9bcd25\"" Dec 13 02:01:06.236176 kubelet[1567]: E1213 02:01:06.236161 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:06.236339 kubelet[1567]: E1213 02:01:06.236315 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:06.237377 env[1314]: time="2024-12-13T02:01:06.237359438Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:01:07.058805 kubelet[1567]: E1213 02:01:07.058768 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:07.390847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415896969.mount: Deactivated successfully. 
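The two "returns sandbox id" entries above tie each pod (cilium-lf6hj, kube-proxy-d4l9k) to the containerd sandbox whose shim was just started under /run/containerd/io.containerd.runtime.v2.task/k8s.io/. The following illustrative sketch extracts that pod-to-sandbox mapping from such lines; it assumes only the message layout visible in this log.

```python
#!/usr/bin/env python3
"""Extract the pod-name -> sandbox-id mapping from the containerd
'RunPodSandbox ... returns sandbox id' entries above. Illustrative sketch."""
import re
import sys

RETURNS = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:(?P<pod>[^,]+),Uid:(?P<uid>[^,]+),'
    r'.*?returns sandbox id \\?"(?P<sandbox>[0-9a-f]+)\\?"'
)

def sandbox_ids(log_text: str):
    """Yield (pod name, sandbox id) pairs for completed RunPodSandbox calls."""
    for match in RETURNS.finditer(log_text):
        yield match.group("pod"), match.group("sandbox")

if __name__ == "__main__":
    for pod, sandbox in sandbox_ids(sys.stdin.read()):
        print(f"{pod} -> {sandbox}")
```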
Dec 13 02:01:07.925064 env[1314]: time="2024-12-13T02:01:07.925018822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:07.926768 env[1314]: time="2024-12-13T02:01:07.926729630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:07.928193 env[1314]: time="2024-12-13T02:01:07.928168949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:07.929468 env[1314]: time="2024-12-13T02:01:07.929429443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:07.929839 env[1314]: time="2024-12-13T02:01:07.929811780Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:01:07.930537 env[1314]: time="2024-12-13T02:01:07.930510130Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:01:07.931402 env[1314]: time="2024-12-13T02:01:07.931365263Z" level=info msg="CreateContainer within sandbox \"077ca2071839a3626deb91463637bdd5d5653b0c7e20b41baa32f0ae4b9bcd25\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:01:07.947560 env[1314]: time="2024-12-13T02:01:07.947497287Z" level=info msg="CreateContainer within sandbox \"077ca2071839a3626deb91463637bdd5d5653b0c7e20b41baa32f0ae4b9bcd25\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"049487b04177a66da416996bd2eb99d7f1c4ba95717798b0de32892e5756bfd9\"" Dec 13 02:01:07.948142 env[1314]: time="2024-12-13T02:01:07.948112110Z" level=info msg="StartContainer for \"049487b04177a66da416996bd2eb99d7f1c4ba95717798b0de32892e5756bfd9\"" Dec 13 02:01:07.988741 env[1314]: time="2024-12-13T02:01:07.988684838Z" level=info msg="StartContainer for \"049487b04177a66da416996bd2eb99d7f1c4ba95717798b0de32892e5756bfd9\" returns successfully" Dec 13 02:01:08.059446 kubelet[1567]: E1213 02:01:08.059382 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:08.580516 kubelet[1567]: E1213 02:01:08.580488 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:08.591258 kubelet[1567]: I1213 02:01:08.591214 1567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-d4l9k" podStartSLOduration=2.897957556 podStartE2EDuration="4.591162118s" podCreationTimestamp="2024-12-13 02:01:04 +0000 UTC" firstStartedPulling="2024-12-13 02:01:06.237007888 +0000 UTC m=+2.610276953" lastFinishedPulling="2024-12-13 02:01:07.930212441 +0000 UTC m=+4.303481515" observedRunningTime="2024-12-13 02:01:08.590952976 +0000 UTC m=+4.964222050" watchObservedRunningTime="2024-12-13 02:01:08.591162118 +0000 UTC m=+4.964431202" Dec 13 02:01:09.059792 kubelet[1567]: E1213 02:01:09.059757 1567 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:09.582371 kubelet[1567]: E1213 02:01:09.582329 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:10.060216 kubelet[1567]: E1213 02:01:10.060155 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:11.060587 kubelet[1567]: E1213 02:01:11.060527 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:12.061371 kubelet[1567]: E1213 02:01:12.061315 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:13.061651 kubelet[1567]: E1213 02:01:13.061591 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:13.995413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206554395.mount: Deactivated successfully. Dec 13 02:01:14.062803 kubelet[1567]: E1213 02:01:14.062741 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:15.063186 kubelet[1567]: E1213 02:01:15.063153 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:16.064182 kubelet[1567]: E1213 02:01:16.064144 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:17.065003 kubelet[1567]: E1213 02:01:17.064953 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:17.765565 env[1314]: time="2024-12-13T02:01:17.765490508Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:17.767428 env[1314]: time="2024-12-13T02:01:17.767383138Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:17.769747 env[1314]: time="2024-12-13T02:01:17.769657423Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:17.770249 env[1314]: time="2024-12-13T02:01:17.770217944Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:01:17.772155 env[1314]: time="2024-12-13T02:01:17.772119049Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:01:17.784389 env[1314]: time="2024-12-13T02:01:17.784339427Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" 
for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\"" Dec 13 02:01:17.784834 env[1314]: time="2024-12-13T02:01:17.784790833Z" level=info msg="StartContainer for \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\"" Dec 13 02:01:17.820915 env[1314]: time="2024-12-13T02:01:17.820850438Z" level=info msg="StartContainer for \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\" returns successfully" Dec 13 02:01:18.065953 kubelet[1567]: E1213 02:01:18.065816 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:18.339997 env[1314]: time="2024-12-13T02:01:18.339871698Z" level=info msg="shim disconnected" id=7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff Dec 13 02:01:18.339997 env[1314]: time="2024-12-13T02:01:18.339921021Z" level=warning msg="cleaning up after shim disconnected" id=7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff namespace=k8s.io Dec 13 02:01:18.339997 env[1314]: time="2024-12-13T02:01:18.339930609Z" level=info msg="cleaning up dead shim" Dec 13 02:01:18.347012 env[1314]: time="2024-12-13T02:01:18.346942048Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:01:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1905 runtime=io.containerd.runc.v2\n" Dec 13 02:01:18.596216 kubelet[1567]: E1213 02:01:18.596111 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:18.597930 env[1314]: time="2024-12-13T02:01:18.597889604Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:01:18.613065 env[1314]: time="2024-12-13T02:01:18.613008617Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\"" Dec 13 02:01:18.613480 env[1314]: time="2024-12-13T02:01:18.613432122Z" level=info msg="StartContainer for \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\"" Dec 13 02:01:18.653403 env[1314]: time="2024-12-13T02:01:18.653349902Z" level=info msg="StartContainer for \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\" returns successfully" Dec 13 02:01:18.663944 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:01:18.664290 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:01:18.664675 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:01:18.666636 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:01:18.673763 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 02:01:18.687475 env[1314]: time="2024-12-13T02:01:18.687423081Z" level=info msg="shim disconnected" id=0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4 Dec 13 02:01:18.687647 env[1314]: time="2024-12-13T02:01:18.687477623Z" level=warning msg="cleaning up after shim disconnected" id=0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4 namespace=k8s.io Dec 13 02:01:18.687647 env[1314]: time="2024-12-13T02:01:18.687490818Z" level=info msg="cleaning up dead shim" Dec 13 02:01:18.693630 env[1314]: time="2024-12-13T02:01:18.693595306Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:01:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1970 runtime=io.containerd.runc.v2\n" Dec 13 02:01:18.779175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff-rootfs.mount: Deactivated successfully. Dec 13 02:01:19.066030 kubelet[1567]: E1213 02:01:19.065962 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:19.598861 kubelet[1567]: E1213 02:01:19.598834 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:19.600275 env[1314]: time="2024-12-13T02:01:19.600222416Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:01:19.862851 env[1314]: time="2024-12-13T02:01:19.862793981Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\"" Dec 13 02:01:19.863289 env[1314]: time="2024-12-13T02:01:19.863262760Z" level=info msg="StartContainer for \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\"" Dec 13 02:01:19.903659 env[1314]: time="2024-12-13T02:01:19.903621387Z" level=info msg="StartContainer for \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\" returns successfully" Dec 13 02:01:19.923306 env[1314]: time="2024-12-13T02:01:19.923259175Z" level=info msg="shim disconnected" id=655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7 Dec 13 02:01:19.923306 env[1314]: time="2024-12-13T02:01:19.923303218Z" level=warning msg="cleaning up after shim disconnected" id=655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7 namespace=k8s.io Dec 13 02:01:19.923306 env[1314]: time="2024-12-13T02:01:19.923312265Z" level=info msg="cleaning up dead shim" Dec 13 02:01:19.929293 env[1314]: time="2024-12-13T02:01:19.929246583Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:01:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2027 runtime=io.containerd.runc.v2\n" Dec 13 02:01:20.066928 kubelet[1567]: E1213 02:01:20.066885 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:20.602276 kubelet[1567]: E1213 02:01:20.602246 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:20.603955 env[1314]: time="2024-12-13T02:01:20.603919646Z" level=info 
msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:01:20.853509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7-rootfs.mount: Deactivated successfully. Dec 13 02:01:20.890946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3293017841.mount: Deactivated successfully. Dec 13 02:01:21.052858 env[1314]: time="2024-12-13T02:01:21.052793705Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\"" Dec 13 02:01:21.053292 env[1314]: time="2024-12-13T02:01:21.053270540Z" level=info msg="StartContainer for \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\"" Dec 13 02:01:21.067098 kubelet[1567]: E1213 02:01:21.067052 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:21.092211 env[1314]: time="2024-12-13T02:01:21.092147879Z" level=info msg="StartContainer for \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\" returns successfully" Dec 13 02:01:21.111206 env[1314]: time="2024-12-13T02:01:21.111137661Z" level=info msg="shim disconnected" id=72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1 Dec 13 02:01:21.111206 env[1314]: time="2024-12-13T02:01:21.111202403Z" level=warning msg="cleaning up after shim disconnected" id=72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1 namespace=k8s.io Dec 13 02:01:21.111424 env[1314]: time="2024-12-13T02:01:21.111221278Z" level=info msg="cleaning up dead shim" Dec 13 02:01:21.118807 env[1314]: time="2024-12-13T02:01:21.118678553Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:01:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2081 runtime=io.containerd.runc.v2\n" Dec 13 02:01:21.606047 kubelet[1567]: E1213 02:01:21.605945 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:21.608341 env[1314]: time="2024-12-13T02:01:21.608301761Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:01:21.623564 env[1314]: time="2024-12-13T02:01:21.623510623Z" level=info msg="CreateContainer within sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\"" Dec 13 02:01:21.624103 env[1314]: time="2024-12-13T02:01:21.624073860Z" level=info msg="StartContainer for \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\"" Dec 13 02:01:21.668178 env[1314]: time="2024-12-13T02:01:21.668112027Z" level=info msg="StartContainer for \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\" returns successfully" Dec 13 02:01:21.744146 kubelet[1567]: I1213 02:01:21.743846 1567 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:01:21.857465 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1-rootfs.mount: Deactivated successfully. Dec 13 02:01:21.971747 kernel: Initializing XFRM netlink socket Dec 13 02:01:22.067396 kubelet[1567]: E1213 02:01:22.067349 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:22.609806 kubelet[1567]: E1213 02:01:22.609773 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:22.622613 kubelet[1567]: I1213 02:01:22.622570 1567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-lf6hj" podStartSLOduration=7.089224236 podStartE2EDuration="18.622529971s" podCreationTimestamp="2024-12-13 02:01:04 +0000 UTC" firstStartedPulling="2024-12-13 02:01:06.237188828 +0000 UTC m=+2.610457912" lastFinishedPulling="2024-12-13 02:01:17.770494563 +0000 UTC m=+14.143763647" observedRunningTime="2024-12-13 02:01:22.621794883 +0000 UTC m=+18.995063957" watchObservedRunningTime="2024-12-13 02:01:22.622529971 +0000 UTC m=+18.995799055" Dec 13 02:01:23.068490 kubelet[1567]: E1213 02:01:23.068345 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:23.584958 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:01:23.585066 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:01:23.582319 systemd-networkd[1092]: cilium_host: Link UP Dec 13 02:01:23.582785 systemd-networkd[1092]: cilium_net: Link UP Dec 13 02:01:23.583397 systemd-networkd[1092]: cilium_net: Gained carrier Dec 13 02:01:23.584546 systemd-networkd[1092]: cilium_host: Gained carrier Dec 13 02:01:23.611639 kubelet[1567]: E1213 02:01:23.611610 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:23.652644 systemd-networkd[1092]: cilium_vxlan: Link UP Dec 13 02:01:23.652651 systemd-networkd[1092]: cilium_vxlan: Gained carrier Dec 13 02:01:23.701863 systemd-networkd[1092]: cilium_host: Gained IPv6LL Dec 13 02:01:23.836754 kernel: NET: Registered PF_ALG protocol family Dec 13 02:01:24.056730 kubelet[1567]: E1213 02:01:24.056661 1567 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:24.068808 kubelet[1567]: E1213 02:01:24.068771 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:24.309851 systemd-networkd[1092]: cilium_net: Gained IPv6LL Dec 13 02:01:24.408861 systemd-networkd[1092]: lxc_health: Link UP Dec 13 02:01:24.416599 systemd-networkd[1092]: lxc_health: Gained carrier Dec 13 02:01:24.416758 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:01:24.613272 kubelet[1567]: E1213 02:01:24.613233 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:25.069604 kubelet[1567]: E1213 02:01:25.069447 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:25.614221 kubelet[1567]: I1213 
02:01:25.614185 1567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:01:25.614952 kubelet[1567]: E1213 02:01:25.614877 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:25.717862 systemd-networkd[1092]: cilium_vxlan: Gained IPv6LL Dec 13 02:01:25.909898 systemd-networkd[1092]: lxc_health: Gained IPv6LL Dec 13 02:01:26.070249 kubelet[1567]: E1213 02:01:26.070200 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:26.680613 kubelet[1567]: I1213 02:01:26.680566 1567 topology_manager.go:215] "Topology Admit Handler" podUID="28230d24-e209-419f-b438-7e11b2a1f97c" podNamespace="default" podName="nginx-deployment-6d5f899847-nchgr" Dec 13 02:01:26.752210 kubelet[1567]: I1213 02:01:26.752174 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8s56\" (UniqueName: \"kubernetes.io/projected/28230d24-e209-419f-b438-7e11b2a1f97c-kube-api-access-r8s56\") pod \"nginx-deployment-6d5f899847-nchgr\" (UID: \"28230d24-e209-419f-b438-7e11b2a1f97c\") " pod="default/nginx-deployment-6d5f899847-nchgr" Dec 13 02:01:26.984606 env[1314]: time="2024-12-13T02:01:26.984504052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nchgr,Uid:28230d24-e209-419f-b438-7e11b2a1f97c,Namespace:default,Attempt:0,}" Dec 13 02:01:27.014139 systemd-networkd[1092]: lxc47ac113b7325: Link UP Dec 13 02:01:27.021796 kernel: eth0: renamed from tmp43e96 Dec 13 02:01:27.028726 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:01:27.028778 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc47ac113b7325: link becomes ready Dec 13 02:01:27.028900 systemd-networkd[1092]: lxc47ac113b7325: Gained carrier Dec 13 02:01:27.071331 kubelet[1567]: E1213 02:01:27.071295 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:28.071961 kubelet[1567]: E1213 02:01:28.071899 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:28.636370 env[1314]: time="2024-12-13T02:01:28.636291223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:01:28.636370 env[1314]: time="2024-12-13T02:01:28.636335709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:01:28.636370 env[1314]: time="2024-12-13T02:01:28.636347262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:01:28.636914 env[1314]: time="2024-12-13T02:01:28.636524878Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/43e96fe1c8363b2280d2a3a8f222e83da3a143cdd29b1dfb50e186b9328790fb pid=2636 runtime=io.containerd.runc.v2 Dec 13 02:01:28.653101 systemd[1]: run-containerd-runc-k8s.io-43e96fe1c8363b2280d2a3a8f222e83da3a143cdd29b1dfb50e186b9328790fb-runc.fiK7q0.mount: Deactivated successfully. 
Dec 13 02:01:28.662383 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:01:28.682877 env[1314]: time="2024-12-13T02:01:28.682816728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-nchgr,Uid:28230d24-e209-419f-b438-7e11b2a1f97c,Namespace:default,Attempt:0,} returns sandbox id \"43e96fe1c8363b2280d2a3a8f222e83da3a143cdd29b1dfb50e186b9328790fb\"" Dec 13 02:01:28.683976 env[1314]: time="2024-12-13T02:01:28.683937221Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:01:29.045876 systemd-networkd[1092]: lxc47ac113b7325: Gained IPv6LL Dec 13 02:01:29.072358 kubelet[1567]: E1213 02:01:29.072327 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:29.598628 kubelet[1567]: I1213 02:01:29.598572 1567 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 02:01:29.599475 kubelet[1567]: E1213 02:01:29.599427 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:29.622376 kubelet[1567]: E1213 02:01:29.622329 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:01:30.072803 kubelet[1567]: E1213 02:01:30.072701 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:31.073218 kubelet[1567]: E1213 02:01:31.073163 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:32.073562 kubelet[1567]: E1213 02:01:32.073497 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:32.624236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381200582.mount: Deactivated successfully. 
Dec 13 02:01:33.073916 kubelet[1567]: E1213 02:01:33.073791 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:34.074643 kubelet[1567]: E1213 02:01:34.074592 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:34.463049 env[1314]: time="2024-12-13T02:01:34.463000460Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:34.464727 env[1314]: time="2024-12-13T02:01:34.464676677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:34.466487 env[1314]: time="2024-12-13T02:01:34.466443339Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:34.467909 env[1314]: time="2024-12-13T02:01:34.467860588Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:34.468567 env[1314]: time="2024-12-13T02:01:34.468530156Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:01:34.470000 env[1314]: time="2024-12-13T02:01:34.469962875Z" level=info msg="CreateContainer within sandbox \"43e96fe1c8363b2280d2a3a8f222e83da3a143cdd29b1dfb50e186b9328790fb\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 02:01:34.480819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524195763.mount: Deactivated successfully. 
Dec 13 02:01:34.483867 env[1314]: time="2024-12-13T02:01:34.483830414Z" level=info msg="CreateContainer within sandbox \"43e96fe1c8363b2280d2a3a8f222e83da3a143cdd29b1dfb50e186b9328790fb\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"36c866834d2564e3c7844e5c4617a883a99d5ed04fad9a445f70d19785bd0c0f\"" Dec 13 02:01:34.484417 env[1314]: time="2024-12-13T02:01:34.484376014Z" level=info msg="StartContainer for \"36c866834d2564e3c7844e5c4617a883a99d5ed04fad9a445f70d19785bd0c0f\"" Dec 13 02:01:34.517452 env[1314]: time="2024-12-13T02:01:34.517389416Z" level=info msg="StartContainer for \"36c866834d2564e3c7844e5c4617a883a99d5ed04fad9a445f70d19785bd0c0f\" returns successfully" Dec 13 02:01:34.636730 kubelet[1567]: I1213 02:01:34.636691 1567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-nchgr" podStartSLOduration=2.851650808 podStartE2EDuration="8.636657484s" podCreationTimestamp="2024-12-13 02:01:26 +0000 UTC" firstStartedPulling="2024-12-13 02:01:28.683742783 +0000 UTC m=+25.057011867" lastFinishedPulling="2024-12-13 02:01:34.468749468 +0000 UTC m=+30.842018543" observedRunningTime="2024-12-13 02:01:34.636380211 +0000 UTC m=+31.009649285" watchObservedRunningTime="2024-12-13 02:01:34.636657484 +0000 UTC m=+31.009926548" Dec 13 02:01:35.075036 kubelet[1567]: E1213 02:01:35.074872 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:36.075434 kubelet[1567]: E1213 02:01:36.075371 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:37.075534 kubelet[1567]: E1213 02:01:37.075469 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:37.851896 update_engine[1308]: I1213 02:01:37.851822 1308 update_attempter.cc:509] Updating boot flags... 
Dec 13 02:01:38.076080 kubelet[1567]: E1213 02:01:38.075982 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:39.064823 kubelet[1567]: I1213 02:01:39.064770 1567 topology_manager.go:215] "Topology Admit Handler" podUID="7e7da027-d6ab-4529-b3de-102557adc2c8" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 02:01:39.076296 kubelet[1567]: E1213 02:01:39.076259 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:39.216151 kubelet[1567]: I1213 02:01:39.216115 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd96s\" (UniqueName: \"kubernetes.io/projected/7e7da027-d6ab-4529-b3de-102557adc2c8-kube-api-access-rd96s\") pod \"nfs-server-provisioner-0\" (UID: \"7e7da027-d6ab-4529-b3de-102557adc2c8\") " pod="default/nfs-server-provisioner-0" Dec 13 02:01:39.216151 kubelet[1567]: I1213 02:01:39.216164 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7e7da027-d6ab-4529-b3de-102557adc2c8-data\") pod \"nfs-server-provisioner-0\" (UID: \"7e7da027-d6ab-4529-b3de-102557adc2c8\") " pod="default/nfs-server-provisioner-0" Dec 13 02:01:39.367613 env[1314]: time="2024-12-13T02:01:39.367576331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7e7da027-d6ab-4529-b3de-102557adc2c8,Namespace:default,Attempt:0,}" Dec 13 02:01:39.541168 systemd-networkd[1092]: lxc5a3fc6809f57: Link UP Dec 13 02:01:39.546737 kernel: eth0: renamed from tmp1fd85 Dec 13 02:01:39.556840 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:01:39.556913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5a3fc6809f57: link becomes ready Dec 13 02:01:39.556695 systemd-networkd[1092]: lxc5a3fc6809f57: Gained carrier Dec 13 02:01:39.685388 env[1314]: time="2024-12-13T02:01:39.685264815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:01:39.685388 env[1314]: time="2024-12-13T02:01:39.685304802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:01:39.685592 env[1314]: time="2024-12-13T02:01:39.685559919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:01:39.685871 env[1314]: time="2024-12-13T02:01:39.685816399Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1fd85b14e9624c03fabedeeabe228ff26cf69b22b9c8434de001db27178b2089 pid=2773 runtime=io.containerd.runc.v2 Dec 13 02:01:39.706116 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:01:39.725154 env[1314]: time="2024-12-13T02:01:39.725110041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7e7da027-d6ab-4529-b3de-102557adc2c8,Namespace:default,Attempt:0,} returns sandbox id \"1fd85b14e9624c03fabedeeabe228ff26cf69b22b9c8434de001db27178b2089\"" Dec 13 02:01:39.726503 env[1314]: time="2024-12-13T02:01:39.726481973Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 02:01:40.076872 kubelet[1567]: E1213 02:01:40.076783 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:41.077587 kubelet[1567]: E1213 02:01:41.077521 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:41.397855 systemd-networkd[1092]: lxc5a3fc6809f57: Gained IPv6LL Dec 13 02:01:42.078537 kubelet[1567]: E1213 02:01:42.078484 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:42.178465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount322081863.mount: Deactivated successfully. Dec 13 02:01:43.079254 kubelet[1567]: E1213 02:01:43.079197 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:44.056332 kubelet[1567]: E1213 02:01:44.056276 1567 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:44.079522 kubelet[1567]: E1213 02:01:44.079500 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:45.053082 env[1314]: time="2024-12-13T02:01:45.053004722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:45.055060 env[1314]: time="2024-12-13T02:01:45.054996757Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:45.056685 env[1314]: time="2024-12-13T02:01:45.056650529Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:45.058163 env[1314]: time="2024-12-13T02:01:45.058131773Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:45.058738 env[1314]: time="2024-12-13T02:01:45.058691767Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 02:01:45.060378 env[1314]: time="2024-12-13T02:01:45.060346942Z" level=info msg="CreateContainer within sandbox \"1fd85b14e9624c03fabedeeabe228ff26cf69b22b9c8434de001db27178b2089\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 02:01:45.070536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1269147941.mount: Deactivated successfully. Dec 13 02:01:45.072889 env[1314]: time="2024-12-13T02:01:45.072858834Z" level=info msg="CreateContainer within sandbox \"1fd85b14e9624c03fabedeeabe228ff26cf69b22b9c8434de001db27178b2089\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a72c58b24e2b9041360db08201fccbc7fa502a87da35bbc293368d4af983cc1a\"" Dec 13 02:01:45.073255 env[1314]: time="2024-12-13T02:01:45.073229137Z" level=info msg="StartContainer for \"a72c58b24e2b9041360db08201fccbc7fa502a87da35bbc293368d4af983cc1a\"" Dec 13 02:01:45.080654 kubelet[1567]: E1213 02:01:45.080613 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:45.107946 env[1314]: time="2024-12-13T02:01:45.107900189Z" level=info msg="StartContainer for \"a72c58b24e2b9041360db08201fccbc7fa502a87da35bbc293368d4af983cc1a\" returns successfully" Dec 13 02:01:46.080960 kubelet[1567]: E1213 02:01:46.080912 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:47.081584 kubelet[1567]: E1213 02:01:47.081511 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:48.082490 kubelet[1567]: E1213 02:01:48.082443 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:49.083176 kubelet[1567]: E1213 02:01:49.083120 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:50.083923 kubelet[1567]: E1213 02:01:50.083859 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:51.084557 kubelet[1567]: E1213 02:01:51.084488 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:52.085046 kubelet[1567]: E1213 02:01:52.084987 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:53.085605 kubelet[1567]: E1213 02:01:53.085538 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:54.086092 kubelet[1567]: E1213 02:01:54.086041 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:54.676998 kubelet[1567]: I1213 02:01:54.676961 1567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.344259175 podStartE2EDuration="15.676911673s" podCreationTimestamp="2024-12-13 02:01:39 +0000 UTC" firstStartedPulling="2024-12-13 02:01:39.72627643 +0000 UTC m=+36.099545494" lastFinishedPulling="2024-12-13 02:01:45.058928918 +0000 UTC m=+41.432197992" observedRunningTime="2024-12-13 02:01:45.683224822 +0000 UTC m=+42.056493916" 
watchObservedRunningTime="2024-12-13 02:01:54.676911673 +0000 UTC m=+51.050180737" Dec 13 02:01:54.677194 kubelet[1567]: I1213 02:01:54.677070 1567 topology_manager.go:215] "Topology Admit Handler" podUID="2e83986c-c137-45c0-abaf-0d15cfd3886b" podNamespace="default" podName="test-pod-1" Dec 13 02:01:54.791830 kubelet[1567]: I1213 02:01:54.791800 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89693a78-6379-49b7-890b-c3555bc79668\" (UniqueName: \"kubernetes.io/nfs/2e83986c-c137-45c0-abaf-0d15cfd3886b-pvc-89693a78-6379-49b7-890b-c3555bc79668\") pod \"test-pod-1\" (UID: \"2e83986c-c137-45c0-abaf-0d15cfd3886b\") " pod="default/test-pod-1" Dec 13 02:01:54.791969 kubelet[1567]: I1213 02:01:54.791842 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zxwj\" (UniqueName: \"kubernetes.io/projected/2e83986c-c137-45c0-abaf-0d15cfd3886b-kube-api-access-8zxwj\") pod \"test-pod-1\" (UID: \"2e83986c-c137-45c0-abaf-0d15cfd3886b\") " pod="default/test-pod-1" Dec 13 02:01:54.911741 kernel: FS-Cache: Loaded Dec 13 02:01:54.956140 kernel: RPC: Registered named UNIX socket transport module. Dec 13 02:01:54.956225 kernel: RPC: Registered udp transport module. Dec 13 02:01:54.956248 kernel: RPC: Registered tcp transport module. Dec 13 02:01:54.957022 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 02:01:55.012746 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 02:01:55.086350 kubelet[1567]: E1213 02:01:55.086295 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:55.190263 kernel: NFS: Registering the id_resolver key type Dec 13 02:01:55.190415 kernel: Key type id_resolver registered Dec 13 02:01:55.190445 kernel: Key type id_legacy registered Dec 13 02:01:55.213611 nfsidmap[2887]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 02:01:55.216391 nfsidmap[2890]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 02:01:55.280343 env[1314]: time="2024-12-13T02:01:55.280304103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2e83986c-c137-45c0-abaf-0d15cfd3886b,Namespace:default,Attempt:0,}" Dec 13 02:01:56.086480 kubelet[1567]: E1213 02:01:56.086426 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:56.139227 systemd-networkd[1092]: lxc15470c51ca2d: Link UP Dec 13 02:01:56.140833 kernel: eth0: renamed from tmp1284b Dec 13 02:01:56.148449 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:01:56.148502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc15470c51ca2d: link becomes ready Dec 13 02:01:56.148614 systemd-networkd[1092]: lxc15470c51ca2d: Gained carrier Dec 13 02:01:56.421789 env[1314]: time="2024-12-13T02:01:56.421721649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:01:56.421789 env[1314]: time="2024-12-13T02:01:56.421760461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:01:56.421789 env[1314]: time="2024-12-13T02:01:56.421771923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:01:56.422264 env[1314]: time="2024-12-13T02:01:56.421899544Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1284bafba6f3a61abcd0d78822a9a93200b8e87446e747e9611e1d9a9175badb pid=2924 runtime=io.containerd.runc.v2 Dec 13 02:01:56.445153 systemd-resolved[1237]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 02:01:56.466030 env[1314]: time="2024-12-13T02:01:56.465411339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2e83986c-c137-45c0-abaf-0d15cfd3886b,Namespace:default,Attempt:0,} returns sandbox id \"1284bafba6f3a61abcd0d78822a9a93200b8e87446e747e9611e1d9a9175badb\"" Dec 13 02:01:56.466902 env[1314]: time="2024-12-13T02:01:56.466859463Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 02:01:56.969016 env[1314]: time="2024-12-13T02:01:56.968957791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:56.971026 env[1314]: time="2024-12-13T02:01:56.970993534Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:56.972780 env[1314]: time="2024-12-13T02:01:56.972754337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:56.974418 env[1314]: time="2024-12-13T02:01:56.974383172Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:01:56.975035 env[1314]: time="2024-12-13T02:01:56.975001438Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 02:01:56.976544 env[1314]: time="2024-12-13T02:01:56.976520286Z" level=info msg="CreateContainer within sandbox \"1284bafba6f3a61abcd0d78822a9a93200b8e87446e747e9611e1d9a9175badb\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 02:01:56.988221 env[1314]: time="2024-12-13T02:01:56.988177277Z" level=info msg="CreateContainer within sandbox \"1284bafba6f3a61abcd0d78822a9a93200b8e87446e747e9611e1d9a9175badb\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"4aabe270498ded04f3c3f7e6a6ba6e3b06534e91da44688e826b34d0fe91feb7\"" Dec 13 02:01:56.988623 env[1314]: time="2024-12-13T02:01:56.988596168Z" level=info msg="StartContainer for \"4aabe270498ded04f3c3f7e6a6ba6e3b06534e91da44688e826b34d0fe91feb7\"" Dec 13 02:01:57.021644 env[1314]: time="2024-12-13T02:01:57.021574496Z" level=info msg="StartContainer for \"4aabe270498ded04f3c3f7e6a6ba6e3b06534e91da44688e826b34d0fe91feb7\" returns successfully" Dec 13 02:01:57.087115 kubelet[1567]: E1213 02:01:57.087063 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:57.333904 systemd-networkd[1092]: lxc15470c51ca2d: Gained IPv6LL Dec 13 02:01:58.087455 kubelet[1567]: E1213 02:01:58.087397 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Dec 13 02:01:59.088423 kubelet[1567]: E1213 02:01:59.088364 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:00.089115 kubelet[1567]: E1213 02:02:00.089055 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:01.089393 kubelet[1567]: E1213 02:02:01.089319 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:01.578747 kubelet[1567]: I1213 02:02:01.578667 1567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.069822785 podStartE2EDuration="22.578625547s" podCreationTimestamp="2024-12-13 02:01:39 +0000 UTC" firstStartedPulling="2024-12-13 02:01:56.466420615 +0000 UTC m=+52.839689689" lastFinishedPulling="2024-12-13 02:01:56.975223377 +0000 UTC m=+53.348492451" observedRunningTime="2024-12-13 02:01:57.682839343 +0000 UTC m=+54.056108417" watchObservedRunningTime="2024-12-13 02:02:01.578625547 +0000 UTC m=+57.951894621" Dec 13 02:02:01.593523 systemd[1]: run-containerd-runc-k8s.io-dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853-runc.TzPcau.mount: Deactivated successfully. Dec 13 02:02:01.605804 env[1314]: time="2024-12-13T02:02:01.605717290Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:02:01.610184 env[1314]: time="2024-12-13T02:02:01.610154100Z" level=info msg="StopContainer for \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\" with timeout 2 (s)" Dec 13 02:02:01.610448 env[1314]: time="2024-12-13T02:02:01.610416285Z" level=info msg="Stop container \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\" with signal terminated" Dec 13 02:02:01.615872 systemd-networkd[1092]: lxc_health: Link DOWN Dec 13 02:02:01.615883 systemd-networkd[1092]: lxc_health: Lost carrier Dec 13 02:02:01.668056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853-rootfs.mount: Deactivated successfully. 
Dec 13 02:02:01.679909 env[1314]: time="2024-12-13T02:02:01.679869932Z" level=info msg="shim disconnected" id=dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853 Dec 13 02:02:01.679909 env[1314]: time="2024-12-13T02:02:01.679910679Z" level=warning msg="cleaning up after shim disconnected" id=dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853 namespace=k8s.io Dec 13 02:02:01.680075 env[1314]: time="2024-12-13T02:02:01.679919716Z" level=info msg="cleaning up dead shim" Dec 13 02:02:01.685780 env[1314]: time="2024-12-13T02:02:01.685735844Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3053 runtime=io.containerd.runc.v2\n" Dec 13 02:02:01.691458 env[1314]: time="2024-12-13T02:02:01.691402991Z" level=info msg="StopContainer for \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\" returns successfully" Dec 13 02:02:01.692042 env[1314]: time="2024-12-13T02:02:01.692002802Z" level=info msg="StopPodSandbox for \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\"" Dec 13 02:02:01.692123 env[1314]: time="2024-12-13T02:02:01.692069257Z" level=info msg="Container to stop \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:02:01.692123 env[1314]: time="2024-12-13T02:02:01.692085167Z" level=info msg="Container to stop \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:02:01.692123 env[1314]: time="2024-12-13T02:02:01.692098532Z" level=info msg="Container to stop \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:02:01.692123 env[1314]: time="2024-12-13T02:02:01.692111687Z" level=info msg="Container to stop \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:02:01.692269 env[1314]: time="2024-12-13T02:02:01.692123600Z" level=info msg="Container to stop \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:02:01.694114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35-shm.mount: Deactivated successfully. 
Dec 13 02:02:01.715181 env[1314]: time="2024-12-13T02:02:01.715103646Z" level=info msg="shim disconnected" id=1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35 Dec 13 02:02:01.715181 env[1314]: time="2024-12-13T02:02:01.715158229Z" level=warning msg="cleaning up after shim disconnected" id=1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35 namespace=k8s.io Dec 13 02:02:01.715181 env[1314]: time="2024-12-13T02:02:01.715168629Z" level=info msg="cleaning up dead shim" Dec 13 02:02:01.721136 env[1314]: time="2024-12-13T02:02:01.721087611Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3086 runtime=io.containerd.runc.v2\n" Dec 13 02:02:01.721427 env[1314]: time="2024-12-13T02:02:01.721393887Z" level=info msg="TearDown network for sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" successfully" Dec 13 02:02:01.721427 env[1314]: time="2024-12-13T02:02:01.721421569Z" level=info msg="StopPodSandbox for \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" returns successfully" Dec 13 02:02:01.833948 kubelet[1567]: I1213 02:02:01.833806 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.833948 kubelet[1567]: I1213 02:02:01.833826 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-lib-modules\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.833948 kubelet[1567]: I1213 02:02:01.833918 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-xtables-lock\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.833948 kubelet[1567]: I1213 02:02:01.833944 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-hostproc\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834208 kubelet[1567]: I1213 02:02:01.833961 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-host-proc-sys-net\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834208 kubelet[1567]: I1213 02:02:01.833976 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cni-path\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834208 kubelet[1567]: I1213 02:02:01.833992 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-run\") pod 
\"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834208 kubelet[1567]: I1213 02:02:01.834007 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-etc-cni-netd\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834208 kubelet[1567]: I1213 02:02:01.834026 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-hubble-tls\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834208 kubelet[1567]: I1213 02:02:01.834044 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-cgroup\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834397 kubelet[1567]: I1213 02:02:01.834060 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-host-proc-sys-kernel\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834397 kubelet[1567]: I1213 02:02:01.834057 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-hostproc" (OuterVolumeSpecName: "hostproc") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.834397 kubelet[1567]: I1213 02:02:01.834087 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.834397 kubelet[1567]: I1213 02:02:01.834102 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.834397 kubelet[1567]: I1213 02:02:01.834075 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-bpf-maps\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834556 kubelet[1567]: I1213 02:02:01.834118 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.834556 kubelet[1567]: I1213 02:02:01.834127 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-clustermesh-secrets\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834556 kubelet[1567]: I1213 02:02:01.834145 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-config-path\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834556 kubelet[1567]: I1213 02:02:01.834154 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cni-path" (OuterVolumeSpecName: "cni-path") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.834556 kubelet[1567]: I1213 02:02:01.834166 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmfrv\" (UniqueName: \"kubernetes.io/projected/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-kube-api-access-mmfrv\") pod \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\" (UID: \"0ad5cf1b-152b-48ea-a10e-c6ba3015e769\") " Dec 13 02:02:01.834740 kubelet[1567]: I1213 02:02:01.834174 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.834740 kubelet[1567]: I1213 02:02:01.834190 1567 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cni-path\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.834740 kubelet[1567]: I1213 02:02:01.834201 1567 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-bpf-maps\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.834740 kubelet[1567]: I1213 02:02:01.834194 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.834740 kubelet[1567]: I1213 02:02:01.834211 1567 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-hostproc\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.834740 kubelet[1567]: I1213 02:02:01.834222 1567 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-host-proc-sys-net\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.834740 kubelet[1567]: I1213 02:02:01.834231 1567 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-lib-modules\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.834972 kubelet[1567]: I1213 02:02:01.834241 1567 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-xtables-lock\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.836342 kubelet[1567]: I1213 02:02:01.835055 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.836408 kubelet[1567]: I1213 02:02:01.834141 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:01.836934 kubelet[1567]: I1213 02:02:01.836897 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-kube-api-access-mmfrv" (OuterVolumeSpecName: "kube-api-access-mmfrv") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "kube-api-access-mmfrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:02:01.837299 kubelet[1567]: I1213 02:02:01.837274 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:02:01.837694 kubelet[1567]: I1213 02:02:01.837663 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:02:01.838225 kubelet[1567]: I1213 02:02:01.838196 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0ad5cf1b-152b-48ea-a10e-c6ba3015e769" (UID: "0ad5cf1b-152b-48ea-a10e-c6ba3015e769"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:02:01.934693 kubelet[1567]: I1213 02:02:01.934639 1567 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-hubble-tls\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.934693 kubelet[1567]: I1213 02:02:01.934681 1567 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-cgroup\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.934693 kubelet[1567]: I1213 02:02:01.934695 1567 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-host-proc-sys-kernel\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.934693 kubelet[1567]: I1213 02:02:01.934719 1567 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-clustermesh-secrets\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.934693 kubelet[1567]: I1213 02:02:01.934728 1567 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-config-path\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.935017 kubelet[1567]: I1213 02:02:01.934737 1567 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-mmfrv\" (UniqueName: \"kubernetes.io/projected/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-kube-api-access-mmfrv\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.935017 kubelet[1567]: I1213 02:02:01.934746 1567 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-cilium-run\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:01.935017 kubelet[1567]: I1213 02:02:01.934754 1567 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ad5cf1b-152b-48ea-a10e-c6ba3015e769-etc-cni-netd\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:02.090576 kubelet[1567]: E1213 02:02:02.090454 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:02.589085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35-rootfs.mount: Deactivated successfully. Dec 13 02:02:02.589236 systemd[1]: var-lib-kubelet-pods-0ad5cf1b\x2d152b\x2d48ea\x2da10e\x2dc6ba3015e769-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmmfrv.mount: Deactivated successfully. Dec 13 02:02:02.589351 systemd[1]: var-lib-kubelet-pods-0ad5cf1b\x2d152b\x2d48ea\x2da10e\x2dc6ba3015e769-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 02:02:02.589458 systemd[1]: var-lib-kubelet-pods-0ad5cf1b\x2d152b\x2d48ea\x2da10e\x2dc6ba3015e769-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:02:02.684635 kubelet[1567]: I1213 02:02:02.684596 1567 scope.go:117] "RemoveContainer" containerID="dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853" Dec 13 02:02:02.685704 env[1314]: time="2024-12-13T02:02:02.685667454Z" level=info msg="RemoveContainer for \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\"" Dec 13 02:02:02.807580 env[1314]: time="2024-12-13T02:02:02.807525110Z" level=info msg="RemoveContainer for \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\" returns successfully" Dec 13 02:02:02.807924 kubelet[1567]: I1213 02:02:02.807902 1567 scope.go:117] "RemoveContainer" containerID="72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1" Dec 13 02:02:02.809202 env[1314]: time="2024-12-13T02:02:02.809170520Z" level=info msg="RemoveContainer for \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\"" Dec 13 02:02:02.813286 env[1314]: time="2024-12-13T02:02:02.813257988Z" level=info msg="RemoveContainer for \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\" returns successfully" Dec 13 02:02:02.814070 kubelet[1567]: I1213 02:02:02.814044 1567 scope.go:117] "RemoveContainer" containerID="655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7" Dec 13 02:02:02.815321 env[1314]: time="2024-12-13T02:02:02.815272253Z" level=info msg="RemoveContainer for \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\"" Dec 13 02:02:02.818621 env[1314]: time="2024-12-13T02:02:02.818593979Z" level=info msg="RemoveContainer for \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\" returns successfully" Dec 13 02:02:02.818755 kubelet[1567]: I1213 02:02:02.818737 1567 scope.go:117] "RemoveContainer" containerID="0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4" Dec 13 02:02:02.819555 env[1314]: time="2024-12-13T02:02:02.819532056Z" level=info msg="RemoveContainer for \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\"" Dec 13 02:02:02.822025 env[1314]: time="2024-12-13T02:02:02.822006406Z" level=info msg="RemoveContainer for \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\" returns successfully" Dec 13 02:02:02.822113 kubelet[1567]: I1213 02:02:02.822100 1567 scope.go:117] "RemoveContainer" containerID="7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff" Dec 13 02:02:02.822884 env[1314]: time="2024-12-13T02:02:02.822854223Z" level=info msg="RemoveContainer for \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\"" Dec 13 02:02:02.825701 env[1314]: time="2024-12-13T02:02:02.825676168Z" level=info msg="RemoveContainer for \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\" returns successfully" Dec 13 02:02:02.825858 kubelet[1567]: I1213 02:02:02.825813 1567 scope.go:117] "RemoveContainer" containerID="dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853" Dec 13 02:02:02.826271 env[1314]: time="2024-12-13T02:02:02.826182894Z" level=error msg="ContainerStatus for \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\": not found" Dec 13 02:02:02.826381 kubelet[1567]: E1213 02:02:02.826369 1567 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\": not found" containerID="dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853" Dec 13 02:02:02.826453 kubelet[1567]: I1213 02:02:02.826445 1567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853"} err="failed to get container status \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd90a41d03e804474666616c46fd02842dec45a96ad84d853706e5c32e880853\": not found" Dec 13 02:02:02.826486 kubelet[1567]: I1213 02:02:02.826456 1567 scope.go:117] "RemoveContainer" containerID="72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1" Dec 13 02:02:02.826663 env[1314]: time="2024-12-13T02:02:02.826614626Z" level=error msg="ContainerStatus for \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\": not found" Dec 13 02:02:02.826757 kubelet[1567]: E1213 02:02:02.826744 1567 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\": not found" containerID="72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1" Dec 13 02:02:02.826821 kubelet[1567]: I1213 02:02:02.826764 1567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1"} err="failed to get container status \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\": rpc error: code = NotFound desc = an error occurred when try to find container \"72f9c8a00c99a210ce63b267852fae4023e4f2a5413bca8b2d9aa3031269aea1\": not found" Dec 13 02:02:02.826821 kubelet[1567]: I1213 02:02:02.826774 1567 scope.go:117] "RemoveContainer" containerID="655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7" Dec 13 02:02:02.826974 env[1314]: time="2024-12-13T02:02:02.826927967Z" level=error msg="ContainerStatus for \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\": not found" Dec 13 02:02:02.827055 kubelet[1567]: E1213 02:02:02.827044 1567 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\": not found" containerID="655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7" Dec 13 02:02:02.827084 kubelet[1567]: I1213 02:02:02.827061 1567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7"} err="failed to get container status \"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"655e34230e1ee170d5a1691fd1cd90d1fa2183fe9ef2780b3a30ce003a806cc7\": not found" Dec 13 02:02:02.827084 kubelet[1567]: I1213 02:02:02.827080 1567 scope.go:117] "RemoveContainer" containerID="0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4" Dec 13 02:02:02.827275 env[1314]: time="2024-12-13T02:02:02.827218544Z" level=error msg="ContainerStatus for \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\": not found" Dec 13 02:02:02.827364 kubelet[1567]: E1213 02:02:02.827352 1567 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\": not found" containerID="0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4" Dec 13 02:02:02.827393 kubelet[1567]: I1213 02:02:02.827371 1567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4"} err="failed to get container status \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ed8fc6d43ad35ed548f4a8505216ffa2c9bfd343a17483929621366967d45b4\": not found" Dec 13 02:02:02.827393 kubelet[1567]: I1213 02:02:02.827386 1567 scope.go:117] "RemoveContainer" containerID="7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff" Dec 13 02:02:02.827567 env[1314]: time="2024-12-13T02:02:02.827520623Z" level=error msg="ContainerStatus for \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\": not found" Dec 13 02:02:02.827691 kubelet[1567]: E1213 02:02:02.827677 1567 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\": not found" containerID="7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff" Dec 13 02:02:02.827762 kubelet[1567]: I1213 02:02:02.827717 1567 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff"} err="failed to get container status \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b613e634702bffcaa1842e1561903065c01e0c47709965f97963660184fe8ff\": not found" Dec 13 02:02:03.091035 kubelet[1567]: E1213 02:02:03.090976 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:04.056822 kubelet[1567]: E1213 02:02:04.056746 1567 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:04.069986 env[1314]: time="2024-12-13T02:02:04.069944553Z" level=info msg="StopPodSandbox for \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\"" Dec 13 02:02:04.070376 env[1314]: time="2024-12-13T02:02:04.070021106Z" level=info msg="TearDown network for sandbox 
\"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" successfully" Dec 13 02:02:04.070376 env[1314]: time="2024-12-13T02:02:04.070052897Z" level=info msg="StopPodSandbox for \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" returns successfully" Dec 13 02:02:04.070563 env[1314]: time="2024-12-13T02:02:04.070516980Z" level=info msg="RemovePodSandbox for \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\"" Dec 13 02:02:04.070612 env[1314]: time="2024-12-13T02:02:04.070556955Z" level=info msg="Forcibly stopping sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\"" Dec 13 02:02:04.070648 env[1314]: time="2024-12-13T02:02:04.070637938Z" level=info msg="TearDown network for sandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" successfully" Dec 13 02:02:04.074125 env[1314]: time="2024-12-13T02:02:04.074066483Z" level=info msg="RemovePodSandbox \"1b1b727876994574c928de794e8d101f7289622dc04e83e3373e917d37e8ca35\" returns successfully" Dec 13 02:02:04.091855 kubelet[1567]: E1213 02:02:04.091811 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:04.138374 kubelet[1567]: I1213 02:02:04.138330 1567 topology_manager.go:215] "Topology Admit Handler" podUID="b7e7437f-400b-4969-a9d9-ca106657603f" podNamespace="kube-system" podName="cilium-operator-5cc964979-fx5q5" Dec 13 02:02:04.138374 kubelet[1567]: E1213 02:02:04.138383 1567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0ad5cf1b-152b-48ea-a10e-c6ba3015e769" containerName="apply-sysctl-overwrites" Dec 13 02:02:04.138374 kubelet[1567]: E1213 02:02:04.138392 1567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0ad5cf1b-152b-48ea-a10e-c6ba3015e769" containerName="mount-bpf-fs" Dec 13 02:02:04.138374 kubelet[1567]: E1213 02:02:04.138398 1567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0ad5cf1b-152b-48ea-a10e-c6ba3015e769" containerName="clean-cilium-state" Dec 13 02:02:04.138374 kubelet[1567]: E1213 02:02:04.138408 1567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0ad5cf1b-152b-48ea-a10e-c6ba3015e769" containerName="cilium-agent" Dec 13 02:02:04.138374 kubelet[1567]: E1213 02:02:04.138413 1567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0ad5cf1b-152b-48ea-a10e-c6ba3015e769" containerName="mount-cgroup" Dec 13 02:02:04.138780 kubelet[1567]: I1213 02:02:04.138429 1567 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad5cf1b-152b-48ea-a10e-c6ba3015e769" containerName="cilium-agent" Dec 13 02:02:04.159218 kubelet[1567]: I1213 02:02:04.159179 1567 topology_manager.go:215] "Topology Admit Handler" podUID="926daac3-f06f-4f17-bac8-65da6fee3afe" podNamespace="kube-system" podName="cilium-2t6pd" Dec 13 02:02:04.248232 kubelet[1567]: I1213 02:02:04.248159 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-etc-cni-netd\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248232 kubelet[1567]: I1213 02:02:04.248218 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cni-path\") pod \"cilium-2t6pd\" (UID: 
\"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248232 kubelet[1567]: I1213 02:02:04.248243 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/926daac3-f06f-4f17-bac8-65da6fee3afe-hubble-tls\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248503 kubelet[1567]: I1213 02:02:04.248265 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-hostproc\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248503 kubelet[1567]: I1213 02:02:04.248296 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-config-path\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248503 kubelet[1567]: I1213 02:02:04.248342 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-bpf-maps\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248503 kubelet[1567]: I1213 02:02:04.248371 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-cgroup\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248503 kubelet[1567]: I1213 02:02:04.248417 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b7e7437f-400b-4969-a9d9-ca106657603f-cilium-config-path\") pod \"cilium-operator-5cc964979-fx5q5\" (UID: \"b7e7437f-400b-4969-a9d9-ca106657603f\") " pod="kube-system/cilium-operator-5cc964979-fx5q5" Dec 13 02:02:04.248754 kubelet[1567]: I1213 02:02:04.248488 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-lib-modules\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248754 kubelet[1567]: I1213 02:02:04.248540 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn689\" (UniqueName: \"kubernetes.io/projected/b7e7437f-400b-4969-a9d9-ca106657603f-kube-api-access-qn689\") pod \"cilium-operator-5cc964979-fx5q5\" (UID: \"b7e7437f-400b-4969-a9d9-ca106657603f\") " pod="kube-system/cilium-operator-5cc964979-fx5q5" Dec 13 02:02:04.248754 kubelet[1567]: I1213 02:02:04.248574 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-run\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248754 kubelet[1567]: I1213 
02:02:04.248608 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-host-proc-sys-kernel\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248754 kubelet[1567]: I1213 02:02:04.248633 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/926daac3-f06f-4f17-bac8-65da6fee3afe-clustermesh-secrets\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248931 kubelet[1567]: I1213 02:02:04.248656 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x2kl\" (UniqueName: \"kubernetes.io/projected/926daac3-f06f-4f17-bac8-65da6fee3afe-kube-api-access-9x2kl\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248931 kubelet[1567]: I1213 02:02:04.248678 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-xtables-lock\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248931 kubelet[1567]: I1213 02:02:04.248753 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-ipsec-secrets\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.248931 kubelet[1567]: I1213 02:02:04.248853 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-host-proc-sys-net\") pod \"cilium-2t6pd\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " pod="kube-system/cilium-2t6pd" Dec 13 02:02:04.367812 kubelet[1567]: E1213 02:02:04.367786 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:04.368263 env[1314]: time="2024-12-13T02:02:04.368227319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2t6pd,Uid:926daac3-f06f-4f17-bac8-65da6fee3afe,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:04.381366 env[1314]: time="2024-12-13T02:02:04.381293777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:04.381366 env[1314]: time="2024-12-13T02:02:04.381333653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:04.381366 env[1314]: time="2024-12-13T02:02:04.381345035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:04.381579 env[1314]: time="2024-12-13T02:02:04.381536745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0 pid=3114 runtime=io.containerd.runc.v2 Dec 13 02:02:04.406855 env[1314]: time="2024-12-13T02:02:04.406811096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2t6pd,Uid:926daac3-f06f-4f17-bac8-65da6fee3afe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0\"" Dec 13 02:02:04.407332 kubelet[1567]: E1213 02:02:04.407313 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:04.408973 env[1314]: time="2024-12-13T02:02:04.408951585Z" level=info msg="CreateContainer within sandbox \"e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:02:04.422569 env[1314]: time="2024-12-13T02:02:04.422518135Z" level=info msg="CreateContainer within sandbox \"e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32\"" Dec 13 02:02:04.423061 env[1314]: time="2024-12-13T02:02:04.422996546Z" level=info msg="StartContainer for \"7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32\"" Dec 13 02:02:04.444737 kubelet[1567]: E1213 02:02:04.441229 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:04.444899 env[1314]: time="2024-12-13T02:02:04.441867342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-fx5q5,Uid:b7e7437f-400b-4969-a9d9-ca106657603f,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:04.458282 env[1314]: time="2024-12-13T02:02:04.458218343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:04.458611 env[1314]: time="2024-12-13T02:02:04.458251385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:04.458611 env[1314]: time="2024-12-13T02:02:04.458260422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:04.458611 env[1314]: time="2024-12-13T02:02:04.458500444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53c794b81cbc92ee6de9f76ff0ed3e62c4c64de8469977759b3414017935e471 pid=3186 runtime=io.containerd.runc.v2 Dec 13 02:02:04.459508 env[1314]: time="2024-12-13T02:02:04.459471523Z" level=info msg="StartContainer for \"7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32\" returns successfully" Dec 13 02:02:04.496739 env[1314]: time="2024-12-13T02:02:04.496561467Z" level=info msg="shim disconnected" id=7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32 Dec 13 02:02:04.496739 env[1314]: time="2024-12-13T02:02:04.496605510Z" level=warning msg="cleaning up after shim disconnected" id=7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32 namespace=k8s.io Dec 13 02:02:04.496739 env[1314]: time="2024-12-13T02:02:04.496613264Z" level=info msg="cleaning up dead shim" Dec 13 02:02:04.503128 env[1314]: time="2024-12-13T02:02:04.503086160Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3234 runtime=io.containerd.runc.v2\n" Dec 13 02:02:04.514697 env[1314]: time="2024-12-13T02:02:04.514648526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-fx5q5,Uid:b7e7437f-400b-4969-a9d9-ca106657603f,Namespace:kube-system,Attempt:0,} returns sandbox id \"53c794b81cbc92ee6de9f76ff0ed3e62c4c64de8469977759b3414017935e471\"" Dec 13 02:02:04.515255 kubelet[1567]: E1213 02:02:04.515237 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:04.516264 env[1314]: time="2024-12-13T02:02:04.516227590Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:02:04.549835 kubelet[1567]: E1213 02:02:04.549802 1567 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:02:04.575259 kubelet[1567]: I1213 02:02:04.575211 1567 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0ad5cf1b-152b-48ea-a10e-c6ba3015e769" path="/var/lib/kubelet/pods/0ad5cf1b-152b-48ea-a10e-c6ba3015e769/volumes" Dec 13 02:02:04.694236 env[1314]: time="2024-12-13T02:02:04.694118991Z" level=info msg="StopPodSandbox for \"e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0\"" Dec 13 02:02:04.694236 env[1314]: time="2024-12-13T02:02:04.694181600Z" level=info msg="Container to stop \"7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:02:04.721674 env[1314]: time="2024-12-13T02:02:04.721618822Z" level=info msg="shim disconnected" id=e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0 Dec 13 02:02:04.721674 env[1314]: time="2024-12-13T02:02:04.721679185Z" level=warning msg="cleaning up after shim disconnected" id=e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0 namespace=k8s.io Dec 13 02:02:04.721674 env[1314]: time="2024-12-13T02:02:04.721695175Z" level=info msg="cleaning up dead shim" Dec 13 02:02:04.728482 env[1314]: time="2024-12-13T02:02:04.728423071Z" 
level=warning msg="cleanup warnings time=\"2024-12-13T02:02:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3275 runtime=io.containerd.runc.v2\n" Dec 13 02:02:04.728773 env[1314]: time="2024-12-13T02:02:04.728742162Z" level=info msg="TearDown network for sandbox \"e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0\" successfully" Dec 13 02:02:04.728822 env[1314]: time="2024-12-13T02:02:04.728773521Z" level=info msg="StopPodSandbox for \"e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0\" returns successfully" Dec 13 02:02:04.853361 kubelet[1567]: I1213 02:02:04.853323 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/926daac3-f06f-4f17-bac8-65da6fee3afe-hubble-tls\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853361 kubelet[1567]: I1213 02:02:04.853361 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-etc-cni-netd\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853361 kubelet[1567]: I1213 02:02:04.853378 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cni-path\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853590 kubelet[1567]: I1213 02:02:04.853399 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-config-path\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853590 kubelet[1567]: I1213 02:02:04.853414 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-bpf-maps\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853590 kubelet[1567]: I1213 02:02:04.853427 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-lib-modules\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853590 kubelet[1567]: I1213 02:02:04.853440 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-hostproc\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853590 kubelet[1567]: I1213 02:02:04.853457 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x2kl\" (UniqueName: \"kubernetes.io/projected/926daac3-f06f-4f17-bac8-65da6fee3afe-kube-api-access-9x2kl\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853590 kubelet[1567]: I1213 02:02:04.853472 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-cgroup\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853745 kubelet[1567]: I1213 02:02:04.853488 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-run\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853745 kubelet[1567]: I1213 02:02:04.853471 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.853745 kubelet[1567]: I1213 02:02:04.853519 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.853745 kubelet[1567]: I1213 02:02:04.853501 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-xtables-lock\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853745 kubelet[1567]: I1213 02:02:04.853544 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cni-path" (OuterVolumeSpecName: "cni-path") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.853873 kubelet[1567]: I1213 02:02:04.853572 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-ipsec-secrets\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853873 kubelet[1567]: I1213 02:02:04.853596 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-host-proc-sys-net\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853873 kubelet[1567]: I1213 02:02:04.853615 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/926daac3-f06f-4f17-bac8-65da6fee3afe-clustermesh-secrets\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853873 kubelet[1567]: I1213 02:02:04.853631 1567 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-host-proc-sys-kernel\") pod \"926daac3-f06f-4f17-bac8-65da6fee3afe\" (UID: \"926daac3-f06f-4f17-bac8-65da6fee3afe\") " Dec 13 02:02:04.853873 kubelet[1567]: I1213 02:02:04.853671 1567 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-etc-cni-netd\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.853873 kubelet[1567]: I1213 02:02:04.853682 1567 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cni-path\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.853873 kubelet[1567]: I1213 02:02:04.853691 1567 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-xtables-lock\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.854028 kubelet[1567]: I1213 02:02:04.853733 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.854055 kubelet[1567]: I1213 02:02:04.854027 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-hostproc" (OuterVolumeSpecName: "hostproc") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.854055 kubelet[1567]: I1213 02:02:04.854046 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.854103 kubelet[1567]: I1213 02:02:04.854062 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.854103 kubelet[1567]: I1213 02:02:04.854075 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.854103 kubelet[1567]: I1213 02:02:04.854086 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.854171 kubelet[1567]: I1213 02:02:04.854108 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:02:04.855664 kubelet[1567]: I1213 02:02:04.855617 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:02:04.856046 kubelet[1567]: I1213 02:02:04.856015 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926daac3-f06f-4f17-bac8-65da6fee3afe-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:02:04.856109 kubelet[1567]: I1213 02:02:04.856091 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/926daac3-f06f-4f17-bac8-65da6fee3afe-kube-api-access-9x2kl" (OuterVolumeSpecName: "kube-api-access-9x2kl") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "kube-api-access-9x2kl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:02:04.856517 kubelet[1567]: I1213 02:02:04.856487 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:02:04.857485 kubelet[1567]: I1213 02:02:04.857461 1567 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/926daac3-f06f-4f17-bac8-65da6fee3afe-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "926daac3-f06f-4f17-bac8-65da6fee3afe" (UID: "926daac3-f06f-4f17-bac8-65da6fee3afe"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:02:04.954011 kubelet[1567]: I1213 02:02:04.953871 1567 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/926daac3-f06f-4f17-bac8-65da6fee3afe-hubble-tls\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954011 kubelet[1567]: I1213 02:02:04.953912 1567 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-config-path\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954011 kubelet[1567]: I1213 02:02:04.953925 1567 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-bpf-maps\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954011 kubelet[1567]: I1213 02:02:04.953948 1567 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-lib-modules\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954011 kubelet[1567]: I1213 02:02:04.953955 1567 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-hostproc\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954011 kubelet[1567]: I1213 02:02:04.953966 1567 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-9x2kl\" (UniqueName: \"kubernetes.io/projected/926daac3-f06f-4f17-bac8-65da6fee3afe-kube-api-access-9x2kl\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954011 kubelet[1567]: I1213 02:02:04.953973 1567 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-cgroup\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954011 kubelet[1567]: I1213 02:02:04.953981 1567 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-run\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954363 kubelet[1567]: I1213 02:02:04.953988 1567 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/926daac3-f06f-4f17-bac8-65da6fee3afe-cilium-ipsec-secrets\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954363 kubelet[1567]: I1213 02:02:04.954001 1567 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-host-proc-sys-net\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954363 kubelet[1567]: I1213 02:02:04.954009 1567 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/926daac3-f06f-4f17-bac8-65da6fee3afe-clustermesh-secrets\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:04.954363 kubelet[1567]: I1213 02:02:04.954018 1567 
reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/926daac3-f06f-4f17-bac8-65da6fee3afe-host-proc-sys-kernel\") on node \"10.0.0.71\" DevicePath \"\"" Dec 13 02:02:05.092446 kubelet[1567]: E1213 02:02:05.092377 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:05.356260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0-rootfs.mount: Deactivated successfully. Dec 13 02:02:05.356389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e2a7f4051c729b81035ee0580e43b7db36b2af3529260bebff23f38a8cd48ab0-shm.mount: Deactivated successfully. Dec 13 02:02:05.356534 systemd[1]: var-lib-kubelet-pods-926daac3\x2df06f\x2d4f17\x2dbac8\x2d65da6fee3afe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9x2kl.mount: Deactivated successfully. Dec 13 02:02:05.356648 systemd[1]: var-lib-kubelet-pods-926daac3\x2df06f\x2d4f17\x2dbac8\x2d65da6fee3afe-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:02:05.356775 systemd[1]: var-lib-kubelet-pods-926daac3\x2df06f\x2d4f17\x2dbac8\x2d65da6fee3afe-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:02:05.356855 systemd[1]: var-lib-kubelet-pods-926daac3\x2df06f\x2d4f17\x2dbac8\x2d65da6fee3afe-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 02:02:05.696479 kubelet[1567]: I1213 02:02:05.696452 1567 scope.go:117] "RemoveContainer" containerID="7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32" Dec 13 02:02:05.697387 env[1314]: time="2024-12-13T02:02:05.697352321Z" level=info msg="RemoveContainer for \"7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32\"" Dec 13 02:02:05.869826 kubelet[1567]: I1213 02:02:05.869791 1567 setters.go:568] "Node became not ready" node="10.0.0.71" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:02:05Z","lastTransitionTime":"2024-12-13T02:02:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 02:02:05.916742 env[1314]: time="2024-12-13T02:02:05.916675113Z" level=info msg="RemoveContainer for \"7dad49bcd52affd6b4f0d59f70701421cd2003cf49c06b07cf6dac6c59d96d32\" returns successfully" Dec 13 02:02:06.090638 kubelet[1567]: I1213 02:02:06.090382 1567 topology_manager.go:215] "Topology Admit Handler" podUID="e4ce3b96-02f1-4726-9ebd-fb9229b15f21" podNamespace="kube-system" podName="cilium-hqzvh" Dec 13 02:02:06.090638 kubelet[1567]: E1213 02:02:06.090439 1567 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="926daac3-f06f-4f17-bac8-65da6fee3afe" containerName="mount-cgroup" Dec 13 02:02:06.090638 kubelet[1567]: I1213 02:02:06.090463 1567 memory_manager.go:354] "RemoveStaleState removing state" podUID="926daac3-f06f-4f17-bac8-65da6fee3afe" containerName="mount-cgroup" Dec 13 02:02:06.092553 kubelet[1567]: E1213 02:02:06.092521 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:06.161370 kubelet[1567]: I1213 02:02:06.161312 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-hostproc\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161370 kubelet[1567]: I1213 02:02:06.161368 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-bpf-maps\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161627 kubelet[1567]: I1213 02:02:06.161398 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-host-proc-sys-kernel\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161627 kubelet[1567]: I1213 02:02:06.161425 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mwv\" (UniqueName: \"kubernetes.io/projected/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-kube-api-access-s5mwv\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161627 kubelet[1567]: I1213 02:02:06.161450 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-host-proc-sys-net\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161627 kubelet[1567]: I1213 02:02:06.161473 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-etc-cni-netd\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161627 kubelet[1567]: I1213 02:02:06.161567 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-hubble-tls\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161874 kubelet[1567]: I1213 02:02:06.161630 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-cni-path\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161874 kubelet[1567]: I1213 02:02:06.161665 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-cilium-ipsec-secrets\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161874 kubelet[1567]: I1213 02:02:06.161681 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-cilium-run\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 
13 02:02:06.161874 kubelet[1567]: I1213 02:02:06.161753 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-cilium-cgroup\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161874 kubelet[1567]: I1213 02:02:06.161784 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-lib-modules\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.161874 kubelet[1567]: I1213 02:02:06.161806 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-xtables-lock\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.162087 kubelet[1567]: I1213 02:02:06.161849 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-cilium-config-path\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.162087 kubelet[1567]: I1213 02:02:06.161867 1567 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e4ce3b96-02f1-4726-9ebd-fb9229b15f21-clustermesh-secrets\") pod \"cilium-hqzvh\" (UID: \"e4ce3b96-02f1-4726-9ebd-fb9229b15f21\") " pod="kube-system/cilium-hqzvh" Dec 13 02:02:06.393846 kubelet[1567]: E1213 02:02:06.393796 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:06.394405 env[1314]: time="2024-12-13T02:02:06.394357204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqzvh,Uid:e4ce3b96-02f1-4726-9ebd-fb9229b15f21,Namespace:kube-system,Attempt:0,}" Dec 13 02:02:06.407447 env[1314]: time="2024-12-13T02:02:06.407380591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:02:06.407447 env[1314]: time="2024-12-13T02:02:06.407417550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:02:06.407447 env[1314]: time="2024-12-13T02:02:06.407426988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:02:06.407670 env[1314]: time="2024-12-13T02:02:06.407617025Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767 pid=3305 runtime=io.containerd.runc.v2 Dec 13 02:02:06.439074 env[1314]: time="2024-12-13T02:02:06.439033912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hqzvh,Uid:e4ce3b96-02f1-4726-9ebd-fb9229b15f21,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\"" Dec 13 02:02:06.439745 kubelet[1567]: E1213 02:02:06.439702 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:06.441442 env[1314]: time="2024-12-13T02:02:06.441415083Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:02:06.454312 env[1314]: time="2024-12-13T02:02:06.454258560Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f02c7e6cf5b64842157249f4a8f5d7c53732bbbeba70477b08f03350208caee3\"" Dec 13 02:02:06.455101 env[1314]: time="2024-12-13T02:02:06.455077341Z" level=info msg="StartContainer for \"f02c7e6cf5b64842157249f4a8f5d7c53732bbbeba70477b08f03350208caee3\"" Dec 13 02:02:06.494462 env[1314]: time="2024-12-13T02:02:06.494395465Z" level=info msg="StartContainer for \"f02c7e6cf5b64842157249f4a8f5d7c53732bbbeba70477b08f03350208caee3\" returns successfully" Dec 13 02:02:06.523691 env[1314]: time="2024-12-13T02:02:06.523634865Z" level=info msg="shim disconnected" id=f02c7e6cf5b64842157249f4a8f5d7c53732bbbeba70477b08f03350208caee3 Dec 13 02:02:06.523691 env[1314]: time="2024-12-13T02:02:06.523684829Z" level=warning msg="cleaning up after shim disconnected" id=f02c7e6cf5b64842157249f4a8f5d7c53732bbbeba70477b08f03350208caee3 namespace=k8s.io Dec 13 02:02:06.523691 env[1314]: time="2024-12-13T02:02:06.523696030Z" level=info msg="cleaning up dead shim" Dec 13 02:02:06.530414 env[1314]: time="2024-12-13T02:02:06.530371942Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3387 runtime=io.containerd.runc.v2\n" Dec 13 02:02:06.574986 kubelet[1567]: I1213 02:02:06.574955 1567 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="926daac3-f06f-4f17-bac8-65da6fee3afe" path="/var/lib/kubelet/pods/926daac3-f06f-4f17-bac8-65da6fee3afe/volumes" Dec 13 02:02:06.700097 kubelet[1567]: E1213 02:02:06.699978 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:06.702062 env[1314]: time="2024-12-13T02:02:06.702028027Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:02:06.733765 env[1314]: time="2024-12-13T02:02:06.733682671Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4da7fb8f6ad9803e521b0a32ea2adaa81934dae3be657d601dbdca63942e5885\"" Dec 13 02:02:06.734361 env[1314]: time="2024-12-13T02:02:06.734295965Z" level=info msg="StartContainer for \"4da7fb8f6ad9803e521b0a32ea2adaa81934dae3be657d601dbdca63942e5885\"" Dec 13 02:02:06.803399 env[1314]: time="2024-12-13T02:02:06.803302033Z" level=info msg="StartContainer for \"4da7fb8f6ad9803e521b0a32ea2adaa81934dae3be657d601dbdca63942e5885\" returns successfully" Dec 13 02:02:06.912364 env[1314]: time="2024-12-13T02:02:06.912284413Z" level=info msg="shim disconnected" id=4da7fb8f6ad9803e521b0a32ea2adaa81934dae3be657d601dbdca63942e5885 Dec 13 02:02:06.912364 env[1314]: time="2024-12-13T02:02:06.912335871Z" level=warning msg="cleaning up after shim disconnected" id=4da7fb8f6ad9803e521b0a32ea2adaa81934dae3be657d601dbdca63942e5885 namespace=k8s.io Dec 13 02:02:06.912364 env[1314]: time="2024-12-13T02:02:06.912344888Z" level=info msg="cleaning up dead shim" Dec 13 02:02:06.919295 env[1314]: time="2024-12-13T02:02:06.919245202Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3450 runtime=io.containerd.runc.v2\n" Dec 13 02:02:07.093085 kubelet[1567]: E1213 02:02:07.092940 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:07.704072 kubelet[1567]: E1213 02:02:07.704032 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:07.705520 env[1314]: time="2024-12-13T02:02:07.705487441Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:02:07.720909 env[1314]: time="2024-12-13T02:02:07.720855714Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"459c4253c65bed5a4d6a070e04b218d1fcd750508d3905ce2dc0ee6f0c0ba188\"" Dec 13 02:02:07.721415 env[1314]: time="2024-12-13T02:02:07.721378978Z" level=info msg="StartContainer for \"459c4253c65bed5a4d6a070e04b218d1fcd750508d3905ce2dc0ee6f0c0ba188\"" Dec 13 02:02:07.760888 env[1314]: time="2024-12-13T02:02:07.760841529Z" level=info msg="StartContainer for \"459c4253c65bed5a4d6a070e04b218d1fcd750508d3905ce2dc0ee6f0c0ba188\" returns successfully" Dec 13 02:02:07.779538 env[1314]: time="2024-12-13T02:02:07.779482228Z" level=info msg="shim disconnected" id=459c4253c65bed5a4d6a070e04b218d1fcd750508d3905ce2dc0ee6f0c0ba188 Dec 13 02:02:07.779538 env[1314]: time="2024-12-13T02:02:07.779534247Z" level=warning msg="cleaning up after shim disconnected" id=459c4253c65bed5a4d6a070e04b218d1fcd750508d3905ce2dc0ee6f0c0ba188 namespace=k8s.io Dec 13 02:02:07.779538 env[1314]: time="2024-12-13T02:02:07.779542412Z" level=info msg="cleaning up dead shim" Dec 13 02:02:07.785914 env[1314]: time="2024-12-13T02:02:07.785878122Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3506 runtime=io.containerd.runc.v2\n" Dec 13 02:02:08.094103 kubelet[1567]: E1213 02:02:08.093975 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 02:02:08.403736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-459c4253c65bed5a4d6a070e04b218d1fcd750508d3905ce2dc0ee6f0c0ba188-rootfs.mount: Deactivated successfully. Dec 13 02:02:08.707445 kubelet[1567]: E1213 02:02:08.707337 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:08.709035 env[1314]: time="2024-12-13T02:02:08.708994917Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:02:08.722874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2549959594.mount: Deactivated successfully. Dec 13 02:02:08.723861 env[1314]: time="2024-12-13T02:02:08.723825984Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36c3872a0b99c8b2757474778d746da8a3854aa86f9ec966b42213eb3ec3157f\"" Dec 13 02:02:08.724228 env[1314]: time="2024-12-13T02:02:08.724202963Z" level=info msg="StartContainer for \"36c3872a0b99c8b2757474778d746da8a3854aa86f9ec966b42213eb3ec3157f\"" Dec 13 02:02:08.762936 env[1314]: time="2024-12-13T02:02:08.762893427Z" level=info msg="StartContainer for \"36c3872a0b99c8b2757474778d746da8a3854aa86f9ec966b42213eb3ec3157f\" returns successfully" Dec 13 02:02:08.781463 env[1314]: time="2024-12-13T02:02:08.781412479Z" level=info msg="shim disconnected" id=36c3872a0b99c8b2757474778d746da8a3854aa86f9ec966b42213eb3ec3157f Dec 13 02:02:08.781463 env[1314]: time="2024-12-13T02:02:08.781463615Z" level=warning msg="cleaning up after shim disconnected" id=36c3872a0b99c8b2757474778d746da8a3854aa86f9ec966b42213eb3ec3157f namespace=k8s.io Dec 13 02:02:08.781697 env[1314]: time="2024-12-13T02:02:08.781478132Z" level=info msg="cleaning up dead shim" Dec 13 02:02:08.787839 env[1314]: time="2024-12-13T02:02:08.787780578Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:02:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3565 runtime=io.containerd.runc.v2\n" Dec 13 02:02:09.095042 kubelet[1567]: E1213 02:02:09.094913 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:09.404444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-36c3872a0b99c8b2757474778d746da8a3854aa86f9ec966b42213eb3ec3157f-rootfs.mount: Deactivated successfully. Dec 13 02:02:09.539818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722658863.mount: Deactivated successfully. 
Dec 13 02:02:09.550702 kubelet[1567]: E1213 02:02:09.550641 1567 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:02:09.711733 kubelet[1567]: E1213 02:02:09.711356 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:09.713254 env[1314]: time="2024-12-13T02:02:09.713219958Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:02:09.728833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995451882.mount: Deactivated successfully. Dec 13 02:02:09.731960 env[1314]: time="2024-12-13T02:02:09.731913883Z" level=info msg="CreateContainer within sandbox \"d0a645d79b2beb5701dc9c15f537e48257f2fd598c6a4da74057aac0e883b767\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"713d9436ed59d18fe183c306b7ef1bd1bef06eb36bfc2f26fecafcbcab43ec46\"" Dec 13 02:02:09.732440 env[1314]: time="2024-12-13T02:02:09.732414584Z" level=info msg="StartContainer for \"713d9436ed59d18fe183c306b7ef1bd1bef06eb36bfc2f26fecafcbcab43ec46\"" Dec 13 02:02:09.778272 env[1314]: time="2024-12-13T02:02:09.778194768Z" level=info msg="StartContainer for \"713d9436ed59d18fe183c306b7ef1bd1bef06eb36bfc2f26fecafcbcab43ec46\" returns successfully" Dec 13 02:02:10.018741 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 02:02:10.097202 kubelet[1567]: E1213 02:02:10.095781 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:10.716696 kubelet[1567]: E1213 02:02:10.716651 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:10.733841 kubelet[1567]: I1213 02:02:10.733790 1567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-hqzvh" podStartSLOduration=5.7337426 podStartE2EDuration="5.7337426s" podCreationTimestamp="2024-12-13 02:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:02:10.732957033 +0000 UTC m=+67.106226117" watchObservedRunningTime="2024-12-13 02:02:10.7337426 +0000 UTC m=+67.107011674" Dec 13 02:02:11.096377 kubelet[1567]: E1213 02:02:11.095901 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:12.096306 kubelet[1567]: E1213 02:02:12.096248 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:12.126901 env[1314]: time="2024-12-13T02:02:12.126834617Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:02:12.129009 env[1314]: time="2024-12-13T02:02:12.128963018Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 02:02:12.130392 env[1314]: time="2024-12-13T02:02:12.130363380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:02:12.130849 env[1314]: time="2024-12-13T02:02:12.130822433Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:02:12.132670 env[1314]: time="2024-12-13T02:02:12.132638576Z" level=info msg="CreateContainer within sandbox \"53c794b81cbc92ee6de9f76ff0ed3e62c4c64de8469977759b3414017935e471\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:02:12.144103 env[1314]: time="2024-12-13T02:02:12.144032884Z" level=info msg="CreateContainer within sandbox \"53c794b81cbc92ee6de9f76ff0ed3e62c4c64de8469977759b3414017935e471\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4c5325035f93af3aa12637f9c89bb27b749837b2326cfd02d47b69a98ab67a46\"" Dec 13 02:02:12.144855 env[1314]: time="2024-12-13T02:02:12.144826705Z" level=info msg="StartContainer for \"4c5325035f93af3aa12637f9c89bb27b749837b2326cfd02d47b69a98ab67a46\"" Dec 13 02:02:12.161523 systemd[1]: run-containerd-runc-k8s.io-4c5325035f93af3aa12637f9c89bb27b749837b2326cfd02d47b69a98ab67a46-runc.Mcl5Rf.mount: Deactivated successfully. Dec 13 02:02:12.194688 env[1314]: time="2024-12-13T02:02:12.194587769Z" level=info msg="StartContainer for \"4c5325035f93af3aa12637f9c89bb27b749837b2326cfd02d47b69a98ab67a46\" returns successfully" Dec 13 02:02:12.395148 kubelet[1567]: E1213 02:02:12.395105 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:12.722576 kubelet[1567]: E1213 02:02:12.722460 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:12.733768 kubelet[1567]: I1213 02:02:12.733731 1567 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-fx5q5" podStartSLOduration=1.118467294 podStartE2EDuration="8.73368103s" podCreationTimestamp="2024-12-13 02:02:04 +0000 UTC" firstStartedPulling="2024-12-13 02:02:04.515948635 +0000 UTC m=+60.889217709" lastFinishedPulling="2024-12-13 02:02:12.131162371 +0000 UTC m=+68.504431445" observedRunningTime="2024-12-13 02:02:12.731813439 +0000 UTC m=+69.105082513" watchObservedRunningTime="2024-12-13 02:02:12.73368103 +0000 UTC m=+69.106950104" Dec 13 02:02:12.769828 systemd-networkd[1092]: lxc_health: Link UP Dec 13 02:02:12.783191 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:02:12.782620 systemd-networkd[1092]: lxc_health: Gained carrier Dec 13 02:02:13.096891 kubelet[1567]: E1213 02:02:13.096755 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:13.724646 kubelet[1567]: E1213 02:02:13.724607 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
02:02:14.097243 kubelet[1567]: E1213 02:02:14.097101 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:14.293874 systemd-networkd[1092]: lxc_health: Gained IPv6LL Dec 13 02:02:14.396611 kubelet[1567]: E1213 02:02:14.396060 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:14.727048 kubelet[1567]: E1213 02:02:14.726924 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:15.097760 kubelet[1567]: E1213 02:02:15.097601 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:15.730054 kubelet[1567]: E1213 02:02:15.729989 1567 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:02:16.098241 kubelet[1567]: E1213 02:02:16.098066 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:17.098835 kubelet[1567]: E1213 02:02:17.098769 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:18.099530 kubelet[1567]: E1213 02:02:18.099442 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:19.100183 kubelet[1567]: E1213 02:02:19.100123 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:20.100737 kubelet[1567]: E1213 02:02:20.100667 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 02:02:21.101579 kubelet[1567]: E1213 02:02:21.101441 1567 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
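The pod_startup_latency_tracker entries above can be cross-checked against the timestamps they carry: podStartE2EDuration is the watch-observed running time minus the pod creation timestamp, and podStartSLOduration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling). That relationship is inferred from the logged numbers, not quoted from the kubelet source. A short Python check, assuming the values logged for cilium-operator-5cc964979-fx5q5 truncated to microseconds:

from datetime import datetime, timezone

# Values copied from the kubelet pod_startup_latency_tracker entry above.
created            = datetime(2024, 12, 13, 2, 2, 4, tzinfo=timezone.utc)
first_started_pull = datetime(2024, 12, 13, 2, 2, 4, 515948, tzinfo=timezone.utc)
last_finished_pull = datetime(2024, 12, 13, 2, 2, 12, 131162, tzinfo=timezone.utc)
observed_running   = datetime(2024, 12, 13, 2, 2, 12, 733681, tzinfo=timezone.utc)

e2e  = (observed_running - created).total_seconds()
pull = (last_finished_pull - first_started_pull).total_seconds()
slo  = e2e - pull

print(f"podStartE2EDuration ~ {e2e:.6f}s")   # ~8.733681s; log reports 8.73368103s
print(f"podStartSLOduration ~ {slo:.6f}s")   # ~1.118467s; log reports 1.118467294s

For cilium-hqzvh the pull timestamps are the zero time (no image pull was needed), so both durations collapse to the same 5.7337426s reported in the log.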