Dec 13 14:26:15.113925 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:26:15.113951 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:26:15.113971 kernel: BIOS-provided physical RAM map: Dec 13 14:26:15.113979 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 14:26:15.113986 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 14:26:15.113993 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 14:26:15.114002 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 14:26:15.114010 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 14:26:15.114021 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 14:26:15.114027 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 14:26:15.114033 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 14:26:15.114039 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 14:26:15.114044 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 14:26:15.114050 kernel: NX (Execute Disable) protection: active Dec 13 14:26:15.114060 kernel: SMBIOS 2.8 present. Dec 13 14:26:15.114066 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 14:26:15.114072 kernel: Hypervisor detected: KVM Dec 13 14:26:15.114078 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:26:15.114084 kernel: kvm-clock: cpu 0, msr 7119a001, primary cpu clock Dec 13 14:26:15.114090 kernel: kvm-clock: using sched offset of 3252517552 cycles Dec 13 14:26:15.114097 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:26:15.114107 kernel: tsc: Detected 2794.748 MHz processor Dec 13 14:26:15.114113 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:26:15.114122 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:26:15.114128 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 14:26:15.114134 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:26:15.114141 kernel: Using GB pages for direct mapping Dec 13 14:26:15.114147 kernel: ACPI: Early table checksum verification disabled Dec 13 14:26:15.114153 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 14:26:15.114160 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:26:15.114166 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:26:15.114172 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:26:15.114181 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 14:26:15.114187 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:26:15.114193 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:26:15.114200 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:26:15.114206 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:26:15.114212 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 14:26:15.114218 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 14:26:15.114225 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 14:26:15.114236 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 14:26:15.114242 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 14:26:15.114249 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 14:26:15.114255 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 14:26:15.114262 kernel: No NUMA configuration found Dec 13 14:26:15.114269 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 14:26:15.114277 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 14:26:15.114283 kernel: Zone ranges: Dec 13 14:26:15.114290 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:26:15.114297 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 14:26:15.114303 kernel: Normal empty Dec 13 14:26:15.114310 kernel: Movable zone start for each node Dec 13 14:26:15.114316 kernel: Early memory node ranges Dec 13 14:26:15.114323 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 14:26:15.114329 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 14:26:15.114336 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 14:26:15.114347 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:26:15.114358 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 14:26:15.114375 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 14:26:15.114402 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 14:26:15.114410 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:26:15.114417 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:26:15.114423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 14:26:15.114430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:26:15.114437 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:26:15.114446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:26:15.114453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:26:15.114459 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:26:15.114469 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 14:26:15.114476 kernel: TSC deadline timer available Dec 13 14:26:15.114483 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 14:26:15.114489 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 14:26:15.114496 kernel: kvm-guest: setup PV sched yield Dec 13 14:26:15.114503 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 14:26:15.114511 kernel: Booting paravirtualized kernel on KVM Dec 13 14:26:15.114518 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:26:15.114525 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 14:26:15.114532 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Dec 13 14:26:15.114538 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 14:26:15.114545 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 14:26:15.114551 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 14:26:15.114558 kernel: kvm-guest: stealtime: cpu 0, msr 9a41c0c0 Dec 13 14:26:15.114565 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:26:15.114582 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:26:15.114599 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Dec 13 14:26:15.114608 kernel: Policy zone: DMA32 Dec 13 14:26:15.114619 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:26:15.114627 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:26:15.114633 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:26:15.114640 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:26:15.114647 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:26:15.114656 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 134796K reserved, 0K cma-reserved) Dec 13 14:26:15.114663 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 14:26:15.114670 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:26:15.114677 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:26:15.114683 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:26:15.114691 kernel: rcu: RCU event tracing is enabled. Dec 13 14:26:15.114698 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 14:26:15.114704 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:26:15.114711 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:26:15.114720 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:26:15.114727 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 14:26:15.114733 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 14:26:15.114740 kernel: random: crng init done Dec 13 14:26:15.114746 kernel: Console: colour VGA+ 80x25 Dec 13 14:26:15.114753 kernel: printk: console [ttyS0] enabled Dec 13 14:26:15.114759 kernel: ACPI: Core revision 20210730 Dec 13 14:26:15.114766 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 14:26:15.114774 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:26:15.114785 kernel: x2apic enabled Dec 13 14:26:15.114794 kernel: Switched APIC routing to physical x2apic. Dec 13 14:26:15.114802 kernel: kvm-guest: setup PV IPIs Dec 13 14:26:15.114810 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 14:26:15.114818 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 14:26:15.114836 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 14:26:15.114843 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 14:26:15.114850 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 14:26:15.114857 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 14:26:15.114871 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:26:15.114878 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:26:15.114885 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:26:15.114894 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:26:15.114901 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 14:26:15.114907 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 14:26:15.114914 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:26:15.114921 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 14:26:15.114929 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:26:15.114937 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:26:15.114944 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:26:15.114951 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:26:15.114976 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:26:15.114983 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:26:15.114990 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:26:15.114997 kernel: LSM: Security Framework initializing Dec 13 14:26:15.115006 kernel: SELinux: Initializing. Dec 13 14:26:15.115013 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:26:15.115020 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:26:15.115027 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 14:26:15.115034 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 14:26:15.115041 kernel: ... version: 0 Dec 13 14:26:15.115048 kernel: ... bit width: 48 Dec 13 14:26:15.115055 kernel: ... generic registers: 6 Dec 13 14:26:15.115062 kernel: ... value mask: 0000ffffffffffff Dec 13 14:26:15.115070 kernel: ... max period: 00007fffffffffff Dec 13 14:26:15.115077 kernel: ... fixed-purpose events: 0 Dec 13 14:26:15.115084 kernel: ... event mask: 000000000000003f Dec 13 14:26:15.115091 kernel: signal: max sigframe size: 1776 Dec 13 14:26:15.115098 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:26:15.115105 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:26:15.115112 kernel: x86: Booting SMP configuration: Dec 13 14:26:15.115119 kernel: .... 
node #0, CPUs: #1 Dec 13 14:26:15.115126 kernel: kvm-clock: cpu 1, msr 7119a041, secondary cpu clock Dec 13 14:26:15.115133 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 14:26:15.115141 kernel: kvm-guest: stealtime: cpu 1, msr 9a49c0c0 Dec 13 14:26:15.115148 kernel: #2 Dec 13 14:26:15.115155 kernel: kvm-clock: cpu 2, msr 7119a081, secondary cpu clock Dec 13 14:26:15.115162 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 14:26:15.115169 kernel: kvm-guest: stealtime: cpu 2, msr 9a51c0c0 Dec 13 14:26:15.115176 kernel: #3 Dec 13 14:26:15.115182 kernel: kvm-clock: cpu 3, msr 7119a0c1, secondary cpu clock Dec 13 14:26:15.115189 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 14:26:15.115200 kernel: kvm-guest: stealtime: cpu 3, msr 9a59c0c0 Dec 13 14:26:15.115212 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 14:26:15.115221 kernel: smpboot: Max logical packages: 1 Dec 13 14:26:15.115230 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 14:26:15.115237 kernel: devtmpfs: initialized Dec 13 14:26:15.115244 kernel: x86/mm: Memory block size: 128MB Dec 13 14:26:15.115251 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:26:15.115258 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 14:26:15.115265 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:26:15.115272 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:26:15.115281 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:26:15.115289 kernel: audit: type=2000 audit(1734099974.412:1): state=initialized audit_enabled=0 res=1 Dec 13 14:26:15.115296 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:26:15.115303 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:26:15.115310 kernel: cpuidle: using governor menu Dec 13 14:26:15.115316 kernel: ACPI: bus type PCI registered Dec 13 14:26:15.115324 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:26:15.115330 kernel: dca service started, version 1.12.1 Dec 13 14:26:15.115338 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 14:26:15.115347 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 14:26:15.115354 kernel: PCI: Using configuration type 1 for base access Dec 13 14:26:15.115361 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:26:15.115368 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:26:15.115375 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:26:15.115382 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:26:15.115389 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:26:15.115396 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:26:15.115403 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:26:15.115411 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:26:15.115418 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:26:15.115425 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:26:15.115434 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:26:15.115443 kernel: ACPI: Interpreter enabled Dec 13 14:26:15.115452 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:26:15.115461 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:26:15.115468 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:26:15.115475 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 14:26:15.115484 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:26:15.115660 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:26:15.115743 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 14:26:15.115823 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 14:26:15.115841 kernel: PCI host bridge to bus 0000:00 Dec 13 14:26:15.115932 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:26:15.116080 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:26:15.116158 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:26:15.116227 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 14:26:15.116295 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 14:26:15.116377 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 14:26:15.116454 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:26:15.116558 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 14:26:15.116655 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 14:26:15.116750 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 14:26:15.116870 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 14:26:15.116976 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 14:26:15.117088 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:26:15.117280 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:26:15.117376 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 14:26:15.122891 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 14:26:15.123054 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 14:26:15.123150 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 14:26:15.123228 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 14:26:15.123304 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 14:26:15.123378 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 14:26:15.123467 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:26:15.123548 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 14:26:15.123622 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 14:26:15.123696 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 14:26:15.123775 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 14:26:15.123873 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 14:26:15.123950 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 14:26:15.124056 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 14:26:15.124136 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 14:26:15.124211 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 14:26:15.124294 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 14:26:15.124369 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 14:26:15.124378 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:26:15.124386 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:26:15.124393 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:26:15.124403 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:26:15.124410 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 14:26:15.124417 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 14:26:15.124424 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 14:26:15.124431 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 14:26:15.124438 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 14:26:15.124445 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 14:26:15.124452 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 14:26:15.124459 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 14:26:15.124468 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 14:26:15.124475 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 14:26:15.124482 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 14:26:15.124489 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 14:26:15.124496 kernel: iommu: Default domain type: Translated Dec 13 14:26:15.124503 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:26:15.124578 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 14:26:15.124653 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:26:15.124728 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 14:26:15.124740 kernel: vgaarb: loaded Dec 13 14:26:15.124747 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:26:15.124755 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:26:15.124762 kernel: PTP clock support registered Dec 13 14:26:15.124769 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:26:15.124776 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:26:15.124783 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 14:26:15.124790 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 14:26:15.124798 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 14:26:15.124805 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 14:26:15.124813 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:26:15.124820 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:26:15.124836 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:26:15.124843 kernel: pnp: PnP ACPI init Dec 13 14:26:15.124941 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 14:26:15.124952 kernel: pnp: PnP ACPI: found 6 devices Dec 13 14:26:15.124972 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:26:15.124982 kernel: NET: Registered PF_INET protocol family Dec 13 14:26:15.124989 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:26:15.124997 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:26:15.125004 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:26:15.125011 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:26:15.125018 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:26:15.125025 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:26:15.125033 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:26:15.125041 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:26:15.125048 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:26:15.125055 kernel: NET: Registered PF_XDP protocol family Dec 13 14:26:15.125128 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:26:15.125195 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:26:15.125260 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:26:15.125328 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 14:26:15.125394 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 14:26:15.125460 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 14:26:15.125471 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:26:15.125479 kernel: Initialise system trusted keyrings Dec 13 14:26:15.125486 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 14:26:15.125493 kernel: Key type asymmetric registered Dec 13 14:26:15.125500 kernel: Asymmetric key parser 'x509' registered Dec 13 14:26:15.125507 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:26:15.125514 kernel: io scheduler mq-deadline registered Dec 13 14:26:15.125521 kernel: io scheduler kyber registered Dec 13 14:26:15.125529 kernel: io scheduler bfq registered Dec 13 14:26:15.125539 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:26:15.125548 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 14:26:15.125556 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 
14:26:15.125564 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:26:15.125571 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:26:15.125578 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:26:15.125585 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:26:15.125592 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:26:15.125599 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:26:15.125608 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:26:15.125707 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 14:26:15.125778 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 14:26:15.125855 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:26:14 UTC (1734099974) Dec 13 14:26:15.125925 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 14:26:15.125934 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:26:15.125941 kernel: Segment Routing with IPv6 Dec 13 14:26:15.125948 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:26:15.125969 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:26:15.125977 kernel: Key type dns_resolver registered Dec 13 14:26:15.125984 kernel: IPI shorthand broadcast: enabled Dec 13 14:26:15.125991 kernel: sched_clock: Marking stable (480056530, 102126886)->(602073343, -19889927) Dec 13 14:26:15.125998 kernel: registered taskstats version 1 Dec 13 14:26:15.126005 kernel: Loading compiled-in X.509 certificates Dec 13 14:26:15.126013 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:26:15.126020 kernel: Key type .fscrypt registered Dec 13 14:26:15.126027 kernel: Key type fscrypt-provisioning registered Dec 13 14:26:15.126036 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 14:26:15.126044 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:26:15.126051 kernel: ima: No architecture policies found Dec 13 14:26:15.126058 kernel: clk: Disabling unused clocks Dec 13 14:26:15.126065 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:26:15.126072 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:26:15.126079 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:26:15.126086 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:26:15.126095 kernel: Run /init as init process Dec 13 14:26:15.126103 kernel: with arguments: Dec 13 14:26:15.126110 kernel: /init Dec 13 14:26:15.126117 kernel: with environment: Dec 13 14:26:15.126124 kernel: HOME=/ Dec 13 14:26:15.126131 kernel: TERM=linux Dec 13 14:26:15.126138 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:26:15.126147 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:26:15.126157 systemd[1]: Detected virtualization kvm. Dec 13 14:26:15.126166 systemd[1]: Detected architecture x86-64. Dec 13 14:26:15.126174 systemd[1]: Running in initrd. Dec 13 14:26:15.126181 systemd[1]: No hostname configured, using default hostname. Dec 13 14:26:15.126189 systemd[1]: Hostname set to <localhost>. 
Dec 13 14:26:15.126197 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:26:15.126205 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:26:15.126212 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:26:15.126220 systemd[1]: Reached target cryptsetup.target. Dec 13 14:26:15.126229 systemd[1]: Reached target paths.target. Dec 13 14:26:15.126244 systemd[1]: Reached target slices.target. Dec 13 14:26:15.126253 systemd[1]: Reached target swap.target. Dec 13 14:26:15.126261 systemd[1]: Reached target timers.target. Dec 13 14:26:15.126269 systemd[1]: Listening on iscsid.socket. Dec 13 14:26:15.126279 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:26:15.126287 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:26:15.126295 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:26:15.126303 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:26:15.126311 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:26:15.126319 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:26:15.126327 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:26:15.126334 systemd[1]: Reached target sockets.target. Dec 13 14:26:15.126342 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:26:15.126352 systemd[1]: Finished network-cleanup.service. Dec 13 14:26:15.126360 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:26:15.126367 systemd[1]: Starting systemd-journald.service... Dec 13 14:26:15.126375 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:26:15.126384 systemd[1]: Starting systemd-resolved.service... Dec 13 14:26:15.126392 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:26:15.126400 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:26:15.126408 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 14:26:15.126416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:26:15.126427 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:26:15.126439 systemd-journald[196]: Journal started Dec 13 14:26:15.126482 systemd-journald[196]: Runtime Journal (/run/log/journal/ded33b7e4903411f832940399f8117c4) is 6.0M, max 48.5M, 42.5M free. Dec 13 14:26:15.114623 systemd-modules-load[197]: Inserted module 'overlay' Dec 13 14:26:15.162459 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 14:26:15.162492 kernel: Bridge firewalling registered Dec 13 14:26:15.162521 kernel: audit: type=1130 audit(1734099975.155:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.162550 systemd[1]: Started systemd-journald.service. Dec 13 14:26:15.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.143235 systemd-resolved[198]: Positive Trust Anchors: Dec 13 14:26:15.171151 kernel: audit: type=1130 audit(1734099975.162:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:26:15.171174 kernel: audit: type=1130 audit(1734099975.167:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.143254 systemd-resolved[198]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:26:15.143307 systemd-resolved[198]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:26:15.179570 kernel: SCSI subsystem initialized Dec 13 14:26:15.145927 systemd-resolved[198]: Defaulting to hostname 'linux'. Dec 13 14:26:15.149384 systemd-modules-load[197]: Inserted module 'br_netfilter' Dec 13 14:26:15.163548 systemd[1]: Started systemd-resolved.service. Dec 13 14:26:15.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.167540 systemd[1]: Reached target nss-lookup.target. Dec 13 14:26:15.187751 kernel: audit: type=1130 audit(1734099975.181:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.180914 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:26:15.183330 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:26:15.192995 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:26:15.193023 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:26:15.194991 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:26:15.197623 systemd-modules-load[197]: Inserted module 'dm_multipath' Dec 13 14:26:15.198727 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:26:15.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.203397 kernel: audit: type=1130 audit(1734099975.198:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.203023 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:26:15.206191 systemd[1]: Finished dracut-cmdline-ask.service. 
Dec 13 14:26:15.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.208237 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:26:15.211780 kernel: audit: type=1130 audit(1734099975.206:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.214241 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:26:15.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.218983 kernel: audit: type=1130 audit(1734099975.214:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.219177 dracut-cmdline[222]: dracut-dracut-053 Dec 13 14:26:15.221576 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:26:15.284989 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:26:15.304013 kernel: iscsi: registered transport (tcp) Dec 13 14:26:15.326996 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:26:15.327077 kernel: QLogic iSCSI HBA Driver Dec 13 14:26:15.358241 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:26:15.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.361255 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:26:15.364859 kernel: audit: type=1130 audit(1734099975.359:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.411005 kernel: raid6: avx2x4 gen() 28293 MB/s Dec 13 14:26:15.428013 kernel: raid6: avx2x4 xor() 6249 MB/s Dec 13 14:26:15.444991 kernel: raid6: avx2x2 gen() 23456 MB/s Dec 13 14:26:15.461987 kernel: raid6: avx2x2 xor() 17785 MB/s Dec 13 14:26:15.478994 kernel: raid6: avx2x1 gen() 25489 MB/s Dec 13 14:26:15.495996 kernel: raid6: avx2x1 xor() 12816 MB/s Dec 13 14:26:15.512995 kernel: raid6: sse2x4 gen() 11819 MB/s Dec 13 14:26:15.530017 kernel: raid6: sse2x4 xor() 5491 MB/s Dec 13 14:26:15.547011 kernel: raid6: sse2x2 gen() 15193 MB/s Dec 13 14:26:15.564014 kernel: raid6: sse2x2 xor() 8274 MB/s Dec 13 14:26:15.581012 kernel: raid6: sse2x1 gen() 11342 MB/s Dec 13 14:26:15.598443 kernel: raid6: sse2x1 xor() 7656 MB/s Dec 13 14:26:15.598506 kernel: raid6: using algorithm avx2x4 gen() 28293 MB/s Dec 13 14:26:15.598520 kernel: raid6: .... 
xor() 6249 MB/s, rmw enabled Dec 13 14:26:15.599158 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:26:15.611997 kernel: xor: automatically using best checksumming function avx Dec 13 14:26:15.706028 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:26:15.715155 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:26:15.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.718000 audit: BPF prog-id=7 op=LOAD Dec 13 14:26:15.718000 audit: BPF prog-id=8 op=LOAD Dec 13 14:26:15.719984 kernel: audit: type=1130 audit(1734099975.715:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.720287 systemd[1]: Starting systemd-udevd.service... Dec 13 14:26:15.734789 systemd-udevd[402]: Using default interface naming scheme 'v252'. Dec 13 14:26:15.739146 systemd[1]: Started systemd-udevd.service. Dec 13 14:26:15.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.741334 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:26:15.755447 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Dec 13 14:26:15.784953 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:26:15.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.786742 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:26:15.831727 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:26:15.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:15.868641 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:26:15.891020 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:26:15.891060 kernel: GPT:9289727 != 19775487 Dec 13 14:26:15.891070 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:26:15.891079 kernel: GPT:9289727 != 19775487 Dec 13 14:26:15.891092 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:26:15.891101 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:26:15.891111 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:26:15.898992 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:26:15.899017 kernel: AES CTR mode by8 optimization enabled Dec 13 14:26:15.903985 kernel: libata version 3.00 loaded. 
Dec 13 14:26:15.928992 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (443) Dec 13 14:26:15.938685 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 14:26:15.979854 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 14:26:15.979883 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 14:26:15.980042 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 14:26:15.980159 kernel: scsi host0: ahci Dec 13 14:26:15.980323 kernel: scsi host1: ahci Dec 13 14:26:15.980495 kernel: scsi host2: ahci Dec 13 14:26:15.980621 kernel: scsi host3: ahci Dec 13 14:26:15.980852 kernel: scsi host4: ahci Dec 13 14:26:15.981025 kernel: scsi host5: ahci Dec 13 14:26:15.981176 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 14:26:15.981191 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 14:26:15.981219 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 14:26:15.981233 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 14:26:15.981245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:26:15.981258 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 14:26:15.981271 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 14:26:15.981282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:26:15.941865 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:26:15.943460 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:26:15.986581 disk-uuid[523]: Primary Header is updated. Dec 13 14:26:15.986581 disk-uuid[523]: Secondary Entries is updated. Dec 13 14:26:15.986581 disk-uuid[523]: Secondary Header is updated. Dec 13 14:26:15.948764 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:26:15.953374 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:26:15.957692 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:26:15.959720 systemd[1]: Starting disk-uuid.service... Dec 13 14:26:16.290266 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 14:26:16.290354 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 14:26:16.290367 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 14:26:16.290379 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 14:26:16.292000 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 14:26:16.293002 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 14:26:16.293993 kernel: ata3.00: applying bridge limits Dec 13 14:26:16.295006 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 14:26:16.295068 kernel: ata3.00: configured for UDMA/100 Dec 13 14:26:16.298000 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 14:26:16.329270 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 14:26:16.347072 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:26:16.347085 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 14:26:16.982359 disk-uuid[536]: The operation has completed successfully. Dec 13 14:26:16.983574 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:26:17.008580 systemd[1]: disk-uuid.service: Deactivated successfully. 
Dec 13 14:26:17.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.008670 systemd[1]: Finished disk-uuid.service. Dec 13 14:26:17.015011 systemd[1]: Starting verity-setup.service... Dec 13 14:26:17.028014 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 14:26:17.048194 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:26:17.050094 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:26:17.052265 systemd[1]: Finished verity-setup.service. Dec 13 14:26:17.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.148843 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:26:17.150484 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:26:17.149652 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:26:17.150451 systemd[1]: Starting ignition-setup.service... Dec 13 14:26:17.152912 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:26:17.161249 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:26:17.161301 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:26:17.161311 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:26:17.170237 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:26:17.203230 systemd[1]: Finished ignition-setup.service. Dec 13 14:26:17.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.204773 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:26:17.235503 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:26:17.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.236000 audit: BPF prog-id=9 op=LOAD Dec 13 14:26:17.238145 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:26:17.251128 ignition[676]: Ignition 2.14.0 Dec 13 14:26:17.251139 ignition[676]: Stage: fetch-offline Dec 13 14:26:17.251225 ignition[676]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:26:17.251235 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:26:17.251345 ignition[676]: parsed url from cmdline: "" Dec 13 14:26:17.251349 ignition[676]: no config URL provided Dec 13 14:26:17.251355 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:26:17.270659 ignition[676]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:26:17.270687 ignition[676]: op(1): [started] loading QEMU firmware config module Dec 13 14:26:17.270692 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:26:17.278574 ignition[676]: op(1): [finished] loading QEMU firmware config module Dec 13 14:26:17.279168 systemd-networkd[719]: lo: Link UP Dec 13 14:26:17.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.279172 systemd-networkd[719]: lo: Gained carrier Dec 13 14:26:17.279587 systemd-networkd[719]: Enumeration completed Dec 13 14:26:17.279728 systemd[1]: Started systemd-networkd.service. Dec 13 14:26:17.279797 systemd-networkd[719]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:26:17.281008 systemd-networkd[719]: eth0: Link UP Dec 13 14:26:17.281011 systemd-networkd[719]: eth0: Gained carrier Dec 13 14:26:17.281270 systemd[1]: Reached target network.target. Dec 13 14:26:17.283075 systemd[1]: Starting iscsiuio.service... Dec 13 14:26:17.287539 systemd[1]: Started iscsiuio.service. Dec 13 14:26:17.294409 iscsid[725]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:26:17.294409 iscsid[725]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:26:17.294409 iscsid[725]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:26:17.294409 iscsid[725]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:26:17.294409 iscsid[725]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:26:17.294409 iscsid[725]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:26:17.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.290277 systemd[1]: Starting iscsid.service... Dec 13 14:26:17.294931 systemd[1]: Started iscsid.service. 
Dec 13 14:26:17.297614 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:26:17.308077 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:26:17.309290 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:26:17.311773 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:26:17.312897 systemd[1]: Reached target remote-fs.target. Dec 13 14:26:17.314471 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:26:17.323602 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:26:17.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.359597 ignition[676]: parsing config with SHA512: 875621701dd682a1924d5bee7826679cb3b66538df88fe3d84a51962aee76b41170b9a280efeb1b2ac6b841a460c73399c57e2860c65c7adfcb76bf8808199f2 Dec 13 14:26:17.368131 unknown[676]: fetched base config from "system" Dec 13 14:26:17.368149 unknown[676]: fetched user config from "qemu" Dec 13 14:26:17.370272 ignition[676]: fetch-offline: fetch-offline passed Dec 13 14:26:17.371204 ignition[676]: Ignition finished successfully Dec 13 14:26:17.373110 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:26:17.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.373912 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:26:17.375132 systemd[1]: Starting ignition-kargs.service... Dec 13 14:26:17.386754 ignition[740]: Ignition 2.14.0 Dec 13 14:26:17.386768 ignition[740]: Stage: kargs Dec 13 14:26:17.386897 ignition[740]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:26:17.386910 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:26:17.391352 ignition[740]: kargs: kargs passed Dec 13 14:26:17.391409 ignition[740]: Ignition finished successfully Dec 13 14:26:17.392096 systemd-networkd[719]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:26:17.395527 systemd[1]: Finished ignition-kargs.service. Dec 13 14:26:17.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.398798 systemd[1]: Starting ignition-disks.service... Dec 13 14:26:17.408335 ignition[746]: Ignition 2.14.0 Dec 13 14:26:17.408350 ignition[746]: Stage: disks Dec 13 14:26:17.408496 ignition[746]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:26:17.408508 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:26:17.409991 ignition[746]: disks: disks passed Dec 13 14:26:17.410045 ignition[746]: Ignition finished successfully Dec 13 14:26:17.414603 systemd[1]: Finished ignition-disks.service. Dec 13 14:26:17.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.416570 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:26:17.416908 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:26:17.419329 systemd[1]: Reached target local-fs.target. 
Dec 13 14:26:17.422269 systemd[1]: Reached target sysinit.target. Dec 13 14:26:17.424015 systemd[1]: Reached target basic.target. Dec 13 14:26:17.426682 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:26:17.441803 systemd-fsck[754]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:26:17.448053 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:26:17.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.450003 systemd[1]: Mounting sysroot.mount... Dec 13 14:26:17.456001 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:26:17.456644 systemd[1]: Mounted sysroot.mount. Dec 13 14:26:17.458267 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:26:17.461189 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:26:17.463119 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:26:17.463176 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:26:17.464607 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:26:17.468868 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:26:17.471120 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:26:17.476293 initrd-setup-root[764]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:26:17.481119 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:26:17.485798 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:26:17.489945 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:26:17.525894 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:26:17.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.527257 systemd[1]: Starting ignition-mount.service... Dec 13 14:26:17.528892 systemd[1]: Starting sysroot-boot.service... Dec 13 14:26:17.535586 bash[805]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 14:26:17.544571 ignition[807]: INFO : Ignition 2.14.0 Dec 13 14:26:17.544571 ignition[807]: INFO : Stage: mount Dec 13 14:26:17.546626 ignition[807]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:26:17.546626 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:26:17.546626 ignition[807]: INFO : mount: mount passed Dec 13 14:26:17.546626 ignition[807]: INFO : Ignition finished successfully Dec 13 14:26:17.551588 systemd[1]: Finished ignition-mount.service. Dec 13 14:26:17.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:17.552605 systemd[1]: Finished sysroot-boot.service. Dec 13 14:26:17.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:18.085835 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Dec 13 14:26:18.094300 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816)
Dec 13 14:26:18.094352 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:26:18.094368 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:26:18.095293 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:26:18.100227 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:26:18.101712 systemd[1]: Starting ignition-files.service...
Dec 13 14:26:18.116788 ignition[836]: INFO : Ignition 2.14.0
Dec 13 14:26:18.116788 ignition[836]: INFO : Stage: files
Dec 13 14:26:18.124327 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:26:18.124327 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:26:18.124327 ignition[836]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:26:18.124327 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:26:18.124327 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:26:18.130944 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:26:18.130944 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:26:18.130944 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:26:18.130944 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:26:18.130944 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 14:26:18.127713 unknown[836]: wrote ssh authorized keys file for user: core
Dec 13 14:26:18.310344 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 14:26:18.452719 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:26:18.454861 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:26:18.454861 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 14:26:18.944819 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 14:26:18.962136 systemd-networkd[719]: eth0: Gained IPv6LL
Dec 13 14:26:19.084797 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:26:19.084797 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:26:19.088517 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:26:19.090506 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:26:19.092304 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:26:19.094094 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:26:19.095997 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:26:19.097775 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:26:19.099781 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:26:19.101635 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:26:19.103547 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:26:19.103547 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:26:19.103547 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:26:19.103547 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:26:19.103547 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 14:26:19.382291 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 14:26:19.855876 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 14:26:19.855876 ignition[836]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Dec 13 14:26:19.860497 ignition[836]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:26:19.862494 ignition[836]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:26:19.862494 ignition[836]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Dec 13 14:26:19.865606 ignition[836]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Dec 13 14:26:19.865606 ignition[836]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:26:19.868750 ignition[836]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 14:26:19.868750 ignition[836]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Dec 13 14:26:19.868750 ignition[836]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:26:19.873330 ignition[836]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:26:19.874736 ignition[836]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:26:19.874736 ignition[836]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:26:19.921541 ignition[836]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 14:26:19.923239 ignition[836]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 14:26:19.924657 ignition[836]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:26:19.926443 ignition[836]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:26:19.928164 ignition[836]: INFO : files: files passed
Dec 13 14:26:19.928923 ignition[836]: INFO : Ignition finished successfully
Dec 13 14:26:19.930451 systemd[1]: Finished ignition-files.service.
Dec 13 14:26:19.936793 kernel: kauditd_printk_skb: 23 callbacks suppressed
Dec 13 14:26:19.936815 kernel: audit: type=1130 audit(1734099979.929:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.936979 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:26:19.938904 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:26:19.939652 systemd[1]: Starting ignition-quench.service...
Dec 13 14:26:19.943061 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:26:19.943145 systemd[1]: Finished ignition-quench.service.
Dec 13 14:26:19.952981 kernel: audit: type=1130 audit(1734099979.942:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.952998 kernel: audit: type=1131 audit(1734099979.942:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.957143 initrd-setup-root-after-ignition[861]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Dec 13 14:26:19.960513 initrd-setup-root-after-ignition[863]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:26:19.962480 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:26:19.967899 kernel: audit: type=1130 audit(1734099979.961:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.963055 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:26:19.969754 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:26:19.984642 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:26:19.992255 kernel: audit: type=1130 audit(1734099979.984:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.992274 kernel: audit: type=1131 audit(1734099979.984:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:19.984752 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:26:19.985395 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:26:19.992485 systemd[1]: Reached target initrd.target.
Dec 13 14:26:19.994029 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:26:19.994688 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:26:20.005180 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:26:20.011261 kernel: audit: type=1130 audit(1734099980.005:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.011346 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:26:20.021838 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:26:20.022480 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:26:20.024071 systemd[1]: Stopped target timers.target.
Dec 13 14:26:20.025485 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:26:20.025616 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:26:20.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.031984 kernel: audit: type=1131 audit(1734099980.027:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.028643 systemd[1]: Stopped target initrd.target.
Dec 13 14:26:20.032529 systemd[1]: Stopped target basic.target.
Dec 13 14:26:20.032854 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:26:20.047900 systemd[1]: Stopped target ignition-diskful.target.
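[The Ignition "files" stage logged above is driven by a user config fetched from the qemu platform; the log records only the config's SHA512, not its contents. Purely for reference, a hypothetical Butane (Flatcar variant) sketch that would produce roughly the logged operations — the SSH key and every file/unit body are placeholders, since the log does not show them; paths are rootfs-relative, and Ignition prefixes /sysroot when writing from the initrd:]

  variant: flatcar
  version: 1.0.0
  passwd:
    users:
      - name: core
        ssh_authorized_keys:
          - "<placeholder - key not shown in the log>"
  storage:
    files:
      - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
      - path: /opt/bin/cilium.tar.gz
        contents:
          source: https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz
      - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
        contents:
          source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw
      - path: /home/core/install.sh    # contents not shown in the log
      - path: /home/core/nginx.yaml    # contents not shown in the log
      - path: /home/core/nfs-pod.yaml  # contents not shown in the log
      - path: /home/core/nfs-pvc.yaml  # contents not shown in the log
      - path: /etc/flatcar/update.conf # contents not shown in the log
    links:
      - path: /etc/extensions/kubernetes.raw
        target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
  systemd:
    units:
      - name: prepare-helm.service
        enabled: true    # unit body not shown in the log
      - name: coreos-metadata.service
        enabled: false

[This matches the logged sequence: files written, the kubernetes.raw sysext link created, prepare-helm.service preset to enabled, and coreos-metadata.service preset to disabled.]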
Dec 13 14:26:20.048214 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:26:20.051062 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:26:20.052900 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:26:20.054406 systemd[1]: Stopped target sysinit.target.
Dec 13 14:26:20.054713 systemd[1]: Stopped target local-fs.target.
Dec 13 14:26:20.057295 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:26:20.058566 systemd[1]: Stopped target swap.target.
Dec 13 14:26:20.060268 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:26:20.065774 kernel: audit: type=1131 audit(1734099980.060:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.060398 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:26:20.061768 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:26:20.071849 kernel: audit: type=1131 audit(1734099980.066:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.066232 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:26:20.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.066322 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:26:20.067780 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:26:20.067894 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:26:20.072358 systemd[1]: Stopped target paths.target.
Dec 13 14:26:20.072598 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:26:20.076218 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:26:20.081384 systemd[1]: Stopped target slices.target.
Dec 13 14:26:20.083078 systemd[1]: Stopped target sockets.target.
Dec 13 14:26:20.084758 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:26:20.085707 systemd[1]: Closed iscsid.socket.
Dec 13 14:26:20.087447 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:26:20.088780 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:26:20.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.091376 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:26:20.092612 systemd[1]: Stopped ignition-files.service.
Dec 13 14:26:20.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.095731 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:26:20.097769 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:26:20.100635 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:26:20.102539 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:26:20.103841 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:26:20.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.105841 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:26:20.107231 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:26:20.109356 ignition[876]: INFO : Ignition 2.14.0
Dec 13 14:26:20.109356 ignition[876]: INFO : Stage: umount
Dec 13 14:26:20.109356 ignition[876]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:26:20.109356 ignition[876]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:26:20.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.114820 ignition[876]: INFO : umount: umount passed
Dec 13 14:26:20.114820 ignition[876]: INFO : Ignition finished successfully
Dec 13 14:26:20.119179 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:26:20.120941 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:26:20.122028 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:26:20.124031 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:26:20.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.124106 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:26:20.126066 systemd[1]: Stopped target network.target.
Dec 13 14:26:20.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.127776 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:26:20.127822 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:26:20.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.128905 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:26:20.128955 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:26:20.130634 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:26:20.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.130679 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:26:20.132458 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:26:20.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.132506 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:26:20.134580 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:26:20.136186 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:26:20.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.138523 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:26:20.151000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.138633 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:26:20.140005 systemd-networkd[719]: eth0: DHCPv6 lease lost
Dec 13 14:26:20.155000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:26:20.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.141046 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:26:20.141143 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:26:20.144230 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:26:20.144346 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:26:20.146257 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:26:20.163000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:26:20.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.147059 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:26:20.147113 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:26:20.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.149023 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:26:20.149072 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:26:20.150827 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:26:20.150863 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:26:20.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.152064 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:26:20.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.155572 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:26:20.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.156098 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:26:20.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.156211 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:26:20.162944 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:26:20.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.163096 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:26:20.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.164818 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:26:20.164984 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:26:20.168108 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:26:20.168154 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:26:20.169781 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:26:20.169809 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:26:20.171498 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:26:20.171540 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:26:20.173385 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:26:20.173419 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:26:20.175101 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:26:20.175137 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:26:20.177751 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:26:20.178973 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:26:20.179017 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:26:20.180026 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:26:20.180064 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:26:20.181911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:26:20.182000 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:26:20.184434 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:26:20.184930 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:26:20.185023 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:26:20.239038 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:26:20.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.239161 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:26:20.241174 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:26:20.242876 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:26:20.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:20.242928 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:26:20.245669 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:26:20.260308 systemd[1]: Switching root.
Dec 13 14:26:20.281619 iscsid[725]: iscsid shutting down.
Dec 13 14:26:20.282397 systemd-journald[196]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:26:20.282437 systemd-journald[196]: Journal stopped
Dec 13 14:26:23.700256 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:26:23.700308 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:26:23.700319 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:26:23.700329 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:26:23.700338 kernel: SELinux: policy capability open_perms=1
Dec 13 14:26:23.700355 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:26:23.700368 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:26:23.700378 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:26:23.700387 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:26:23.700399 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:26:23.700411 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:26:23.700422 systemd[1]: Successfully loaded SELinux policy in 41.298ms.
Dec 13 14:26:23.700442 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.591ms.
Dec 13 14:26:23.700454 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:26:23.700471 systemd[1]: Detected virtualization kvm.
Dec 13 14:26:23.700482 systemd[1]: Detected architecture x86-64.
Dec 13 14:26:23.700493 systemd[1]: Detected first boot.
Dec 13 14:26:23.700504 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:26:23.700518 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:26:23.700529 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:26:23.700540 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:26:23.700555 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:26:23.700566 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:26:23.700577 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:26:23.700588 systemd[1]: Stopped iscsid.service.
Dec 13 14:26:23.700598 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:26:23.700613 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:26:23.700624 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:26:23.700650 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:26:23.700660 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:26:23.700670 systemd[1]: Created slice system-getty.slice.
Dec 13 14:26:23.700680 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:26:23.700691 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:26:23.700702 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:26:23.700712 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:26:23.700735 systemd[1]: Created slice user.slice.
Dec 13 14:26:23.700746 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:26:23.700756 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:26:23.700767 systemd[1]: Set up automount boot.automount.
Dec 13 14:26:23.700777 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:26:23.700788 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:26:23.700799 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:26:23.700809 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:26:23.700824 systemd[1]: Reached target integritysetup.target.
Dec 13 14:26:23.700835 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:26:23.700845 systemd[1]: Reached target remote-fs.target.
Dec 13 14:26:23.700855 systemd[1]: Reached target slices.target.
Dec 13 14:26:23.700866 systemd[1]: Reached target swap.target.
Dec 13 14:26:23.700876 systemd[1]: Reached target torcx.target.
Dec 13 14:26:23.700886 systemd[1]: Reached target veritysetup.target.
Dec 13 14:26:23.700896 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:26:23.700906 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:26:23.700921 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:26:23.700931 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:26:23.700941 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:26:23.700952 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:26:23.700978 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:26:23.700989 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:26:23.700999 systemd[1]: Mounting media.mount...
Dec 13 14:26:23.701009 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:26:23.701021 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:26:23.701031 systemd[1]: Mounting sys-kernel-tracing.mount...
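[The two locksmithd.service lines above are systemd's cgroup-v1 deprecation warnings, and they state the fix themselves: CPUShares= maps to CPUWeight=, MemoryLimit= to MemoryMax=. A hypothetical drop-in along these lines would silence them without patching the vendor unit; the values are illustrative assumptions, since the unit's actual numbers do not appear in the log:]

  # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf  (hypothetical drop-in)
  [Service]
  # Empty assignments reset the deprecated cgroup-v1 settings from the vendor unit.
  CPUShares=
  MemoryLimit=
  # cgroup-v2 replacements; CPUWeight=100 corresponds to the old default CPUShares=1024.
  # The memory cap below is an assumed value, not taken from the log.
  CPUWeight=100
  MemoryMax=128M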
Dec 13 14:26:23.701046 systemd[1]: Mounting tmp.mount...
Dec 13 14:26:23.701056 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:26:23.701067 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:26:23.701077 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:26:23.701087 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:26:23.701097 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:26:23.701107 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:26:23.701118 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:26:23.701128 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:26:23.701142 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:26:23.701153 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:26:23.701163 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:26:23.701174 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:26:23.701184 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:26:23.701194 kernel: loop: module loaded
Dec 13 14:26:23.701211 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:26:23.701221 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:26:23.701232 systemd[1]: Starting systemd-journald.service...
Dec 13 14:26:23.701247 kernel: fuse: init (API version 7.34)
Dec 13 14:26:23.701257 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:26:23.701267 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:26:23.701278 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:26:23.701289 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:26:23.701300 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:26:23.701310 systemd[1]: Stopped verity-setup.service.
Dec 13 14:26:23.701320 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:26:23.701331 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:26:23.701348 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:26:23.701358 systemd[1]: Mounted media.mount.
Dec 13 14:26:23.701371 systemd-journald[989]: Journal started
Dec 13 14:26:23.701412 systemd-journald[989]: Runtime Journal (/run/log/journal/ded33b7e4903411f832940399f8117c4) is 6.0M, max 48.5M, 42.5M free.
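[Each modprobe@<module>.service instance started above is stamped from systemd's modprobe@.service template, instantiated once per module name. A sketch of that template, reproduced approximately from upstream systemd (the exact unit shipped in this image may differ):]

  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target
  ConditionCapability=CAP_SYS_MODULE

  [Service]
  Type=oneshot
  RemainAfterExit=yes
  # The leading "-" ignores a failing modprobe, so a missing module
  # does not fail the unit; %I expands to the instance name (e.g. fuse).
  ExecStart=-/usr/bin/modprobe -abq %I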
Dec 13 14:26:20.345000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:26:20.635000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:26:20.635000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:26:20.635000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:26:20.635000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:26:20.635000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:26:20.635000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:26:20.669000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:26:20.669000 audit[909]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:26:20.669000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:26:20.672000 audit[909]: AVC avc: denied { associate } for pid=909 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:26:20.672000 audit[909]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=892 pid=909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:26:20.672000 audit: CWD cwd="/"
Dec 13 14:26:20.672000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:20.672000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:20.672000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:26:23.555000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:26:23.555000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:26:23.556000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:26:23.556000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:26:23.556000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:26:23.556000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:26:23.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.567000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:26:23.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.672000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:26:23.673000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:26:23.673000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:26:23.673000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:26:23.673000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:26:23.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.698000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:26:23.698000 audit[989]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffedb4e9530 a2=4000 a3=7ffedb4e95cc items=0 ppid=1 pid=989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:26:23.698000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:26:20.669014 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:26:23.554466 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:26:20.669278 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:26:23.554478 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Dec 13 14:26:20.669300 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:26:23.558012 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:26:20.669342 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:26:20.669354 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:26:20.669394 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:26:23.703067 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:26:20.669409 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:26:20.669788 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:26:20.669844 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:26:20.669864 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:26:20.670220 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:26:20.670255 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:26:20.670272 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:26:20.670284 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:26:20.670299 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:26:20.670311 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:20Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:26:23.266066 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:26:23.266324 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:26:23.266422 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:26:23.266579 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:26:23.266623 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:26:23.266687 /usr/lib/systemd/system-generators/torcx-generator[909]: time="2024-12-13T14:26:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:26:23.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.704974 systemd[1]: Started systemd-journald.service.
Dec 13 14:26:23.705281 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:26:23.706215 systemd[1]: Mounted tmp.mount.
Dec 13 14:26:23.707125 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:26:23.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.708207 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:26:23.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.709298 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:26:23.709420 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:26:23.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.710499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:26:23.710618 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:26:23.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.711688 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:26:23.711820 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:26:23.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.712949 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:26:23.713195 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:26:23.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.714287 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:26:23.714394 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:26:23.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.715459 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:26:23.715573 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:26:23.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.716626 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:26:23.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.717778 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:26:23.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.718933 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:26:23.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.720258 systemd[1]: Reached target network-pre.target.
Dec 13 14:26:23.722270 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:26:23.724216 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:26:23.725174 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:26:23.726771 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:26:23.728672 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:26:23.729868 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:26:23.730737 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:26:23.731802 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:26:23.732771 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:26:23.733644 systemd-journald[989]: Time spent on flushing to /var/log/journal/ded33b7e4903411f832940399f8117c4 is 22.250ms for 1097 entries.
Dec 13 14:26:23.733644 systemd-journald[989]: System Journal (/var/log/journal/ded33b7e4903411f832940399f8117c4) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:26:23.777188 systemd-journald[989]: Received client request to flush runtime journal.
Dec 13 14:26:23.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.736385 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:26:23.739954 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:26:23.741009 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:26:23.778004 udevadm[1013]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:26:23.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:23.746834 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:26:23.748622 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:26:23.750116 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:26:23.753649 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:26:23.755139 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:26:23.758270 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:26:23.762481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:26:23.778088 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:26:23.784343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:26:23.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:24.298417 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:26:24.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:24.299000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:26:24.299000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:26:24.299000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:26:24.299000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:26:24.301023 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:26:24.322135 systemd-udevd[1017]: Using default interface naming scheme 'v252'.
Dec 13 14:26:24.338400 systemd[1]: Started systemd-udevd.service.
Dec 13 14:26:24.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:24.340000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:26:24.342726 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:26:24.348907 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:26:24.347000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:26:24.347000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:26:24.347000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:26:24.373854 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:26:24.385373 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:26:24.387148 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:26:24.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:24.417993 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:26:24.433986 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:26:24.448671 systemd-networkd[1028]: lo: Link UP
Dec 13 14:26:24.448689 systemd-networkd[1028]: lo: Gained carrier
Dec 13 14:26:24.449261 systemd-networkd[1028]: Enumeration completed
Dec 13 14:26:24.449555 systemd[1]: Started systemd-networkd.service.
Dec 13 14:26:24.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:26:24.440000 audit[1043]: AVC avc: denied { confidentiality } for pid=1043 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:26:24.451322 systemd-networkd[1028]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:26:24.440000 audit[1043]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ae89dac4a0 a1=337fc a2=7fe854bf7bc5 a3=5 items=110 ppid=1017 pid=1043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:26:24.440000 audit: CWD cwd="/"
Dec 13 14:26:24.440000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=1 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=2 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=3 name=(null) inode=14795 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=4 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=5 name=(null) inode=14796 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=6 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=7 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=8 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=9 name=(null) inode=14798 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=10 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=11 name=(null) inode=14799 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:26:24.440000 audit: PATH item=12 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=13 name=(null) inode=14800 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=14 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=15 name=(null) inode=14801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=16 name=(null) inode=14797 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=17 name=(null) inode=14802 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=18 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=19 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=20 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=21 name=(null) inode=14804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=22 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=23 name=(null) inode=14805 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=24 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=25 name=(null) inode=14806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=26 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=27 name=(null) inode=14807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=28 name=(null) inode=14803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH 
item=29 name=(null) inode=14808 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=30 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=31 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=32 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=33 name=(null) inode=14810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=34 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=35 name=(null) inode=14811 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=36 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=37 name=(null) inode=14812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=38 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=39 name=(null) inode=14813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=40 name=(null) inode=14809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=41 name=(null) inode=14814 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=42 name=(null) inode=14794 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=43 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=44 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=45 name=(null) inode=14816 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=46 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=47 name=(null) inode=14817 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=48 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=49 name=(null) inode=14818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=50 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=51 name=(null) inode=14819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=52 name=(null) inode=14815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=53 name=(null) inode=14820 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=55 name=(null) inode=14821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=56 name=(null) inode=14821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=57 name=(null) inode=14822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=58 name=(null) inode=14821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=59 name=(null) inode=14823 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=60 name=(null) inode=14821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=61 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=62 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=63 name=(null) inode=14825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=64 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=65 name=(null) inode=14826 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=66 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=67 name=(null) inode=14827 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=68 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=69 name=(null) inode=14828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=70 name=(null) inode=14824 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=71 name=(null) inode=14829 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=72 name=(null) inode=14821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=73 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=74 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=75 name=(null) inode=14831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=76 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=77 name=(null) inode=14832 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: 
PATH item=78 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=79 name=(null) inode=14833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=80 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=81 name=(null) inode=14834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=82 name=(null) inode=14830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=83 name=(null) inode=14835 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=84 name=(null) inode=14821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=85 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=86 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=87 name=(null) inode=14837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=88 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=89 name=(null) inode=14838 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=90 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=91 name=(null) inode=14839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=92 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=93 name=(null) inode=14840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=94 name=(null) inode=14836 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=95 name=(null) inode=14841 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=96 name=(null) inode=14821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=97 name=(null) inode=14842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=98 name=(null) inode=14842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=99 name=(null) inode=14843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=100 name=(null) inode=14842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=101 name=(null) inode=14844 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=102 name=(null) inode=14842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=103 name=(null) inode=14845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=104 name=(null) inode=14842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=105 name=(null) inode=14846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=106 name=(null) inode=14842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=107 name=(null) inode=14847 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PATH item=109 name=(null) inode=14848 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:26:24.440000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:26:24.454675 systemd-networkd[1028]: eth0: Link UP Dec 13 14:26:24.454680 systemd-networkd[1028]: 
eth0: Gained carrier Dec 13 14:26:24.468134 systemd-networkd[1028]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:26:24.483667 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:26:24.484133 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:26:24.484280 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:26:24.487986 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:26:24.493986 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:26:24.544094 kernel: kvm: Nested Virtualization enabled Dec 13 14:26:24.544203 kernel: SVM: kvm: Nested Paging enabled Dec 13 14:26:24.545486 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 14:26:24.545591 kernel: SVM: Virtual GIF supported Dec 13 14:26:24.564991 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:26:24.589308 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:26:24.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:24.591532 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:26:24.599485 lvm[1052]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:26:24.624732 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:26:24.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:24.625837 systemd[1]: Reached target cryptsetup.target. Dec 13 14:26:24.627742 systemd[1]: Starting lvm2-activation.service... Dec 13 14:26:24.630918 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:26:24.657183 systemd[1]: Finished lvm2-activation.service. Dec 13 14:26:24.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:24.658143 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:26:24.659037 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:26:24.659060 systemd[1]: Reached target local-fs.target. Dec 13 14:26:24.659884 systemd[1]: Reached target machines.target. Dec 13 14:26:24.661660 systemd[1]: Starting ldconfig.service... Dec 13 14:26:24.662663 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:26:24.662711 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:26:24.663605 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:26:24.665335 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:26:24.667479 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:26:24.669894 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:26:24.671462 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1055 (bootctl) Dec 13 14:26:24.672464 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:26:24.678749 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:26:24.684404 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:26:24.684564 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:26:24.695999 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 14:26:24.712775 systemd-fsck[1063]: fsck.fat 4.2 (2021-01-31) Dec 13 14:26:24.712775 systemd-fsck[1063]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 14:26:24.714091 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:26:24.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:24.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:24.717010 systemd[1]: Mounting boot.mount... Dec 13 14:26:24.719775 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:26:24.973934 systemd[1]: Mounted boot.mount. Dec 13 14:26:24.987996 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:26:24.992218 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:26:24.997903 kernel: kauditd_printk_skb: 226 callbacks suppressed Dec 13 14:26:24.998035 kernel: audit: type=1130 audit(1734099984.992:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:24.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.006984 kernel: loop1: detected capacity change from 0 to 210664 Dec 13 14:26:25.032419 (sd-sysext)[1069]: Using extensions 'kubernetes'. Dec 13 14:26:25.032799 (sd-sysext)[1069]: Merged extensions into '/usr'. Dec 13 14:26:25.133565 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:26:25.135302 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:26:25.136423 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.137796 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:26:25.140320 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:26:25.142202 systemd[1]: Starting modprobe@loop.service... Dec 13 14:26:25.143118 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.143259 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:26:25.143388 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:26:25.145891 systemd[1]: Mounted usr-share-oem.mount. 
Dec 13 14:26:25.147127 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:26:25.147252 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:26:25.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.148504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:26:25.148637 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:26:25.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.152995 kernel: audit: type=1130 audit(1734099985.147:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.153040 kernel: audit: type=1131 audit(1734099985.147:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.156984 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:26:25.157113 systemd[1]: Finished modprobe@loop.service. Dec 13 14:26:25.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.159993 kernel: audit: type=1130 audit(1734099985.155:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.160028 kernel: audit: type=1131 audit(1734099985.155:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.164350 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:26:25.164448 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.165356 systemd[1]: Finished systemd-sysext.service. Dec 13 14:26:25.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.167988 kernel: audit: type=1130 audit(1734099985.162:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 14:26:25.168022 kernel: audit: type=1131 audit(1734099985.162:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.172163 systemd[1]: Starting ensure-sysext.service... Dec 13 14:26:25.206992 kernel: audit: type=1130 audit(1734099985.170:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.208007 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:26:25.212215 systemd[1]: Reloading. Dec 13 14:26:25.252117 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:26:25.255484 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:26:25.257502 systemd-tmpfiles[1076]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:26:25.269933 /usr/lib/systemd/system-generators/torcx-generator[1096]: time="2024-12-13T14:26:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:26:25.270414 ldconfig[1054]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:26:25.270688 /usr/lib/systemd/system-generators/torcx-generator[1096]: time="2024-12-13T14:26:25Z" level=info msg="torcx already run" Dec 13 14:26:25.339663 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:26:25.339678 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:26:25.357479 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:26:25.408000 audit: BPF prog-id=24 op=LOAD Dec 13 14:26:25.408000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:26:25.420038 kernel: audit: type=1334 audit(1734099985.408:156): prog-id=24 op=LOAD Dec 13 14:26:25.420127 kernel: audit: type=1334 audit(1734099985.408:157): prog-id=20 op=UNLOAD Dec 13 14:26:25.417000 audit: BPF prog-id=25 op=LOAD Dec 13 14:26:25.417000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:26:25.419000 audit: BPF prog-id=26 op=LOAD Dec 13 14:26:25.419000 audit: BPF prog-id=27 op=LOAD Dec 13 14:26:25.419000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:26:25.419000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:26:25.419000 audit: BPF prog-id=28 op=LOAD Dec 13 14:26:25.419000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:26:25.419000 audit: BPF prog-id=29 op=LOAD Dec 13 14:26:25.419000 audit: BPF prog-id=30 op=LOAD Dec 13 14:26:25.419000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:26:25.419000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:26:25.420000 audit: BPF prog-id=31 op=LOAD Dec 13 14:26:25.420000 audit: BPF prog-id=32 op=LOAD Dec 13 14:26:25.420000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:26:25.420000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:26:25.430882 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:26:25.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.433896 systemd[1]: Starting audit-rules.service... Dec 13 14:26:25.435955 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:26:25.438200 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:26:25.439000 audit: BPF prog-id=33 op=LOAD Dec 13 14:26:25.441831 systemd[1]: Starting systemd-resolved.service... Dec 13 14:26:25.443000 audit: BPF prog-id=34 op=LOAD Dec 13 14:26:25.445590 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:26:25.450467 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:26:25.452442 systemd[1]: Finished ldconfig.service. Dec 13 14:26:25.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.453661 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:26:25.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.459079 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:26:25.459327 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.461199 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:26:25.463438 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:26:25.466184 systemd[1]: Starting modprobe@loop.service... Dec 13 14:26:25.467000 audit[1149]: SYSTEM_BOOT pid=1149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.467728 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 14:26:25.467854 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:26:25.467999 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:26:25.468099 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:26:25.470405 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:26:25.471727 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:26:25.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.473402 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:26:25.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.475037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:26:25.475184 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:26:25.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.476666 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:26:25.476815 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:26:25.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.478359 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:26:25.478508 systemd[1]: Finished modprobe@loop.service. Dec 13 14:26:25.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.481863 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:26:25.482051 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.484329 systemd[1]: Starting systemd-update-done.service... 
Dec 13 14:26:25.489724 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:26:25.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.491466 systemd[1]: Finished systemd-update-done.service. Dec 13 14:26:25.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.493689 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:26:25.493880 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.495445 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:26:25.497553 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:26:25.499683 systemd[1]: Starting modprobe@loop.service... Dec 13 14:26:25.500612 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.500714 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:26:25.500799 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:26:25.500878 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:26:25.501715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:26:25.501859 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:26:25.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.503280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:26:25.503413 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:26:25.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.504913 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:26:25.505456 systemd[1]: Finished modprobe@loop.service. Dec 13 14:26:25.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:26:25.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.509937 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:26:25.510247 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.512010 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:26:25.514379 systemd[1]: Starting modprobe@drm.service... Dec 13 14:26:25.517768 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:26:25.520156 systemd[1]: Starting modprobe@loop.service... Dec 13 14:26:25.521238 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.521375 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:26:25.523126 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:26:25.524473 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:26:25.524660 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:26:25.526460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:26:25.526773 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:26:25.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.528406 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:26:25.528481 systemd-resolved[1142]: Positive Trust Anchors: Dec 13 14:26:25.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.528493 systemd-resolved[1142]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:26:25.528530 systemd-resolved[1142]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:26:25.528581 systemd[1]: Finished modprobe@drm.service. 
Dec 13 14:26:25.530112 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:26:25.530268 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:26:25.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:26:25.531000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:26:25.531000 audit[1171]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe598c1840 a2=420 a3=0 items=0 ppid=1138 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:26:25.531000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:26:25.531946 augenrules[1171]: No rules Dec 13 14:26:25.531913 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:26:25.532385 systemd[1]: Finished modprobe@loop.service. Dec 13 14:26:25.533911 systemd[1]: Finished audit-rules.service. Dec 13 14:26:25.535893 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:26:25.536007 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.537096 systemd[1]: Finished ensure-sysext.service. Dec 13 14:26:25.539474 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:26:25.540621 systemd[1]: Reached target time-set.target. Dec 13 14:26:25.946109 systemd-timesyncd[1147]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:26:25.946151 systemd-timesyncd[1147]: Initial clock synchronization to Fri 2024-12-13 14:26:25.946038 UTC. Dec 13 14:26:25.946641 systemd-resolved[1142]: Defaulting to hostname 'linux'. Dec 13 14:26:25.948119 systemd[1]: Started systemd-resolved.service. Dec 13 14:26:25.953116 systemd[1]: Reached target network.target. Dec 13 14:26:25.953970 systemd[1]: Reached target nss-lookup.target. Dec 13 14:26:25.954920 systemd[1]: Reached target sysinit.target. Dec 13 14:26:25.955903 systemd[1]: Started motdgen.path. Dec 13 14:26:25.956743 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:26:25.958176 systemd[1]: Started logrotate.timer. Dec 13 14:26:25.959121 systemd[1]: Started mdadm.timer. Dec 13 14:26:25.959911 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:26:25.960880 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:26:25.960907 systemd[1]: Reached target paths.target. Dec 13 14:26:25.961749 systemd[1]: Reached target timers.target. Dec 13 14:26:25.963011 systemd[1]: Listening on dbus.socket. Dec 13 14:26:25.964970 systemd[1]: Starting docker.socket... Dec 13 14:26:25.968002 systemd[1]: Listening on sshd.socket. 
Dec 13 14:26:25.968915 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:26:25.969506 systemd[1]: Listening on docker.socket. Dec 13 14:26:25.970427 systemd[1]: Reached target sockets.target. Dec 13 14:26:25.971298 systemd[1]: Reached target basic.target. Dec 13 14:26:25.972165 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.972190 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:26:25.973235 systemd[1]: Starting containerd.service... Dec 13 14:26:25.975328 systemd[1]: Starting dbus.service... Dec 13 14:26:25.977162 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:26:25.979201 systemd[1]: Starting extend-filesystems.service... Dec 13 14:26:25.980361 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:26:25.981732 systemd[1]: Starting motdgen.service... Dec 13 14:26:25.983943 systemd[1]: Starting prepare-helm.service... Dec 13 14:26:25.986631 jq[1181]: false Dec 13 14:26:25.985874 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:26:25.988006 systemd[1]: Starting sshd-keygen.service... Dec 13 14:26:25.991580 systemd[1]: Starting systemd-logind.service... Dec 13 14:26:25.992606 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:26:25.992703 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:26:25.993336 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:26:25.994750 systemd[1]: Starting update-engine.service... Dec 13 14:26:25.997111 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:26:26.000246 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:26:26.000523 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:26:26.001903 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:26:26.002124 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:26:26.008499 jq[1195]: true Dec 13 14:26:26.011276 tar[1202]: linux-amd64/helm Dec 13 14:26:26.018163 jq[1205]: true Dec 13 14:26:26.020377 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:26:26.020634 systemd[1]: Finished motdgen.service. Dec 13 14:26:26.024977 extend-filesystems[1182]: Found loop1 Dec 13 14:26:26.026263 extend-filesystems[1182]: Found sr0 Dec 13 14:26:26.026263 extend-filesystems[1182]: Found vda Dec 13 14:26:26.026263 extend-filesystems[1182]: Found vda1 Dec 13 14:26:26.026263 extend-filesystems[1182]: Found vda2 Dec 13 14:26:26.026263 extend-filesystems[1182]: Found vda3 Dec 13 14:26:26.026263 extend-filesystems[1182]: Found usr Dec 13 14:26:26.029158 dbus-daemon[1180]: [system] SELinux support is enabled Dec 13 14:26:26.029360 systemd[1]: Started dbus.service. 
Dec 13 14:26:26.031923 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:26:26.031943 systemd[1]: Reached target system-config.target. Dec 13 14:26:26.033199 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:26:26.033289 systemd[1]: Reached target user-config.target. Dec 13 14:26:26.042938 update_engine[1194]: I1213 14:26:26.042759 1194 main.cc:92] Flatcar Update Engine starting Dec 13 14:26:26.043384 extend-filesystems[1182]: Found vda4 Dec 13 14:26:26.044388 extend-filesystems[1182]: Found vda6 Dec 13 14:26:26.044713 systemd[1]: Started update-engine.service. Dec 13 14:26:26.045999 update_engine[1194]: I1213 14:26:26.044735 1194 update_check_scheduler.cc:74] Next update check in 4m1s Dec 13 14:26:26.046360 extend-filesystems[1182]: Found vda7 Dec 13 14:26:26.046360 extend-filesystems[1182]: Found vda9 Dec 13 14:26:26.046360 extend-filesystems[1182]: Checking size of /dev/vda9 Dec 13 14:26:26.047924 systemd[1]: Started locksmithd.service. Dec 13 14:26:26.055884 extend-filesystems[1182]: Resized partition /dev/vda9 Dec 13 14:26:26.061260 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:26:26.069252 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:26:26.097873 env[1204]: time="2024-12-13T14:26:26.097794627Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:26:26.117417 systemd-logind[1191]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:26:26.117444 systemd-logind[1191]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:26:26.118864 systemd-logind[1191]: New seat seat0. Dec 13 14:26:26.122947 systemd[1]: Started systemd-logind.service. Dec 13 14:26:26.129020 env[1204]: time="2024-12-13T14:26:26.127902630Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:26:26.136928 env[1204]: time="2024-12-13T14:26:26.136667408Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:26:26.138523 env[1204]: time="2024-12-13T14:26:26.138340014Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:26:26.138523 env[1204]: time="2024-12-13T14:26:26.138381652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:26:26.139315 env[1204]: time="2024-12-13T14:26:26.138667338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:26:26.139315 env[1204]: time="2024-12-13T14:26:26.138693858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:26:26.139315 env[1204]: time="2024-12-13T14:26:26.138709678Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:26:26.139315 env[1204]: time="2024-12-13T14:26:26.138722041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:26:26.163478 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:26:26.157834 locksmithd[1224]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:26:26.163760 env[1204]: time="2024-12-13T14:26:26.163572995Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:26:26.163986 env[1204]: time="2024-12-13T14:26:26.163953038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:26:26.164243 env[1204]: time="2024-12-13T14:26:26.164197667Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:26:26.164286 env[1204]: time="2024-12-13T14:26:26.164241629Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:26:26.164320 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:26:26.164320 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:26:26.164320 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:26:26.169163 env[1204]: time="2024-12-13T14:26:26.166113840Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:26:26.169163 env[1204]: time="2024-12-13T14:26:26.166134739Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:26:26.169306 extend-filesystems[1182]: Resized filesystem in /dev/vda9 Dec 13 14:26:26.171005 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:26:26.171153 bash[1232]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:26:26.171206 systemd[1]: Finished extend-filesystems.service. Dec 13 14:26:26.173202 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:26:26.287745 env[1204]: time="2024-12-13T14:26:26.287668689Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:26:26.287745 env[1204]: time="2024-12-13T14:26:26.287737388Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:26:26.287745 env[1204]: time="2024-12-13T14:26:26.287758858Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287799113Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287815594Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287835191Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287862482Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287877370Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287894142Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287909531Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287923376Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:26:26.287979 env[1204]: time="2024-12-13T14:26:26.287937884Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:26:26.288151 env[1204]: time="2024-12-13T14:26:26.288090430Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:26:26.288173 env[1204]: time="2024-12-13T14:26:26.288157516Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:26:26.288465 env[1204]: time="2024-12-13T14:26:26.288423565Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:26:26.288465 env[1204]: time="2024-12-13T14:26:26.288462808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288485370Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288537979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288553458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288567695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288580248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288594064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288607600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288620444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288634430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288656 env[1204]: time="2024-12-13T14:26:26.288650370Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 14:26:26.288995 env[1204]: time="2024-12-13T14:26:26.288899557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288995 env[1204]: time="2024-12-13T14:26:26.288927670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288995 env[1204]: time="2024-12-13T14:26:26.288967084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.288995 env[1204]: time="2024-12-13T14:26:26.288984376Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:26:26.289113 env[1204]: time="2024-12-13T14:26:26.289004213Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:26:26.289113 env[1204]: time="2024-12-13T14:26:26.289041854Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:26:26.289113 env[1204]: time="2024-12-13T14:26:26.289070548Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:26:26.289209 env[1204]: time="2024-12-13T14:26:26.289129468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:26:26.289534 env[1204]: time="2024-12-13T14:26:26.289430543Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:26:26.289534 env[1204]: time="2024-12-13T14:26:26.289523227Z" level=info msg="Connect containerd service" Dec 13 14:26:26.290282 env[1204]: time="2024-12-13T14:26:26.289577489Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:26:26.290521 env[1204]: time="2024-12-13T14:26:26.290489068Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:26:26.290860 env[1204]: time="2024-12-13T14:26:26.290822954Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:26:26.290927 env[1204]: time="2024-12-13T14:26:26.290901672Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:26:26.291060 systemd[1]: Started containerd.service. Dec 13 14:26:26.296819 env[1204]: time="2024-12-13T14:26:26.296774916Z" level=info msg="containerd successfully booted in 0.215788s" Dec 13 14:26:26.297559 env[1204]: time="2024-12-13T14:26:26.297430796Z" level=info msg="Start subscribing containerd event" Dec 13 14:26:26.297612 env[1204]: time="2024-12-13T14:26:26.297570869Z" level=info msg="Start recovering state" Dec 13 14:26:26.297752 env[1204]: time="2024-12-13T14:26:26.297723976Z" level=info msg="Start event monitor" Dec 13 14:26:26.297779 env[1204]: time="2024-12-13T14:26:26.297750556Z" level=info msg="Start snapshots syncer" Dec 13 14:26:26.297885 env[1204]: time="2024-12-13T14:26:26.297863077Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:26:26.297885 env[1204]: time="2024-12-13T14:26:26.297879097Z" level=info msg="Start streaming server" Dec 13 14:26:26.446541 tar[1202]: linux-amd64/LICENSE Dec 13 14:26:26.446541 tar[1202]: linux-amd64/README.md Dec 13 14:26:26.450302 systemd[1]: Finished prepare-helm.service. Dec 13 14:26:26.470313 systemd-networkd[1028]: eth0: Gained IPv6LL Dec 13 14:26:26.471824 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:26:26.473047 systemd[1]: Reached target network-online.target. Dec 13 14:26:26.475180 systemd[1]: Starting kubelet.service... Dec 13 14:26:27.057322 systemd[1]: Started kubelet.service. Dec 13 14:26:27.374625 sshd_keygen[1197]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:26:27.393496 systemd[1]: Finished sshd-keygen.service. Dec 13 14:26:27.395857 systemd[1]: Starting issuegen.service... Dec 13 14:26:27.400934 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:26:27.401057 systemd[1]: Finished issuegen.service. Dec 13 14:26:27.403004 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:26:27.408323 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:26:27.410428 systemd[1]: Started getty@tty1.service. Dec 13 14:26:27.412347 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:26:27.413496 systemd[1]: Reached target getty.target. Dec 13 14:26:27.414389 systemd[1]: Reached target multi-user.target. Dec 13 14:26:27.416268 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:26:27.422593 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:26:27.422735 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:26:27.423865 systemd[1]: Startup finished in 851ms (kernel) + 5.424s (initrd) + 6.715s (userspace) = 12.990s. 
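The closing "Startup finished" entry is PID 1's own boot accounting, the same breakdown systemd-analyze time prints after boot; the three phases sum exactly as logged:

    # Phases from the "Startup finished" line above.
    total = 0.851 + 5.424 + 6.715  # kernel + initrd + userspace
    assert f"{total:.3f}s" == "12.990s"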
Dec 13 14:26:27.520628 kubelet[1249]: E1213 14:26:27.520582 1249 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:26:27.522588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:26:27.522714 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:26:35.137118 systemd[1]: Created slice system-sshd.slice. Dec 13 14:26:35.138478 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:38750.service. Dec 13 14:26:35.184758 sshd[1272]: Accepted publickey for core from 10.0.0.1 port 38750 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:26:35.186414 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:35.197334 systemd-logind[1191]: New session 1 of user core. Dec 13 14:26:35.198702 systemd[1]: Created slice user-500.slice. Dec 13 14:26:35.200301 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:26:35.209632 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:26:35.211186 systemd[1]: Starting user@500.service... Dec 13 14:26:35.214030 (systemd)[1275]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:35.286158 systemd[1275]: Queued start job for default target default.target. Dec 13 14:26:35.286646 systemd[1275]: Reached target paths.target. Dec 13 14:26:35.286664 systemd[1275]: Reached target sockets.target. Dec 13 14:26:35.286676 systemd[1275]: Reached target timers.target. Dec 13 14:26:35.286686 systemd[1275]: Reached target basic.target. Dec 13 14:26:35.286722 systemd[1275]: Reached target default.target. Dec 13 14:26:35.286743 systemd[1275]: Startup finished in 67ms. Dec 13 14:26:35.286827 systemd[1]: Started user@500.service. Dec 13 14:26:35.287948 systemd[1]: Started session-1.scope. Dec 13 14:26:35.341248 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:38764.service. Dec 13 14:26:35.386631 sshd[1284]: Accepted publickey for core from 10.0.0.1 port 38764 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:26:35.388382 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:35.393125 systemd-logind[1191]: New session 2 of user core. Dec 13 14:26:35.394431 systemd[1]: Started session-2.scope. Dec 13 14:26:35.450304 sshd[1284]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:35.453332 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:38764.service: Deactivated successfully. Dec 13 14:26:35.454014 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:26:35.454548 systemd-logind[1191]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:26:35.455736 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:38770.service. Dec 13 14:26:35.456528 systemd-logind[1191]: Removed session 2. Dec 13 14:26:35.499004 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 38770 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:26:35.500543 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:35.504443 systemd-logind[1191]: New session 3 of user core. Dec 13 14:26:35.505236 systemd[1]: Started session-3.scope. 
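The "Accepted publickey ... SHA256:G6GG..." entries above use OpenSSH's SHA-256 fingerprint format: the base64 encoding (padding stripped) of the SHA-256 digest of the wire-format public-key blob, the same value ssh-keygen -lf prints. A sketch of the computation; the actual key material never appears in the log, so the input is a placeholder:

    import base64, hashlib

    def ssh_fingerprint(pubkey_blob: bytes) -> str:
        # OpenSSH-style "SHA256:..." fingerprint of a raw public-key blob.
        digest = hashlib.sha256(pubkey_blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")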
Dec 13 14:26:35.555973 sshd[1290]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:35.558886 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:38770.service: Deactivated successfully. Dec 13 14:26:35.559461 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:26:35.559933 systemd-logind[1191]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:26:35.561153 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:38786.service. Dec 13 14:26:35.561951 systemd-logind[1191]: Removed session 3. Dec 13 14:26:35.604803 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 38786 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:26:35.606196 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:35.610236 systemd-logind[1191]: New session 4 of user core. Dec 13 14:26:35.610957 systemd[1]: Started session-4.scope. Dec 13 14:26:35.667596 sshd[1296]: pam_unix(sshd:session): session closed for user core Dec 13 14:26:35.670446 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:38786.service: Deactivated successfully. Dec 13 14:26:35.670978 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:26:35.671496 systemd-logind[1191]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:26:35.672676 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:38798.service. Dec 13 14:26:35.673348 systemd-logind[1191]: Removed session 4. Dec 13 14:26:35.715676 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 38798 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:26:35.717293 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:26:35.721341 systemd-logind[1191]: New session 5 of user core. Dec 13 14:26:35.722424 systemd[1]: Started session-5.scope. Dec 13 14:26:35.780418 sudo[1305]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:26:35.780696 sudo[1305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:26:35.803774 systemd[1]: Starting docker.service... 
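The sudo record above logs the working directory, target user and command for core running /home/core/install.sh as root. Flatcar images conventionally grant the core user passwordless sudo, so a rule of roughly this shape would permit the invocation (hypothetical; the actual sudoers content is not in the log):

    # hypothetical sudoers entry; not shown in this log
    core ALL=(ALL) NOPASSWD: ALL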
Dec 13 14:26:35.843003 env[1317]: time="2024-12-13T14:26:35.842934533Z" level=info msg="Starting up" Dec 13 14:26:35.844356 env[1317]: time="2024-12-13T14:26:35.844324830Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:26:35.844440 env[1317]: time="2024-12-13T14:26:35.844420991Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:26:35.844543 env[1317]: time="2024-12-13T14:26:35.844518514Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:26:35.844617 env[1317]: time="2024-12-13T14:26:35.844598504Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:26:35.846425 env[1317]: time="2024-12-13T14:26:35.846387679Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:26:35.846425 env[1317]: time="2024-12-13T14:26:35.846415812Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:26:35.846507 env[1317]: time="2024-12-13T14:26:35.846435679Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:26:35.846507 env[1317]: time="2024-12-13T14:26:35.846445537Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:26:35.851125 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport872638360-merged.mount: Deactivated successfully. Dec 13 14:26:36.109746 env[1317]: time="2024-12-13T14:26:36.109634551Z" level=info msg="Loading containers: start." Dec 13 14:26:36.251257 kernel: Initializing XFRM netlink socket Dec 13 14:26:36.279524 env[1317]: time="2024-12-13T14:26:36.279472347Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 14:26:36.335363 systemd-networkd[1028]: docker0: Link UP Dec 13 14:26:36.352807 env[1317]: time="2024-12-13T14:26:36.352759807Z" level=info msg="Loading containers: done." Dec 13 14:26:36.363369 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3465016061-merged.mount: Deactivated successfully. Dec 13 14:26:36.364632 env[1317]: time="2024-12-13T14:26:36.364574915Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:26:36.364827 env[1317]: time="2024-12-13T14:26:36.364798735Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:26:36.364931 env[1317]: time="2024-12-13T14:26:36.364911787Z" level=info msg="Daemon has completed initialization" Dec 13 14:26:36.383749 systemd[1]: Started docker.service. Dec 13 14:26:36.391344 env[1317]: time="2024-12-13T14:26:36.391283102Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:26:37.100300 env[1204]: time="2024-12-13T14:26:37.100253385Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 14:26:37.773781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:26:37.774022 systemd[1]: Stopped kubelet.service. Dec 13 14:26:37.776069 systemd[1]: Starting kubelet.service... Dec 13 14:26:37.864861 systemd[1]: Started kubelet.service. 
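dockerd notes above that the default docker0 bridge took 172.17.0.0/16 and that --bip can override it; the equivalent daemon.json key is "bip". A hypothetical /etc/docker/daemon.json override (address and prefix illustrative, not taken from this host):

    {
      "bip": "172.18.0.1/24"
    }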
Dec 13 14:26:37.958916 kubelet[1458]: E1213 14:26:37.958857 1458 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:26:37.962010 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:26:37.962147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:26:38.896789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2255361410.mount: Deactivated successfully. Dec 13 14:26:40.503469 env[1204]: time="2024-12-13T14:26:40.503392255Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:40.505196 env[1204]: time="2024-12-13T14:26:40.505157215Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:40.506894 env[1204]: time="2024-12-13T14:26:40.506851472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:40.508488 env[1204]: time="2024-12-13T14:26:40.508429391Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:40.509202 env[1204]: time="2024-12-13T14:26:40.509166083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Dec 13 14:26:40.518506 env[1204]: time="2024-12-13T14:26:40.518442580Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 14:26:42.746588 env[1204]: time="2024-12-13T14:26:42.746531271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:42.749291 env[1204]: time="2024-12-13T14:26:42.749249719Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:42.751350 env[1204]: time="2024-12-13T14:26:42.751313289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:42.753158 env[1204]: time="2024-12-13T14:26:42.753120118Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:42.753746 env[1204]: time="2024-12-13T14:26:42.753709283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Dec 13 
14:26:42.762676 env[1204]: time="2024-12-13T14:26:42.762629742Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 14:26:44.147580 env[1204]: time="2024-12-13T14:26:44.147515988Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:44.149821 env[1204]: time="2024-12-13T14:26:44.149747072Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:44.151513 env[1204]: time="2024-12-13T14:26:44.151486905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:44.152934 env[1204]: time="2024-12-13T14:26:44.152906517Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:44.153666 env[1204]: time="2024-12-13T14:26:44.153636486Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Dec 13 14:26:44.165913 env[1204]: time="2024-12-13T14:26:44.165867984Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 14:26:45.410674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3599249920.mount: Deactivated successfully. Dec 13 14:26:46.462102 env[1204]: time="2024-12-13T14:26:46.462023856Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:46.464018 env[1204]: time="2024-12-13T14:26:46.463979534Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:46.465581 env[1204]: time="2024-12-13T14:26:46.465528589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:46.466834 env[1204]: time="2024-12-13T14:26:46.466802408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:46.467224 env[1204]: time="2024-12-13T14:26:46.467185787Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Dec 13 14:26:46.476396 env[1204]: time="2024-12-13T14:26:46.476351636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:26:47.035458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901197289.mount: Deactivated successfully. Dec 13 14:26:48.213034 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:26:48.213245 systemd[1]: Stopped kubelet.service. Dec 13 14:26:48.214611 systemd[1]: Starting kubelet.service... 
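"Scheduled restart job, restart counter is at 2" is systemd re-launching kubelet after each config-file failure; the gap between the exit at 14:26:37.96 and the restart at 14:26:48.21 (and likewise 14:26:27.52 to 14:26:37.77 earlier) is about 10 seconds, consistent with a RestartSec of 10. A drop-in expressing that policy would look like the sketch below; the directives actually shipped in kubelet.service are not visible in this log:

    # hypothetical kubelet.service.d drop-in consistent with the observed ~10 s restart gap
    [Service]
    Restart=always
    RestartSec=10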
Dec 13 14:26:48.293495 systemd[1]: Started kubelet.service. Dec 13 14:26:48.466286 kubelet[1498]: E1213 14:26:48.466124 1498 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:26:48.468319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:26:48.468506 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:26:48.499629 env[1204]: time="2024-12-13T14:26:48.499561506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.501872 env[1204]: time="2024-12-13T14:26:48.501827787Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.504069 env[1204]: time="2024-12-13T14:26:48.504018475Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.506098 env[1204]: time="2024-12-13T14:26:48.506064883Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:48.506894 env[1204]: time="2024-12-13T14:26:48.506863961Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:26:48.516748 env[1204]: time="2024-12-13T14:26:48.516706810Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:26:49.023716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860642529.mount: Deactivated successfully. 
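The recurring kubelet exit above is the expected pre-init state: /var/lib/kubelet/config.yaml does not exist until kubeadm init or kubeadm join writes a KubeletConfiguration there. A minimal file of the expected kind and apiVersion, as a hypothetical sketch rather than what kubeadm actually generated on this host:

    # hypothetical minimal /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd  # matches the CgroupDriver:systemd visible in the container-manager entry below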
Dec 13 14:26:49.029016 env[1204]: time="2024-12-13T14:26:49.028964596Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:49.030810 env[1204]: time="2024-12-13T14:26:49.030735147Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:49.032348 env[1204]: time="2024-12-13T14:26:49.032306904Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:49.033768 env[1204]: time="2024-12-13T14:26:49.033715396Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:49.034259 env[1204]: time="2024-12-13T14:26:49.034206306Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:26:49.045372 env[1204]: time="2024-12-13T14:26:49.045295432Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 14:26:49.727565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039662668.mount: Deactivated successfully. Dec 13 14:26:52.383775 env[1204]: time="2024-12-13T14:26:52.383692478Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:52.385745 env[1204]: time="2024-12-13T14:26:52.385682861Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:52.387602 env[1204]: time="2024-12-13T14:26:52.387536878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:52.389438 env[1204]: time="2024-12-13T14:26:52.389386266Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:52.390441 env[1204]: time="2024-12-13T14:26:52.390404456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Dec 13 14:26:54.516929 systemd[1]: Stopped kubelet.service. Dec 13 14:26:54.519109 systemd[1]: Starting kubelet.service... Dec 13 14:26:54.537756 systemd[1]: Reloading. 
Dec 13 14:26:54.594203 /usr/lib/systemd/system-generators/torcx-generator[1627]: time="2024-12-13T14:26:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:26:54.594335 /usr/lib/systemd/system-generators/torcx-generator[1627]: time="2024-12-13T14:26:54Z" level=info msg="torcx already run" Dec 13 14:26:54.872711 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:26:54.872732 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:26:54.891754 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:26:54.967229 systemd[1]: Started kubelet.service. Dec 13 14:26:54.970438 systemd[1]: Stopping kubelet.service... Dec 13 14:26:54.970761 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:26:54.970929 systemd[1]: Stopped kubelet.service. Dec 13 14:26:54.972309 systemd[1]: Starting kubelet.service... Dec 13 14:26:55.053399 systemd[1]: Started kubelet.service. Dec 13 14:26:55.089276 kubelet[1674]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:26:55.089276 kubelet[1674]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:26:55.089276 kubelet[1674]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
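The locksmithd.service warnings above flag cgroup-v1 directives that systemd still translates (the earlier "cgroup compatibility translation ... activated" entry) but wants migrated to their unified-hierarchy equivalents, and the docker.socket note rewrites the legacy /var/run path to /run. A drop-in replacing the deprecated directives might look like this; the values are illustrative, since locksmithd's originals are not shown:

    # hypothetical locksmithd.service.d drop-in; empty assignments clear the old settings
    [Service]
    CPUShares=
    CPUWeight=100
    MemoryLimit=
    MemoryMax=512M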
Dec 13 14:26:55.089713 kubelet[1674]: I1213 14:26:55.089388 1674 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:26:55.542682 kubelet[1674]: I1213 14:26:55.542628 1674 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:26:55.542682 kubelet[1674]: I1213 14:26:55.542665 1674 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:26:55.542884 kubelet[1674]: I1213 14:26:55.542869 1674 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:26:55.553732 kubelet[1674]: I1213 14:26:55.553682 1674 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:26:55.558649 kubelet[1674]: E1213 14:26:55.558617 1674 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.566639 kubelet[1674]: I1213 14:26:55.566612 1674 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:26:55.568334 kubelet[1674]: I1213 14:26:55.568293 1674 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:26:55.568492 kubelet[1674]: I1213 14:26:55.568326 1674 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:26:55.568492 kubelet[1674]: I1213 14:26:55.568493 1674 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:26:55.568648 kubelet[1674]: I1213 14:26:55.568506 1674 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:26:55.569234 kubelet[1674]: I1213 14:26:55.569197 1674 state_mem.go:36] "Initialized new in-memory state store" Dec 13 
14:26:55.569795 kubelet[1674]: I1213 14:26:55.569774 1674 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:26:55.569795 kubelet[1674]: I1213 14:26:55.569791 1674 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:26:55.569869 kubelet[1674]: I1213 14:26:55.569809 1674 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:26:55.569869 kubelet[1674]: I1213 14:26:55.569824 1674 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:26:55.570561 kubelet[1674]: W1213 14:26:55.570506 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.570561 kubelet[1674]: E1213 14:26:55.570565 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.570794 kubelet[1674]: W1213 14:26:55.570716 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.570794 kubelet[1674]: E1213 14:26:55.570797 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.572803 kubelet[1674]: I1213 14:26:55.572786 1674 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:26:55.575980 kubelet[1674]: I1213 14:26:55.575957 1674 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:26:55.576056 kubelet[1674]: W1213 14:26:55.576001 1674 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:26:55.576507 kubelet[1674]: I1213 14:26:55.576480 1674 server.go:1264] "Started kubelet" Dec 13 14:26:55.583920 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
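Every "connection refused" reflector error above is kubelet failing to reach an API server at 10.0.0.100:6443 that is not running yet; its static pods are only sandboxed further down. The lease-controller entries that follow show the retry interval doubling, 200ms → 400ms → 800ms → 1.6s. A sketch of that doubling pattern (illustrative; the real client code and any cap may differ):

    # illustrative doubling backoff matching the intervals logged below
    interval = 0.2
    for _ in range(4):
        print(f"will retry in {interval:g}s")  # 0.2s, 0.4s, 0.8s, 1.6s
        interval *= 2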
Dec 13 14:26:55.584054 kubelet[1674]: I1213 14:26:55.584021 1674 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:26:55.588857 kubelet[1674]: I1213 14:26:55.588800 1674 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:26:55.589872 kubelet[1674]: I1213 14:26:55.589846 1674 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:26:55.590819 kubelet[1674]: I1213 14:26:55.590762 1674 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:26:55.591014 kubelet[1674]: I1213 14:26:55.590988 1674 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:26:55.592746 kubelet[1674]: I1213 14:26:55.592715 1674 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:26:55.595021 kubelet[1674]: I1213 14:26:55.594989 1674 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:26:55.595113 kubelet[1674]: I1213 14:26:55.595044 1674 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:26:55.595542 kubelet[1674]: E1213 14:26:55.595512 1674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Dec 13 14:26:55.595669 kubelet[1674]: I1213 14:26:55.595656 1674 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:26:55.595745 kubelet[1674]: I1213 14:26:55.595725 1674 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:26:55.596350 kubelet[1674]: W1213 14:26:55.596270 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.596350 kubelet[1674]: E1213 14:26:55.596319 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.596784 kubelet[1674]: E1213 14:26:55.596754 1674 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:26:55.596953 kubelet[1674]: I1213 14:26:55.596873 1674 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:26:55.604868 kubelet[1674]: E1213 14:26:55.604744 1674 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c2c6c96f25e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:26:55.576458725 +0000 UTC m=+0.518802190,LastTimestamp:2024-12-13 14:26:55.576458725 +0000 UTC m=+0.518802190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:26:55.605809 kubelet[1674]: I1213 14:26:55.605768 1674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:26:55.606874 kubelet[1674]: I1213 14:26:55.606860 1674 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:26:55.606954 kubelet[1674]: I1213 14:26:55.606940 1674 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:26:55.607042 kubelet[1674]: I1213 14:26:55.607027 1674 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:26:55.607148 kubelet[1674]: E1213 14:26:55.607129 1674 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:26:55.609176 kubelet[1674]: W1213 14:26:55.609124 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.609269 kubelet[1674]: E1213 14:26:55.609195 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:55.611883 kubelet[1674]: I1213 14:26:55.611859 1674 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:26:55.611883 kubelet[1674]: I1213 14:26:55.611875 1674 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:26:55.611953 kubelet[1674]: I1213 14:26:55.611899 1674 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:26:55.694542 kubelet[1674]: I1213 14:26:55.694483 1674 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:26:55.694913 kubelet[1674]: E1213 14:26:55.694872 1674 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 14:26:55.708193 kubelet[1674]: E1213 14:26:55.708145 1674 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:26:55.797323 kubelet[1674]: E1213 14:26:55.797126 1674 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms" Dec 13 14:26:55.896518 kubelet[1674]: I1213 14:26:55.896479 1674 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:26:55.896908 kubelet[1674]: E1213 14:26:55.896865 1674 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 14:26:55.908972 kubelet[1674]: E1213 14:26:55.908942 1674 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:26:56.068453 kubelet[1674]: I1213 14:26:56.068318 1674 policy_none.go:49] "None policy: Start" Dec 13 14:26:56.069132 kubelet[1674]: I1213 14:26:56.069096 1674 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:26:56.069132 kubelet[1674]: I1213 14:26:56.069119 1674 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:26:56.075897 systemd[1]: Created slice kubepods.slice. Dec 13 14:26:56.080888 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:26:56.083919 systemd[1]: Created slice kubepods-besteffort.slice. Dec 13 14:26:56.093990 kubelet[1674]: I1213 14:26:56.093934 1674 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:26:56.094408 kubelet[1674]: I1213 14:26:56.094143 1674 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:26:56.094408 kubelet[1674]: I1213 14:26:56.094314 1674 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:26:56.095490 kubelet[1674]: E1213 14:26:56.095470 1674 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 14:26:56.197918 kubelet[1674]: E1213 14:26:56.197859 1674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Dec 13 14:26:56.298252 kubelet[1674]: I1213 14:26:56.298197 1674 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:26:56.298523 kubelet[1674]: E1213 14:26:56.298489 1674 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 14:26:56.310024 kubelet[1674]: I1213 14:26:56.309980 1674 topology_manager.go:215] "Topology Admit Handler" podUID="9899c8bc7b21b75779f86ab400ba045e" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:26:56.310794 kubelet[1674]: I1213 14:26:56.310774 1674 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:26:56.311710 kubelet[1674]: I1213 14:26:56.311682 1674 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:26:56.316912 systemd[1]: Created slice 
kubepods-burstable-pod9899c8bc7b21b75779f86ab400ba045e.slice. Dec 13 14:26:56.328998 systemd[1]: Created slice kubepods-burstable-pod8a50003978138b3ab9890682eff4eae8.slice. Dec 13 14:26:56.337145 systemd[1]: Created slice kubepods-burstable-podb107a98bcf27297d642d248711a3fc70.slice. Dec 13 14:26:56.400967 kubelet[1674]: I1213 14:26:56.400924 1674 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:26:56.400967 kubelet[1674]: I1213 14:26:56.400964 1674 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9899c8bc7b21b75779f86ab400ba045e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9899c8bc7b21b75779f86ab400ba045e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:26:56.401181 kubelet[1674]: I1213 14:26:56.400981 1674 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9899c8bc7b21b75779f86ab400ba045e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9899c8bc7b21b75779f86ab400ba045e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:26:56.401181 kubelet[1674]: I1213 14:26:56.401000 1674 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:26:56.401181 kubelet[1674]: I1213 14:26:56.401014 1674 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:26:56.401181 kubelet[1674]: I1213 14:26:56.401028 1674 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:26:56.401181 kubelet[1674]: I1213 14:26:56.401041 1674 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:26:56.401330 kubelet[1674]: I1213 14:26:56.401097 1674 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9899c8bc7b21b75779f86ab400ba045e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9899c8bc7b21b75779f86ab400ba045e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:26:56.401330 kubelet[1674]: I1213 14:26:56.401154 1674 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:26:56.576925 kubelet[1674]: W1213 14:26:56.576821 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:56.576925 kubelet[1674]: E1213 14:26:56.576905 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:56.628287 kubelet[1674]: E1213 14:26:56.628099 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:56.629176 env[1204]: time="2024-12-13T14:26:56.629118217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9899c8bc7b21b75779f86ab400ba045e,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:56.637394 kubelet[1674]: E1213 14:26:56.637329 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:56.637903 env[1204]: time="2024-12-13T14:26:56.637860302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:56.639100 kubelet[1674]: E1213 14:26:56.639059 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:56.639643 env[1204]: time="2024-12-13T14:26:56.639585737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:56.972847 kubelet[1674]: W1213 14:26:56.972681 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:56.972847 kubelet[1674]: E1213 14:26:56.972759 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:56.998675 kubelet[1674]: E1213 14:26:56.998609 1674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="1.6s" Dec 13 14:26:57.014402 kubelet[1674]: W1213 14:26:57.014311 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": 
dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:57.014402 kubelet[1674]: E1213 14:26:57.014387 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:57.100387 kubelet[1674]: I1213 14:26:57.100319 1674 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:26:57.100814 kubelet[1674]: E1213 14:26:57.100772 1674 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 14:26:57.103394 kubelet[1674]: W1213 14:26:57.103348 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:57.103456 kubelet[1674]: E1213 14:26:57.103401 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:57.574438 kubelet[1674]: E1213 14:26:57.574403 1674 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:57.660349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3237007718.mount: Deactivated successfully. 
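The "Failed to ensure lease exists, will retry" records in this stretch show the kubelet backing off while 10.0.0.100:6443 refuses connections: interval=400ms, then 800ms, then 1.6s, then 3.2s — a straight doubling. Below is a minimal Go sketch of that doubling-backoff loop; the 7s ceiling and the function names are illustrative assumptions, not kubelet's actual retry code — only the observable pattern is taken from these logs.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // ensureWithBackoff retries fn, doubling the wait after every failure.
    // The 400ms starting interval matches the interval="400ms" field above;
    // the 7s ceiling is an assumed illustration, not a value from these logs.
    func ensureWithBackoff(fn func() error) {
        interval := 400 * time.Millisecond
        const maxInterval = 7 * time.Second
        for {
            if err := fn(); err == nil {
                return
            }
            fmt.Printf("failed to ensure lease exists, will retry, interval=%v\n", interval)
            time.Sleep(interval)
            if interval *= 2; interval > maxInterval {
                interval = maxInterval
            }
        }
    }

    func main() {
        attempt := 0
        ensureWithBackoff(func() error {
            if attempt++; attempt < 4 {
                return errors.New("dial tcp 10.0.0.100:6443: connect: connection refused")
            }
            return nil
        })
    }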
Dec 13 14:26:57.663816 env[1204]: time="2024-12-13T14:26:57.663754710Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.667499 env[1204]: time="2024-12-13T14:26:57.667473955Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.669547 env[1204]: time="2024-12-13T14:26:57.669494133Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.670497 env[1204]: time="2024-12-13T14:26:57.670438625Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.671396 env[1204]: time="2024-12-13T14:26:57.671355444Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.673589 env[1204]: time="2024-12-13T14:26:57.673549328Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.675118 env[1204]: time="2024-12-13T14:26:57.675082834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.676843 env[1204]: time="2024-12-13T14:26:57.676818910Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.679335 env[1204]: time="2024-12-13T14:26:57.679305373Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.680959 env[1204]: time="2024-12-13T14:26:57.680924860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.682529 env[1204]: time="2024-12-13T14:26:57.682498341Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:57.684458 env[1204]: time="2024-12-13T14:26:57.684415897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:58.114865 env[1204]: time="2024-12-13T14:26:58.114708619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:58.114865 env[1204]: time="2024-12-13T14:26:58.114779562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:58.114865 env[1204]: time="2024-12-13T14:26:58.114792627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:58.115321 env[1204]: time="2024-12-13T14:26:58.114996409Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd2b11cf5b919ef5b86f47b12cce4ad5fbce48102ff73d8bf31a816dcfc56a3f pid=1716 runtime=io.containerd.runc.v2 Dec 13 14:26:58.217019 systemd[1]: Started cri-containerd-cd2b11cf5b919ef5b86f47b12cce4ad5fbce48102ff73d8bf31a816dcfc56a3f.scope. Dec 13 14:26:58.598587 env[1204]: time="2024-12-13T14:26:58.593553004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:58.598587 env[1204]: time="2024-12-13T14:26:58.593599090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:58.598587 env[1204]: time="2024-12-13T14:26:58.593612655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:58.598587 env[1204]: time="2024-12-13T14:26:58.594319100Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a86091b1c961df07b13a314548be737490103d217fb5b8352d12807e7e497f8 pid=1746 runtime=io.containerd.runc.v2 Dec 13 14:26:58.599917 kubelet[1674]: E1213 14:26:58.599864 1674 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="3.2s" Dec 13 14:26:58.600432 env[1204]: time="2024-12-13T14:26:58.600336886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:58.600432 env[1204]: time="2024-12-13T14:26:58.600406937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:58.600580 env[1204]: time="2024-12-13T14:26:58.600419901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:58.600694 env[1204]: time="2024-12-13T14:26:58.600603816Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/db3073b8f76ed1dcf73b3d3e75da58576a14f7468240582c56fbf6af8c33678c pid=1755 runtime=io.containerd.runc.v2 Dec 13 14:26:58.679244 systemd[1]: Started cri-containerd-7a86091b1c961df07b13a314548be737490103d217fb5b8352d12807e7e497f8.scope. Dec 13 14:26:58.720683 systemd[1]: Started cri-containerd-db3073b8f76ed1dcf73b3d3e75da58576a14f7468240582c56fbf6af8c33678c.scope. 
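Each sandbox started here appears twice: containerd's runc v2 shim logs "starting signal loop" with the sandbox ID in its task path, and systemd then reports a matching transient "cri-containerd-<id>.scope" unit. A tiny sketch of that observed name mapping (an illustrative helper, not containerd's code):

    package main

    import "fmt"

    // scopeName derives the transient systemd unit name seen in the
    // "Started cri-containerd-….scope" records above from a sandbox or
    // container ID. Illustration of the naming visible in these logs only.
    func scopeName(id string) string {
        return fmt.Sprintf("cri-containerd-%s.scope", id)
    }

    func main() {
        // Sandbox ID taken from the "starting signal loop" record above.
        fmt.Println(scopeName("db3073b8f76ed1dcf73b3d3e75da58576a14f7468240582c56fbf6af8c33678c"))
    }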
Dec 13 14:26:58.722352 kubelet[1674]: I1213 14:26:58.722320 1674 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:26:58.722714 kubelet[1674]: E1213 14:26:58.722686 1674 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Dec 13 14:26:58.795481 env[1204]: time="2024-12-13T14:26:58.795428963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b107a98bcf27297d642d248711a3fc70,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd2b11cf5b919ef5b86f47b12cce4ad5fbce48102ff73d8bf31a816dcfc56a3f\"" Dec 13 14:26:58.798492 kubelet[1674]: E1213 14:26:58.798460 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:58.804254 env[1204]: time="2024-12-13T14:26:58.800951239Z" level=info msg="CreateContainer within sandbox \"cd2b11cf5b919ef5b86f47b12cce4ad5fbce48102ff73d8bf31a816dcfc56a3f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:26:58.838697 env[1204]: time="2024-12-13T14:26:58.838636903Z" level=info msg="CreateContainer within sandbox \"cd2b11cf5b919ef5b86f47b12cce4ad5fbce48102ff73d8bf31a816dcfc56a3f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1a0c31407bbc068f84a53f80dd0af89f4f7a0991ba0d133052d52a5c406d959\"" Dec 13 14:26:58.839343 env[1204]: time="2024-12-13T14:26:58.839312871Z" level=info msg="StartContainer for \"f1a0c31407bbc068f84a53f80dd0af89f4f7a0991ba0d133052d52a5c406d959\"" Dec 13 14:26:58.839494 env[1204]: time="2024-12-13T14:26:58.839462812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9899c8bc7b21b75779f86ab400ba045e,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a86091b1c961df07b13a314548be737490103d217fb5b8352d12807e7e497f8\"" Dec 13 14:26:58.840910 kubelet[1674]: E1213 14:26:58.840879 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:58.842702 env[1204]: time="2024-12-13T14:26:58.842671540Z" level=info msg="CreateContainer within sandbox \"7a86091b1c961df07b13a314548be737490103d217fb5b8352d12807e7e497f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:26:58.847300 env[1204]: time="2024-12-13T14:26:58.847210151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8a50003978138b3ab9890682eff4eae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"db3073b8f76ed1dcf73b3d3e75da58576a14f7468240582c56fbf6af8c33678c\"" Dec 13 14:26:58.847711 kubelet[1674]: E1213 14:26:58.847682 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:58.849649 env[1204]: time="2024-12-13T14:26:58.848973017Z" level=info msg="CreateContainer within sandbox \"db3073b8f76ed1dcf73b3d3e75da58576a14f7468240582c56fbf6af8c33678c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:26:58.859698 systemd[1]: Started cri-containerd-f1a0c31407bbc068f84a53f80dd0af89f4f7a0991ba0d133052d52a5c406d959.scope. 
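The records before and after this point trace the CRI sequence behind each static pod: RunPodSandbox returns a sandbox ID, CreateContainer inside that sandbox returns a container ID, and StartContainer launches it. A hedged sketch of those three calls against the CRI v1 API, using the kube-scheduler pod's metadata from the logs; the socket path and the mostly-empty configs (a real call needs image, command, and mounts) are assumptions for brevity.

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed endpoint; the kubelet here talks to containerd's CRI socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Pod-level config; name/UID match the kube-scheduler-localhost records above.
        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "kube-scheduler-localhost",
                Namespace: "kube-system",
                Uid:       "b107a98bcf27297d642d248711a3fc70",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // Create and start the kube-scheduler container inside the returned sandbox.
        ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId:  sb.PodSandboxId,
            Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"}},
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
            log.Fatal(err)
        }
    }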
Dec 13 14:26:58.870485 env[1204]: time="2024-12-13T14:26:58.870413115Z" level=info msg="CreateContainer within sandbox \"7a86091b1c961df07b13a314548be737490103d217fb5b8352d12807e7e497f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f5be19e98b690d5aa903cf16d631dc207882024d5864696ba136bd74e4f83817\"" Dec 13 14:26:58.871346 env[1204]: time="2024-12-13T14:26:58.871304196Z" level=info msg="StartContainer for \"f5be19e98b690d5aa903cf16d631dc207882024d5864696ba136bd74e4f83817\"" Dec 13 14:26:58.881306 env[1204]: time="2024-12-13T14:26:58.881191068Z" level=info msg="CreateContainer within sandbox \"db3073b8f76ed1dcf73b3d3e75da58576a14f7468240582c56fbf6af8c33678c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"345eb572d36844e37aca84d5539c8c2d293c989b300175f65754a5d36d7b710f\"" Dec 13 14:26:58.882069 env[1204]: time="2024-12-13T14:26:58.882041923Z" level=info msg="StartContainer for \"345eb572d36844e37aca84d5539c8c2d293c989b300175f65754a5d36d7b710f\"" Dec 13 14:26:58.890679 systemd[1]: Started cri-containerd-f5be19e98b690d5aa903cf16d631dc207882024d5864696ba136bd74e4f83817.scope. Dec 13 14:26:58.903731 systemd[1]: Started cri-containerd-345eb572d36844e37aca84d5539c8c2d293c989b300175f65754a5d36d7b710f.scope. Dec 13 14:26:58.939922 env[1204]: time="2024-12-13T14:26:58.939863489Z" level=info msg="StartContainer for \"f1a0c31407bbc068f84a53f80dd0af89f4f7a0991ba0d133052d52a5c406d959\" returns successfully" Dec 13 14:26:58.979355 env[1204]: time="2024-12-13T14:26:58.979305788Z" level=info msg="StartContainer for \"f5be19e98b690d5aa903cf16d631dc207882024d5864696ba136bd74e4f83817\" returns successfully" Dec 13 14:26:58.981908 env[1204]: time="2024-12-13T14:26:58.981878002Z" level=info msg="StartContainer for \"345eb572d36844e37aca84d5539c8c2d293c989b300175f65754a5d36d7b710f\" returns successfully" Dec 13 14:26:59.030424 kubelet[1674]: W1213 14:26:59.030309 1674 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:59.030424 kubelet[1674]: E1213 14:26:59.030387 1674 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Dec 13 14:26:59.651587 kubelet[1674]: E1213 14:26:59.651559 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:59.653719 kubelet[1674]: E1213 14:26:59.653699 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:59.655488 kubelet[1674]: E1213 14:26:59.655466 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:00.585812 kubelet[1674]: I1213 14:27:00.585769 1674 apiserver.go:52] "Watching apiserver" Dec 13 14:27:00.595923 kubelet[1674]: I1213 14:27:00.595892 1674 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:27:00.658409 kubelet[1674]: E1213 14:27:00.658359 1674 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:00.709590 kubelet[1674]: E1213 14:27:00.709466 1674 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1810c2c6c96f25e5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:26:55.576458725 +0000 UTC m=+0.518802190,LastTimestamp:2024-12-13 14:26:55.576458725 +0000 UTC m=+0.518802190,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:27:00.780489 kubelet[1674]: E1213 14:27:00.780375 1674 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1810c2c6caa4a310 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:26:55.596741392 +0000 UTC m=+0.539084857,LastTimestamp:2024-12-13 14:26:55.596741392 +0000 UTC m=+0.539084857,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:27:00.985306 kubelet[1674]: E1213 14:27:00.985269 1674 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 14:27:01.351593 kubelet[1674]: E1213 14:27:01.351551 1674 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 14:27:01.735339 kubelet[1674]: E1213 14:27:01.735179 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:01.791859 kubelet[1674]: E1213 14:27:01.791822 1674 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:01.802861 kubelet[1674]: E1213 14:27:01.802831 1674 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 14:27:01.811086 kubelet[1674]: E1213 14:27:01.811059 1674 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Dec 13 14:27:01.924096 kubelet[1674]: I1213 14:27:01.924061 1674 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:27:01.928511 kubelet[1674]: I1213 14:27:01.928473 1674 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:27:02.492352 systemd[1]: Reloading. 
Dec 13 14:27:02.551551 /usr/lib/systemd/system-generators/torcx-generator[1975]: time="2024-12-13T14:27:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:27:02.551588 /usr/lib/systemd/system-generators/torcx-generator[1975]: time="2024-12-13T14:27:02Z" level=info msg="torcx already run" Dec 13 14:27:02.617819 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:27:02.617837 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:27:02.635138 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:27:02.726653 systemd[1]: Stopping kubelet.service... Dec 13 14:27:02.746588 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:27:02.746796 systemd[1]: Stopped kubelet.service. Dec 13 14:27:02.749611 systemd[1]: Starting kubelet.service... Dec 13 14:27:02.839442 systemd[1]: Started kubelet.service. Dec 13 14:27:02.875651 kubelet[2019]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:27:02.875651 kubelet[2019]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:27:02.875651 kubelet[2019]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:27:02.876202 kubelet[2019]: I1213 14:27:02.875681 2019 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:27:02.879806 kubelet[2019]: I1213 14:27:02.879777 2019 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 14:27:02.879806 kubelet[2019]: I1213 14:27:02.879801 2019 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:27:02.880481 kubelet[2019]: I1213 14:27:02.880460 2019 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 14:27:02.882157 kubelet[2019]: I1213 14:27:02.882116 2019 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:27:02.883563 kubelet[2019]: I1213 14:27:02.883537 2019 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:27:02.889626 kubelet[2019]: I1213 14:27:02.889603 2019 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:27:02.889861 kubelet[2019]: I1213 14:27:02.889820 2019 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:27:02.890005 kubelet[2019]: I1213 14:27:02.889853 2019 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:27:02.890095 kubelet[2019]: I1213 14:27:02.890013 2019 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:27:02.890095 kubelet[2019]: I1213 14:27:02.890021 2019 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:27:02.890095 kubelet[2019]: I1213 14:27:02.890056 2019 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:27:02.890167 kubelet[2019]: I1213 14:27:02.890145 2019 kubelet.go:400] "Attempting to sync node with API server" Dec 13 14:27:02.890167 kubelet[2019]: I1213 14:27:02.890158 2019 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:27:02.890245 kubelet[2019]: I1213 14:27:02.890179 2019 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:27:02.890245 kubelet[2019]: I1213 14:27:02.890209 2019 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:27:02.891007 kubelet[2019]: I1213 14:27:02.890985 2019 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:27:02.891131 kubelet[2019]: I1213 14:27:02.891110 2019 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:27:02.892945 kubelet[2019]: I1213 14:27:02.892919 2019 server.go:1264] "Started kubelet" Dec 13 14:27:02.894483 kubelet[2019]: I1213 14:27:02.894462 2019 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:27:02.895720 kubelet[2019]: I1213 14:27:02.895696 2019 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:27:02.895984 kubelet[2019]: I1213 14:27:02.895913 2019 volume_manager.go:291] "Starting Kubelet 
Volume Manager" Dec 13 14:27:02.896042 kubelet[2019]: I1213 14:27:02.896009 2019 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 14:27:02.896144 kubelet[2019]: I1213 14:27:02.896124 2019 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:27:02.896620 kubelet[2019]: E1213 14:27:02.896594 2019 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:27:02.896774 kubelet[2019]: I1213 14:27:02.896725 2019 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:27:02.902676 kubelet[2019]: I1213 14:27:02.902655 2019 server.go:455] "Adding debug handlers to kubelet server" Dec 13 14:27:02.902862 kubelet[2019]: I1213 14:27:02.902838 2019 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:27:02.906229 kubelet[2019]: I1213 14:27:02.906160 2019 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:27:02.908965 kubelet[2019]: I1213 14:27:02.908936 2019 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:27:02.908965 kubelet[2019]: I1213 14:27:02.908963 2019 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:27:02.922804 kubelet[2019]: I1213 14:27:02.922553 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:27:02.925392 kubelet[2019]: I1213 14:27:02.925355 2019 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:27:02.925485 kubelet[2019]: I1213 14:27:02.925399 2019 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:27:02.925485 kubelet[2019]: I1213 14:27:02.925426 2019 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 14:27:02.925557 kubelet[2019]: E1213 14:27:02.925492 2019 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:27:02.946153 kubelet[2019]: I1213 14:27:02.946130 2019 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:27:02.946339 kubelet[2019]: I1213 14:27:02.946322 2019 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:27:02.946440 kubelet[2019]: I1213 14:27:02.946424 2019 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:27:02.946648 kubelet[2019]: I1213 14:27:02.946632 2019 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:27:02.946777 kubelet[2019]: I1213 14:27:02.946724 2019 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:27:02.946883 kubelet[2019]: I1213 14:27:02.946865 2019 policy_none.go:49] "None policy: Start" Dec 13 14:27:02.947682 kubelet[2019]: I1213 14:27:02.947662 2019 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:27:02.947682 kubelet[2019]: I1213 14:27:02.947684 2019 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:27:02.947822 kubelet[2019]: I1213 14:27:02.947807 2019 state_mem.go:75] "Updated machine memory state" Dec 13 14:27:02.951845 kubelet[2019]: I1213 14:27:02.951829 2019 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:27:02.952081 
kubelet[2019]: I1213 14:27:02.952053 2019 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:27:02.952270 kubelet[2019]: I1213 14:27:02.952257 2019 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:27:03.000311 kubelet[2019]: I1213 14:27:03.000179 2019 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:27:03.005853 kubelet[2019]: I1213 14:27:03.005808 2019 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 14:27:03.005971 kubelet[2019]: I1213 14:27:03.005893 2019 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:27:03.025946 kubelet[2019]: I1213 14:27:03.025898 2019 topology_manager.go:215] "Topology Admit Handler" podUID="9899c8bc7b21b75779f86ab400ba045e" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:27:03.026086 kubelet[2019]: I1213 14:27:03.025985 2019 topology_manager.go:215] "Topology Admit Handler" podUID="8a50003978138b3ab9890682eff4eae8" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:27:03.026086 kubelet[2019]: I1213 14:27:03.026059 2019 topology_manager.go:215] "Topology Admit Handler" podUID="b107a98bcf27297d642d248711a3fc70" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:27:03.097295 kubelet[2019]: I1213 14:27:03.097171 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9899c8bc7b21b75779f86ab400ba045e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9899c8bc7b21b75779f86ab400ba045e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:27:03.197869 kubelet[2019]: I1213 14:27:03.197822 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:27:03.197869 kubelet[2019]: I1213 14:27:03.197865 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:27:03.198066 kubelet[2019]: I1213 14:27:03.197891 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:27:03.198066 kubelet[2019]: I1213 14:27:03.197935 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9899c8bc7b21b75779f86ab400ba045e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9899c8bc7b21b75779f86ab400ba045e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:27:03.198066 kubelet[2019]: I1213 14:27:03.197967 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:27:03.198066 kubelet[2019]: I1213 14:27:03.198019 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a50003978138b3ab9890682eff4eae8-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8a50003978138b3ab9890682eff4eae8\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:27:03.198172 kubelet[2019]: I1213 14:27:03.198066 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b107a98bcf27297d642d248711a3fc70-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b107a98bcf27297d642d248711a3fc70\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:27:03.198172 kubelet[2019]: I1213 14:27:03.198094 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9899c8bc7b21b75779f86ab400ba045e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9899c8bc7b21b75779f86ab400ba045e\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:27:03.363065 kubelet[2019]: E1213 14:27:03.363002 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:03.363333 kubelet[2019]: E1213 14:27:03.363289 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:03.363585 kubelet[2019]: E1213 14:27:03.363539 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:03.490603 sudo[2056]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:27:03.490884 sudo[2056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:27:03.892103 kubelet[2019]: I1213 14:27:03.891274 2019 apiserver.go:52] "Watching apiserver" Dec 13 14:27:03.896973 kubelet[2019]: I1213 14:27:03.896920 2019 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 14:27:03.933517 kubelet[2019]: E1213 14:27:03.933479 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:03.934268 kubelet[2019]: E1213 14:27:03.933757 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:03.940691 kubelet[2019]: E1213 14:27:03.940648 2019 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 14:27:03.941063 kubelet[2019]: E1213 14:27:03.941042 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 
14:27:04.071587 sudo[2056]: pam_unix(sudo:session): session closed for user root Dec 13 14:27:04.194882 kubelet[2019]: I1213 14:27:04.193922 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.193902732 podStartE2EDuration="1.193902732s" podCreationTimestamp="2024-12-13 14:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:04.012741896 +0000 UTC m=+1.169601962" watchObservedRunningTime="2024-12-13 14:27:04.193902732 +0000 UTC m=+1.350762799" Dec 13 14:27:04.244356 kubelet[2019]: I1213 14:27:04.244305 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.244290327 podStartE2EDuration="1.244290327s" podCreationTimestamp="2024-12-13 14:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:04.244157985 +0000 UTC m=+1.401018051" watchObservedRunningTime="2024-12-13 14:27:04.244290327 +0000 UTC m=+1.401150383" Dec 13 14:27:04.244588 kubelet[2019]: I1213 14:27:04.244369 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.244365872 podStartE2EDuration="1.244365872s" podCreationTimestamp="2024-12-13 14:27:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:04.194641514 +0000 UTC m=+1.351501580" watchObservedRunningTime="2024-12-13 14:27:04.244365872 +0000 UTC m=+1.401225938" Dec 13 14:27:05.097908 kubelet[2019]: E1213 14:27:05.097599 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:05.098437 kubelet[2019]: E1213 14:27:05.098309 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:05.760270 sudo[1305]: pam_unix(sudo:session): session closed for user root Dec 13 14:27:05.761889 sshd[1302]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:05.764917 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:38798.service: Deactivated successfully. Dec 13 14:27:05.765938 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:27:05.766119 systemd[1]: session-5.scope: Consumed 4.242s CPU time. Dec 13 14:27:05.766898 systemd-logind[1191]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:27:05.767765 systemd-logind[1191]: Removed session 5. Dec 13 14:27:05.937883 kubelet[2019]: E1213 14:27:05.937785 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:11.312590 kubelet[2019]: E1213 14:27:11.308819 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:11.434209 update_engine[1194]: I1213 14:27:11.434145 1194 update_attempter.cc:509] Updating boot flags... 
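The recurring dns.go:153 "Nameserver limits exceeded" records mean the host resolv.conf lists more nameservers than the resolver limit of three, so only the first three (1.1.1.1 1.0.0.1 8.8.8.8) are applied. A small sketch of that trimming; the parsing helper and the four-entry resolv.conf are assumptions, not kubelet's dns.go.

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // trimNameservers keeps at most max nameserver entries from resolv.conf
    // content, mirroring the "applied nameserver line is: 1.1.1.1 1.0.0.1
    // 8.8.8.8" records above. Illustrative only.
    func trimNameservers(resolvConf string, max int) []string {
        var ns []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) == 2 && fields[0] == "nameserver" && len(ns) < max {
                ns = append(ns, fields[1])
            }
        }
        return ns
    }

    func main() {
        // Assumed host resolv.conf with four entries; only three are applied.
        conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
        fmt.Println(strings.Join(trimNameservers(conf, 3), " "))
    }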
Dec 13 14:27:11.947416 kubelet[2019]: E1213 14:27:11.947374 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:14.300854 kubelet[2019]: E1213 14:27:14.300815 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:14.831895 kubelet[2019]: E1213 14:27:14.831840 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:14.953522 kubelet[2019]: E1213 14:27:14.953471 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:17.155363 kubelet[2019]: I1213 14:27:17.155317 2019 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:27:17.155766 env[1204]: time="2024-12-13T14:27:17.155723951Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:27:17.155974 kubelet[2019]: I1213 14:27:17.155948 2019 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:27:18.252116 kubelet[2019]: I1213 14:27:18.252074 2019 topology_manager.go:215] "Topology Admit Handler" podUID="08aa555b-c5ba-4780-ba1e-d4627441dc3e" podNamespace="kube-system" podName="kube-proxy-2j9gx" Dec 13 14:27:18.257981 systemd[1]: Created slice kubepods-besteffort-pod08aa555b_c5ba_4780_ba1e_d4627441dc3e.slice. 
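The "Updating runtime config through cri with podcidr" record above corresponds to a single CRI UpdateRuntimeConfig call handing the node's pod CIDR to the runtime, after which containerd waits for a CNI config to appear ("No cni config template is specified, wait for other system components to drop the config."). A hedged sketch of that call; the socket path is assumed as before.

    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Push the pod CIDR from the kuberuntime_manager record above to the runtime.
        _, err = runtimeapi.NewRuntimeServiceClient(conn).UpdateRuntimeConfig(context.Background(),
            &runtimeapi.UpdateRuntimeConfigRequest{
                RuntimeConfig: &runtimeapi.RuntimeConfig{
                    NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
                },
            })
        if err != nil {
            log.Fatal(err)
        }
    }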
Dec 13 14:27:18.285414 kubelet[2019]: I1213 14:27:18.285361 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08aa555b-c5ba-4780-ba1e-d4627441dc3e-lib-modules\") pod \"kube-proxy-2j9gx\" (UID: \"08aa555b-c5ba-4780-ba1e-d4627441dc3e\") " pod="kube-system/kube-proxy-2j9gx" Dec 13 14:27:18.285414 kubelet[2019]: I1213 14:27:18.285412 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/08aa555b-c5ba-4780-ba1e-d4627441dc3e-kube-proxy\") pod \"kube-proxy-2j9gx\" (UID: \"08aa555b-c5ba-4780-ba1e-d4627441dc3e\") " pod="kube-system/kube-proxy-2j9gx" Dec 13 14:27:18.285642 kubelet[2019]: I1213 14:27:18.285432 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08aa555b-c5ba-4780-ba1e-d4627441dc3e-xtables-lock\") pod \"kube-proxy-2j9gx\" (UID: \"08aa555b-c5ba-4780-ba1e-d4627441dc3e\") " pod="kube-system/kube-proxy-2j9gx" Dec 13 14:27:18.285642 kubelet[2019]: I1213 14:27:18.285454 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94vrc\" (UniqueName: \"kubernetes.io/projected/08aa555b-c5ba-4780-ba1e-d4627441dc3e-kube-api-access-94vrc\") pod \"kube-proxy-2j9gx\" (UID: \"08aa555b-c5ba-4780-ba1e-d4627441dc3e\") " pod="kube-system/kube-proxy-2j9gx" Dec 13 14:27:18.286803 kubelet[2019]: I1213 14:27:18.286768 2019 topology_manager.go:215] "Topology Admit Handler" podUID="587a2f3f-83f7-4a9d-980a-aebaa9c8af99" podNamespace="kube-system" podName="cilium-slrh5" Dec 13 14:27:18.287089 kubelet[2019]: I1213 14:27:18.287052 2019 topology_manager.go:215] "Topology Admit Handler" podUID="411bb671-8fda-447c-b7bf-4ce1cad51aad" podNamespace="kube-system" podName="cilium-operator-599987898-mzzpp" Dec 13 14:27:18.298825 systemd[1]: Created slice kubepods-besteffort-pod411bb671_8fda_447c_b7bf_4ce1cad51aad.slice. Dec 13 14:27:18.310665 systemd[1]: Created slice kubepods-burstable-pod587a2f3f_83f7_4a9d_980a_aebaa9c8af99.slice. 
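The "Created slice" records show the pod-UID-to-cgroup mapping directly: the QoS class selects the parent (kubepods-besteffort, kubepods-burstable), and the UID's dashes are escaped to underscores, since dashes in systemd slice unit names denote hierarchy. A sketch reproducing exactly the names observed here (an illustrative helper, not kubelet's cgroup manager):

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the unit names in the "Created slice" records
    // above: dashes in slice names encode the cgroup hierarchy, so the pod
    // UID's dashes are rewritten to underscores under the QoS parent slice.
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "08aa555b-c5ba-4780-ba1e-d4627441dc3e"))
        fmt.Println(podSliceName("burstable", "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"))
    }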
Dec 13 14:27:18.486698 kubelet[2019]: I1213 14:27:18.486631 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjs9g\" (UniqueName: \"kubernetes.io/projected/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-kube-api-access-wjs9g\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.486698 kubelet[2019]: I1213 14:27:18.486685 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-hubble-tls\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.486698 kubelet[2019]: I1213 14:27:18.486700 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-host-proc-sys-net\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.486698 kubelet[2019]: I1213 14:27:18.486716 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-host-proc-sys-kernel\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487037 kubelet[2019]: I1213 14:27:18.486734 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-cgroup\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487037 kubelet[2019]: I1213 14:27:18.486751 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cni-path\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487037 kubelet[2019]: I1213 14:27:18.486769 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-xtables-lock\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487037 kubelet[2019]: I1213 14:27:18.486784 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-config-path\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487037 kubelet[2019]: I1213 14:27:18.486797 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/411bb671-8fda-447c-b7bf-4ce1cad51aad-cilium-config-path\") pod \"cilium-operator-599987898-mzzpp\" (UID: \"411bb671-8fda-447c-b7bf-4ce1cad51aad\") " pod="kube-system/cilium-operator-599987898-mzzpp" Dec 13 14:27:18.487337 kubelet[2019]: I1213 14:27:18.486809 2019 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-etc-cni-netd\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487337 kubelet[2019]: I1213 14:27:18.486821 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-lib-modules\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487337 kubelet[2019]: I1213 14:27:18.486859 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-hostproc\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487337 kubelet[2019]: I1213 14:27:18.486902 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-clustermesh-secrets\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487337 kubelet[2019]: I1213 14:27:18.486925 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk7m5\" (UniqueName: \"kubernetes.io/projected/411bb671-8fda-447c-b7bf-4ce1cad51aad-kube-api-access-hk7m5\") pod \"cilium-operator-599987898-mzzpp\" (UID: \"411bb671-8fda-447c-b7bf-4ce1cad51aad\") " pod="kube-system/cilium-operator-599987898-mzzpp" Dec 13 14:27:18.487501 kubelet[2019]: I1213 14:27:18.486945 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-run\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.487501 kubelet[2019]: I1213 14:27:18.486962 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-bpf-maps\") pod \"cilium-slrh5\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") " pod="kube-system/cilium-slrh5" Dec 13 14:27:18.584495 kubelet[2019]: E1213 14:27:18.584313 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:18.584964 env[1204]: time="2024-12-13T14:27:18.584911701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2j9gx,Uid:08aa555b-c5ba-4780-ba1e-d4627441dc3e,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:18.603402 kubelet[2019]: E1213 14:27:18.603355 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:18.604844 env[1204]: time="2024-12-13T14:27:18.603905787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mzzpp,Uid:411bb671-8fda-447c-b7bf-4ce1cad51aad,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:18.613210 env[1204]: 
time="2024-12-13T14:27:18.613125646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:18.613210 env[1204]: time="2024-12-13T14:27:18.613186221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:18.613389 env[1204]: time="2024-12-13T14:27:18.613201479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:18.613416 env[1204]: time="2024-12-13T14:27:18.613388072Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bf46db646d33ed293c0fa6d5bd034ee61e3c1402574cb1c98db252bca1cf02b pid=2132 runtime=io.containerd.runc.v2 Dec 13 14:27:18.616732 kubelet[2019]: E1213 14:27:18.615313 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:18.617824 env[1204]: time="2024-12-13T14:27:18.615811561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-slrh5,Uid:587a2f3f-83f7-4a9d-980a-aebaa9c8af99,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:18.626183 systemd[1]: Started cri-containerd-4bf46db646d33ed293c0fa6d5bd034ee61e3c1402574cb1c98db252bca1cf02b.scope. Dec 13 14:27:18.631072 env[1204]: time="2024-12-13T14:27:18.630983877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:18.631072 env[1204]: time="2024-12-13T14:27:18.631039421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:18.631072 env[1204]: time="2024-12-13T14:27:18.631049802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:18.631317 env[1204]: time="2024-12-13T14:27:18.631254318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87 pid=2160 runtime=io.containerd.runc.v2 Dec 13 14:27:18.644318 env[1204]: time="2024-12-13T14:27:18.642987256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:18.644318 env[1204]: time="2024-12-13T14:27:18.643080522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:18.644318 env[1204]: time="2024-12-13T14:27:18.643092143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:18.644318 env[1204]: time="2024-12-13T14:27:18.643536403Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1 pid=2194 runtime=io.containerd.runc.v2 Dec 13 14:27:18.650543 systemd[1]: Started cri-containerd-11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87.scope. 
Dec 13 14:27:18.655288 env[1204]: time="2024-12-13T14:27:18.655252629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2j9gx,Uid:08aa555b-c5ba-4780-ba1e-d4627441dc3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bf46db646d33ed293c0fa6d5bd034ee61e3c1402574cb1c98db252bca1cf02b\"" Dec 13 14:27:18.656554 kubelet[2019]: E1213 14:27:18.656013 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:18.657525 systemd[1]: Started cri-containerd-a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1.scope. Dec 13 14:27:18.662571 env[1204]: time="2024-12-13T14:27:18.662011029Z" level=info msg="CreateContainer within sandbox \"4bf46db646d33ed293c0fa6d5bd034ee61e3c1402574cb1c98db252bca1cf02b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:27:18.680858 env[1204]: time="2024-12-13T14:27:18.680809024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-slrh5,Uid:587a2f3f-83f7-4a9d-980a-aebaa9c8af99,Namespace:kube-system,Attempt:0,} returns sandbox id \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\"" Dec 13 14:27:18.681479 kubelet[2019]: E1213 14:27:18.681454 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:18.686430 env[1204]: time="2024-12-13T14:27:18.686381813Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:27:18.688709 env[1204]: time="2024-12-13T14:27:18.687592640Z" level=info msg="CreateContainer within sandbox \"4bf46db646d33ed293c0fa6d5bd034ee61e3c1402574cb1c98db252bca1cf02b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"66399532be3ab0d1155b54bdf04644164c0b74f20dbc5069f0951d903625a26a\"" Dec 13 14:27:18.688709 env[1204]: time="2024-12-13T14:27:18.688672771Z" level=info msg="StartContainer for \"66399532be3ab0d1155b54bdf04644164c0b74f20dbc5069f0951d903625a26a\"" Dec 13 14:27:18.695927 env[1204]: time="2024-12-13T14:27:18.695874387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mzzpp,Uid:411bb671-8fda-447c-b7bf-4ce1cad51aad,Namespace:kube-system,Attempt:0,} returns sandbox id \"11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87\"" Dec 13 14:27:18.697879 kubelet[2019]: E1213 14:27:18.697841 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:18.710158 systemd[1]: Started cri-containerd-66399532be3ab0d1155b54bdf04644164c0b74f20dbc5069f0951d903625a26a.scope. 
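
Note: the recurring "Nameserver limits exceeded" entries are warnings rather than failures: the node's /etc/resolv.conf apparently lists more than three nameservers, and the kubelet keeps only the first three (the classic glibc MAXNS limit), which is why the applied line shrinks to "1.1.1.1 1.0.0.1 8.8.8.8". A rough sketch of that truncation, assuming the conventional resolv.conf format (not the kubelet's actual implementation):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the classic glibc limit of three resolvers.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("dropping %d nameserver(s); applying: %s\n",
                len(servers)-maxNameservers,
                strings.Join(servers[:maxNameservers], " "))
        } else {
            fmt.Println("applying:", strings.Join(servers, " "))
        }
    }
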
Dec 13 14:27:18.740884 env[1204]: time="2024-12-13T14:27:18.740803134Z" level=info msg="StartContainer for \"66399532be3ab0d1155b54bdf04644164c0b74f20dbc5069f0951d903625a26a\" returns successfully" Dec 13 14:27:18.963357 kubelet[2019]: E1213 14:27:18.963313 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:18.973435 kubelet[2019]: I1213 14:27:18.973279 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2j9gx" podStartSLOduration=0.973209049 podStartE2EDuration="973.209049ms" podCreationTimestamp="2024-12-13 14:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:18.972957334 +0000 UTC m=+16.129817400" watchObservedRunningTime="2024-12-13 14:27:18.973209049 +0000 UTC m=+16.130069105" Dec 13 14:27:26.049410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538312802.mount: Deactivated successfully. Dec 13 14:27:31.850955 env[1204]: time="2024-12-13T14:27:31.850661276Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:31.853751 env[1204]: time="2024-12-13T14:27:31.853696396Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:31.856210 env[1204]: time="2024-12-13T14:27:31.856167405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:31.856785 env[1204]: time="2024-12-13T14:27:31.856752747Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:27:31.860170 env[1204]: time="2024-12-13T14:27:31.860131644Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:27:31.869860 env[1204]: time="2024-12-13T14:27:31.869808290Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:27:31.885532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1744935674.mount: Deactivated successfully. Dec 13 14:27:31.887732 env[1204]: time="2024-12-13T14:27:31.887627164Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\"" Dec 13 14:27:31.888428 env[1204]: time="2024-12-13T14:27:31.888362828Z" level=info msg="StartContainer for \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\"" Dec 13 14:27:31.907336 systemd[1]: Started cri-containerd-7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d.scope. 
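
Note: the pod_startup_latency_tracker entry above reports podStartSLOduration=973.209049ms for kube-proxy. Both pull timestamps are the zero time (the image was already on the node), so no pull time is subtracted and the SLO duration is simply the observed running time minus the pod creation timestamp. The arithmetic, checked with Go's standard library (the layout string matches time.Time.String() output, which is the format these log fields use):

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2024-12-13 14:27:18 +0000 UTC")
        running := mustParse("2024-12-13 14:27:18.973209049 +0000 UTC")
        // No image pull to subtract, so SLO duration == running - created.
        fmt.Println(running.Sub(created)) // 973.209049ms
    }
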
Dec 13 14:27:31.941131 env[1204]: time="2024-12-13T14:27:31.941065593Z" level=info msg="StartContainer for \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\" returns successfully" Dec 13 14:27:31.952692 systemd[1]: cri-containerd-7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d.scope: Deactivated successfully. Dec 13 14:27:31.994095 kubelet[2019]: E1213 14:27:31.994023 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:32.882889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d-rootfs.mount: Deactivated successfully. Dec 13 14:27:33.951645 kubelet[2019]: E1213 14:27:32.995096 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:34.682314 env[1204]: time="2024-12-13T14:27:34.682246332Z" level=info msg="shim disconnected" id=7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d Dec 13 14:27:34.682314 env[1204]: time="2024-12-13T14:27:34.682302397Z" level=warning msg="cleaning up after shim disconnected" id=7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d namespace=k8s.io Dec 13 14:27:34.682314 env[1204]: time="2024-12-13T14:27:34.682311815Z" level=info msg="cleaning up dead shim" Dec 13 14:27:34.690472 env[1204]: time="2024-12-13T14:27:34.690403821Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2458 runtime=io.containerd.runc.v2\n" Dec 13 14:27:34.722521 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:60640.service. Dec 13 14:27:34.770678 sshd[2472]: Accepted publickey for core from 10.0.0.1 port 60640 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:27:34.772254 sshd[2472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:34.779396 systemd-logind[1191]: New session 6 of user core. Dec 13 14:27:34.780365 systemd[1]: Started session-6.scope. Dec 13 14:27:34.922572 sshd[2472]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:34.925600 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:60640.service: Deactivated successfully. Dec 13 14:27:34.926561 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:27:34.927530 systemd-logind[1191]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:27:34.928491 systemd-logind[1191]: Removed session 6. 
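
Note: unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount2538312802.mount and run-containerd-io.containerd.runtime.v2.task-k8s.io-...-rootfs.mount are systemd path escapes of ordinary filesystem paths: slashes become dashes, and literal dashes are hex-escaped. A small Go approximation of `systemd-escape --path` (real systemd also special-cases leading dots and collapses repeated slashes):

    package main

    import "fmt"

    // escapePath approximates `systemd-escape --path`: the leading slash
    // is dropped, remaining slashes become "-", and any byte outside
    // [A-Za-z0-9:_.] (notably a literal "-") becomes \xXX. This is why
    // /var/lib/containerd/tmpmounts/containerd-mount2538312802 appears
    // in the log as var-...-containerd\x2dmount2538312802.mount.
    func escapePath(p string) string {
        if len(p) > 0 && p[0] == '/' {
            p = p[1:]
        }
        out := make([]byte, 0, len(p))
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                out = append(out, '-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                out = append(out, c)
            default:
                out = append(out, []byte(fmt.Sprintf(`\x%02x`, c))...)
            }
        }
        return string(out)
    }

    func main() {
        fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2538312802") + ".mount")
    }
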
Dec 13 14:27:35.001330 kubelet[2019]: E1213 14:27:35.001165 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:35.004453 env[1204]: time="2024-12-13T14:27:35.003943581Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:27:35.033078 env[1204]: time="2024-12-13T14:27:35.032903236Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\"" Dec 13 14:27:35.033791 env[1204]: time="2024-12-13T14:27:35.033733928Z" level=info msg="StartContainer for \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\"" Dec 13 14:27:35.054610 systemd[1]: Started cri-containerd-38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d.scope. Dec 13 14:27:35.146876 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:27:35.147082 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:27:35.147279 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:27:35.148722 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:27:35.151087 systemd[1]: cri-containerd-38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d.scope: Deactivated successfully. Dec 13 14:27:35.156709 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:27:35.176149 env[1204]: time="2024-12-13T14:27:35.176083409Z" level=info msg="StartContainer for \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\" returns successfully" Dec 13 14:27:35.209749 env[1204]: time="2024-12-13T14:27:35.209692715Z" level=info msg="shim disconnected" id=38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d Dec 13 14:27:35.209749 env[1204]: time="2024-12-13T14:27:35.209745585Z" level=warning msg="cleaning up after shim disconnected" id=38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d namespace=k8s.io Dec 13 14:27:35.209749 env[1204]: time="2024-12-13T14:27:35.209754361Z" level=info msg="cleaning up dead shim" Dec 13 14:27:35.216117 env[1204]: time="2024-12-13T14:27:35.216070716Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2536 runtime=io.containerd.runc.v2\n" Dec 13 14:27:36.015363 kubelet[2019]: E1213 14:27:36.015311 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:36.020850 env[1204]: time="2024-12-13T14:27:36.020801476Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:27:36.023920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d-rootfs.mount: Deactivated successfully. Dec 13 14:27:36.034787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989592065.mount: Deactivated successfully. 
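
Note: apply-sysctl-overwrites is the Cilium init step that rewrites kernel parameters for the datapath, and the systemd-sysctl stop/start dance around it is the host re-applying its own sysctl configuration afterwards. Kernel parameters are just files under /proc/sys; a tiny reader sketch (rp_filter is chosen only as a commonly tuned example, not taken from this log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // readSysctl reads one kernel parameter via /proc/sys, the same
    // interface systemd-sysctl and container init steps ultimately use:
    // a dotted name maps to a slash-separated path.
    func readSysctl(name string) (string, error) {
        path := "/proc/sys/" + strings.ReplaceAll(name, ".", "/")
        b, err := os.ReadFile(path)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := readSysctl("net.ipv4.conf.all.rp_filter")
        if err != nil {
            panic(err)
        }
        fmt.Println("net.ipv4.conf.all.rp_filter =", v)
    }
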
Dec 13 14:27:36.079451 env[1204]: time="2024-12-13T14:27:36.079377103Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\"" Dec 13 14:27:36.080173 env[1204]: time="2024-12-13T14:27:36.080003681Z" level=info msg="StartContainer for \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\"" Dec 13 14:27:36.096361 systemd[1]: Started cri-containerd-c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09.scope. Dec 13 14:27:36.126022 systemd[1]: cri-containerd-c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09.scope: Deactivated successfully. Dec 13 14:27:36.128916 env[1204]: time="2024-12-13T14:27:36.128875767Z" level=info msg="StartContainer for \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\" returns successfully" Dec 13 14:27:36.165587 env[1204]: time="2024-12-13T14:27:36.165525259Z" level=info msg="shim disconnected" id=c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09 Dec 13 14:27:36.165587 env[1204]: time="2024-12-13T14:27:36.165580813Z" level=warning msg="cleaning up after shim disconnected" id=c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09 namespace=k8s.io Dec 13 14:27:36.165587 env[1204]: time="2024-12-13T14:27:36.165595090Z" level=info msg="cleaning up dead shim" Dec 13 14:27:36.173477 env[1204]: time="2024-12-13T14:27:36.173419668Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2595 runtime=io.containerd.runc.v2\n" Dec 13 14:27:37.010189 kubelet[2019]: E1213 14:27:37.010149 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:37.012145 env[1204]: time="2024-12-13T14:27:37.012088039Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:27:37.337871 env[1204]: time="2024-12-13T14:27:37.337830300Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:37.340754 env[1204]: time="2024-12-13T14:27:37.340710493Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:37.342803 env[1204]: time="2024-12-13T14:27:37.342749435Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:27:37.343033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77494302.mount: Deactivated successfully. 
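
Note: mount-bpf-fs ensures a BPF filesystem is mounted (conventionally at /sys/fs/bpf) so that pinned maps and programs survive agent restarts. A sketch that checks /proc/mounts for it:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            // Each line: <source> <mountpoint> <fstype> <options> ...
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[2] == "bpf" {
                fmt.Println("bpffs mounted at", fields[1])
                return
            }
        }
        fmt.Println("no bpf filesystem mounted")
    }
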
Dec 13 14:27:37.344088 env[1204]: time="2024-12-13T14:27:37.344045451Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:27:37.346485 env[1204]: time="2024-12-13T14:27:37.346420184Z" level=info msg="CreateContainer within sandbox \"11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:27:37.349401 env[1204]: time="2024-12-13T14:27:37.349318210Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\"" Dec 13 14:27:37.349862 env[1204]: time="2024-12-13T14:27:37.349819893Z" level=info msg="StartContainer for \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\"" Dec 13 14:27:37.365055 systemd[1]: Started cri-containerd-847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a.scope. Dec 13 14:27:37.365702 env[1204]: time="2024-12-13T14:27:37.365648754Z" level=info msg="CreateContainer within sandbox \"11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\"" Dec 13 14:27:37.366855 env[1204]: time="2024-12-13T14:27:37.366823571Z" level=info msg="StartContainer for \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\"" Dec 13 14:27:37.387105 systemd[1]: Started cri-containerd-ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7.scope. Dec 13 14:27:37.399513 systemd[1]: cri-containerd-847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a.scope: Deactivated successfully. 
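
Note: the PullImage references above combine a tag and a digest (repo:tag@digest); with a digest present, content is pinned by hash and the tag is effectively informational, which is why the pull "returns image reference" as a sha256 image ID. An illustrative parser for that reference shape (splitRef is a hypothetical helper, not containerd's resolver):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks an image reference of the form repo[:tag][@digest]
    // into its parts.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        // A ":" after the last "/" separates the tag from the repository;
        // earlier colons (e.g. registry ports) must be left alone.
        if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        fmt.Println(splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"))
    }
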
Dec 13 14:27:37.401009 env[1204]: time="2024-12-13T14:27:37.400962044Z" level=info msg="StartContainer for \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\" returns successfully" Dec 13 14:27:37.423494 env[1204]: time="2024-12-13T14:27:37.423411216Z" level=info msg="StartContainer for \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\" returns successfully" Dec 13 14:27:37.634527 env[1204]: time="2024-12-13T14:27:37.634337885Z" level=info msg="shim disconnected" id=847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a Dec 13 14:27:37.634527 env[1204]: time="2024-12-13T14:27:37.634401835Z" level=warning msg="cleaning up after shim disconnected" id=847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a namespace=k8s.io Dec 13 14:27:37.634527 env[1204]: time="2024-12-13T14:27:37.634415090Z" level=info msg="cleaning up dead shim" Dec 13 14:27:37.643171 env[1204]: time="2024-12-13T14:27:37.643127845Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:27:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2687 runtime=io.containerd.runc.v2\n" Dec 13 14:27:38.012870 kubelet[2019]: E1213 14:27:38.012450 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:38.015272 kubelet[2019]: E1213 14:27:38.015253 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:38.018173 env[1204]: time="2024-12-13T14:27:38.018134011Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:27:38.024491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a-rootfs.mount: Deactivated successfully. Dec 13 14:27:38.046462 env[1204]: time="2024-12-13T14:27:38.046281996Z" level=info msg="CreateContainer within sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\"" Dec 13 14:27:38.046904 env[1204]: time="2024-12-13T14:27:38.046850043Z" level=info msg="StartContainer for \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\"" Dec 13 14:27:38.069722 kubelet[2019]: I1213 14:27:38.068666 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-mzzpp" podStartSLOduration=1.4221704210000001 podStartE2EDuration="20.068649198s" podCreationTimestamp="2024-12-13 14:27:18 +0000 UTC" firstStartedPulling="2024-12-13 14:27:18.69842684 +0000 UTC m=+15.855286906" lastFinishedPulling="2024-12-13 14:27:37.344905617 +0000 UTC m=+34.501765683" observedRunningTime="2024-12-13 14:27:38.045382967 +0000 UTC m=+35.202243043" watchObservedRunningTime="2024-12-13 14:27:38.068649198 +0000 UTC m=+35.225509264" Dec 13 14:27:38.071801 systemd[1]: Started cri-containerd-62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848.scope. 
Dec 13 14:27:38.124650 env[1204]: time="2024-12-13T14:27:38.124583441Z" level=info msg="StartContainer for \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\" returns successfully" Dec 13 14:27:38.267027 kubelet[2019]: I1213 14:27:38.265580 2019 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:27:38.292319 kubelet[2019]: I1213 14:27:38.292255 2019 topology_manager.go:215] "Topology Admit Handler" podUID="09bd7c57-6474-4362-b82c-39263c5ace80" podNamespace="kube-system" podName="coredns-7db6d8ff4d-829f8" Dec 13 14:27:38.293639 kubelet[2019]: I1213 14:27:38.293606 2019 topology_manager.go:215] "Topology Admit Handler" podUID="88c6f4d5-8741-406c-b355-5f715984c3f4" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qrg4c" Dec 13 14:27:38.300102 systemd[1]: Created slice kubepods-burstable-pod09bd7c57_6474_4362_b82c_39263c5ace80.slice. Dec 13 14:27:38.306974 systemd[1]: Created slice kubepods-burstable-pod88c6f4d5_8741_406c_b355_5f715984c3f4.slice. Dec 13 14:27:38.372193 kubelet[2019]: I1213 14:27:38.372146 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/09bd7c57-6474-4362-b82c-39263c5ace80-config-volume\") pod \"coredns-7db6d8ff4d-829f8\" (UID: \"09bd7c57-6474-4362-b82c-39263c5ace80\") " pod="kube-system/coredns-7db6d8ff4d-829f8" Dec 13 14:27:38.372537 kubelet[2019]: I1213 14:27:38.372494 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrg4t\" (UniqueName: \"kubernetes.io/projected/88c6f4d5-8741-406c-b355-5f715984c3f4-kube-api-access-rrg4t\") pod \"coredns-7db6d8ff4d-qrg4c\" (UID: \"88c6f4d5-8741-406c-b355-5f715984c3f4\") " pod="kube-system/coredns-7db6d8ff4d-qrg4c" Dec 13 14:27:38.372718 kubelet[2019]: I1213 14:27:38.372663 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pktzm\" (UniqueName: \"kubernetes.io/projected/09bd7c57-6474-4362-b82c-39263c5ace80-kube-api-access-pktzm\") pod \"coredns-7db6d8ff4d-829f8\" (UID: \"09bd7c57-6474-4362-b82c-39263c5ace80\") " pod="kube-system/coredns-7db6d8ff4d-829f8" Dec 13 14:27:38.372871 kubelet[2019]: I1213 14:27:38.372848 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/88c6f4d5-8741-406c-b355-5f715984c3f4-config-volume\") pod \"coredns-7db6d8ff4d-qrg4c\" (UID: \"88c6f4d5-8741-406c-b355-5f715984c3f4\") " pod="kube-system/coredns-7db6d8ff4d-qrg4c" Dec 13 14:27:39.020734 kubelet[2019]: E1213 14:27:39.020693 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:39.021194 kubelet[2019]: E1213 14:27:39.021050 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:39.402157 kubelet[2019]: I1213 14:27:39.402090 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-slrh5" podStartSLOduration=8.225954951 podStartE2EDuration="21.402071866s" podCreationTimestamp="2024-12-13 14:27:18 +0000 UTC" firstStartedPulling="2024-12-13 14:27:18.683771562 +0000 UTC m=+15.840631628" lastFinishedPulling="2024-12-13 14:27:31.859888477 +0000 UTC m=+29.016748543" 
observedRunningTime="2024-12-13 14:27:39.401982909 +0000 UTC m=+36.558842966" watchObservedRunningTime="2024-12-13 14:27:39.402071866 +0000 UTC m=+36.558931922" Dec 13 14:27:39.505476 kubelet[2019]: E1213 14:27:39.505429 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:39.506286 env[1204]: time="2024-12-13T14:27:39.506232802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-829f8,Uid:09bd7c57-6474-4362-b82c-39263c5ace80,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:39.510497 kubelet[2019]: E1213 14:27:39.510460 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:39.511018 env[1204]: time="2024-12-13T14:27:39.510954473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrg4c,Uid:88c6f4d5-8741-406c-b355-5f715984c3f4,Namespace:kube-system,Attempt:0,}" Dec 13 14:27:39.927508 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:43958.service. Dec 13 14:27:39.969857 sshd[2872]: Accepted publickey for core from 10.0.0.1 port 43958 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:27:39.971386 sshd[2872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:39.975934 systemd-logind[1191]: New session 7 of user core. Dec 13 14:27:39.977057 systemd[1]: Started session-7.scope. Dec 13 14:27:40.111334 sshd[2872]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:40.114598 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:43958.service: Deactivated successfully. Dec 13 14:27:40.115533 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:27:40.116263 systemd-logind[1191]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:27:40.117191 systemd-logind[1191]: Removed session 7. 
Dec 13 14:27:40.617753 kubelet[2019]: E1213 14:27:40.617649 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:41.745974 systemd-networkd[1028]: cilium_host: Link UP Dec 13 14:27:41.749797 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:27:41.749931 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:27:41.746852 systemd-networkd[1028]: cilium_net: Link UP Dec 13 14:27:41.750012 systemd-networkd[1028]: cilium_net: Gained carrier Dec 13 14:27:41.750258 systemd-networkd[1028]: cilium_host: Gained carrier Dec 13 14:27:41.750435 systemd-networkd[1028]: cilium_net: Gained IPv6LL Dec 13 14:27:41.750648 systemd-networkd[1028]: cilium_host: Gained IPv6LL Dec 13 14:27:41.846548 systemd-networkd[1028]: cilium_vxlan: Link UP Dec 13 14:27:41.846561 systemd-networkd[1028]: cilium_vxlan: Gained carrier Dec 13 14:27:42.073260 kernel: NET: Registered PF_ALG protocol family Dec 13 14:27:42.648151 systemd-networkd[1028]: lxc_health: Link UP Dec 13 14:27:42.658732 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:27:42.658112 systemd-networkd[1028]: lxc_health: Gained carrier Dec 13 14:27:42.830627 systemd-networkd[1028]: lxc85ca50808528: Link UP Dec 13 14:27:42.836260 kernel: eth0: renamed from tmpcac92 Dec 13 14:27:42.846632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:27:42.846770 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc85ca50808528: link becomes ready Dec 13 14:27:42.846997 systemd-networkd[1028]: lxc85ca50808528: Gained carrier Dec 13 14:27:42.855988 systemd-networkd[1028]: lxcac0d5cc24f8a: Link UP Dec 13 14:27:42.863252 kernel: eth0: renamed from tmp1df8c Dec 13 14:27:42.888532 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcac0d5cc24f8a: link becomes ready Dec 13 14:27:42.889301 systemd-networkd[1028]: lxcac0d5cc24f8a: Gained carrier Dec 13 14:27:43.590508 systemd-networkd[1028]: cilium_vxlan: Gained IPv6LL Dec 13 14:27:43.910405 systemd-networkd[1028]: lxc85ca50808528: Gained IPv6LL Dec 13 14:27:44.230403 systemd-networkd[1028]: lxc_health: Gained IPv6LL Dec 13 14:27:44.294397 systemd-networkd[1028]: lxcac0d5cc24f8a: Gained IPv6LL Dec 13 14:27:44.621337 kubelet[2019]: E1213 14:27:44.621288 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:45.131556 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:43978.service. Dec 13 14:27:45.239301 sshd[3263]: Accepted publickey for core from 10.0.0.1 port 43978 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:27:45.240701 sshd[3263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:45.244604 systemd-logind[1191]: New session 8 of user core. Dec 13 14:27:45.245424 systemd[1]: Started session-8.scope. Dec 13 14:27:45.362560 sshd[3263]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:45.364717 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:43978.service: Deactivated successfully. Dec 13 14:27:45.365495 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:27:45.366349 systemd-logind[1191]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:27:45.367177 systemd-logind[1191]: Removed session 8. 
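
Note: the cilium_host/cilium_net veth pair, the cilium_vxlan overlay device, and the per-endpoint lxc* interfaces brought up above are ordinary kernel links once ADDRCONF marks them ready. A quick way to enumerate them from Go's standard library:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // Lists any Cilium-related interfaces with their flags and MTU,
    // which is enough to confirm the devices from the log exist.
    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            panic(err)
        }
        for _, it := range ifaces {
            if strings.HasPrefix(it.Name, "cilium_") || strings.HasPrefix(it.Name, "lxc") {
                fmt.Printf("%-16s mtu=%-5d flags=%s\n", it.Name, it.MTU, it.Flags)
            }
        }
    }
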
Dec 13 14:27:46.435511 env[1204]: time="2024-12-13T14:27:46.435423934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:46.435511 env[1204]: time="2024-12-13T14:27:46.435463569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:46.435511 env[1204]: time="2024-12-13T14:27:46.435472996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:46.436110 env[1204]: time="2024-12-13T14:27:46.436067743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1df8c2b351a2d5ca4004a545b35a8d83a8480a197a1852e4aed21b4ec1a10911 pid=3294 runtime=io.containerd.runc.v2 Dec 13 14:27:46.440693 env[1204]: time="2024-12-13T14:27:46.439250048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:27:46.440693 env[1204]: time="2024-12-13T14:27:46.439284724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:27:46.440693 env[1204]: time="2024-12-13T14:27:46.439296556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:27:46.440693 env[1204]: time="2024-12-13T14:27:46.439508303Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cac9290ca8d9e54b9208539798323bdec9d625bb293400c40ce90ea161d7b141 pid=3312 runtime=io.containerd.runc.v2 Dec 13 14:27:46.453278 systemd[1]: run-containerd-runc-k8s.io-cac9290ca8d9e54b9208539798323bdec9d625bb293400c40ce90ea161d7b141-runc.XlZECb.mount: Deactivated successfully. Dec 13 14:27:46.455810 systemd[1]: Started cri-containerd-cac9290ca8d9e54b9208539798323bdec9d625bb293400c40ce90ea161d7b141.scope. Dec 13 14:27:46.462412 systemd[1]: Started cri-containerd-1df8c2b351a2d5ca4004a545b35a8d83a8480a197a1852e4aed21b4ec1a10911.scope. 
Dec 13 14:27:46.470564 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:27:46.476388 systemd-resolved[1142]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:27:46.499283 env[1204]: time="2024-12-13T14:27:46.499231462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-829f8,Uid:09bd7c57-6474-4362-b82c-39263c5ace80,Namespace:kube-system,Attempt:0,} returns sandbox id \"cac9290ca8d9e54b9208539798323bdec9d625bb293400c40ce90ea161d7b141\"" Dec 13 14:27:46.500498 kubelet[2019]: E1213 14:27:46.500011 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:46.503393 env[1204]: time="2024-12-13T14:27:46.502976104Z" level=info msg="CreateContainer within sandbox \"cac9290ca8d9e54b9208539798323bdec9d625bb293400c40ce90ea161d7b141\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:27:46.503763 env[1204]: time="2024-12-13T14:27:46.503680737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qrg4c,Uid:88c6f4d5-8741-406c-b355-5f715984c3f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1df8c2b351a2d5ca4004a545b35a8d83a8480a197a1852e4aed21b4ec1a10911\"" Dec 13 14:27:46.504287 kubelet[2019]: E1213 14:27:46.504262 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:46.506320 env[1204]: time="2024-12-13T14:27:46.506286630Z" level=info msg="CreateContainer within sandbox \"1df8c2b351a2d5ca4004a545b35a8d83a8480a197a1852e4aed21b4ec1a10911\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:27:46.521962 env[1204]: time="2024-12-13T14:27:46.521920505Z" level=info msg="CreateContainer within sandbox \"cac9290ca8d9e54b9208539798323bdec9d625bb293400c40ce90ea161d7b141\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bc9fe779f5c0d25b7338f4f8767c0613dc3253e634a89515a0ef5efa77eaebac\"" Dec 13 14:27:46.522823 env[1204]: time="2024-12-13T14:27:46.522782914Z" level=info msg="StartContainer for \"bc9fe779f5c0d25b7338f4f8767c0613dc3253e634a89515a0ef5efa77eaebac\"" Dec 13 14:27:46.538421 env[1204]: time="2024-12-13T14:27:46.537855385Z" level=info msg="CreateContainer within sandbox \"1df8c2b351a2d5ca4004a545b35a8d83a8480a197a1852e4aed21b4ec1a10911\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1a79ad2328648c51a31d7732a78efd87d255d7a3e6de9046a293e2b7d3ee3f8a\"" Dec 13 14:27:46.538657 env[1204]: time="2024-12-13T14:27:46.538618658Z" level=info msg="StartContainer for \"1a79ad2328648c51a31d7732a78efd87d255d7a3e6de9046a293e2b7d3ee3f8a\"" Dec 13 14:27:46.539805 systemd[1]: Started cri-containerd-bc9fe779f5c0d25b7338f4f8767c0613dc3253e634a89515a0ef5efa77eaebac.scope. Dec 13 14:27:46.560384 systemd[1]: Started cri-containerd-1a79ad2328648c51a31d7732a78efd87d255d7a3e6de9046a293e2b7d3ee3f8a.scope. 
Dec 13 14:27:46.667064 env[1204]: time="2024-12-13T14:27:46.666959908Z" level=info msg="StartContainer for \"bc9fe779f5c0d25b7338f4f8767c0613dc3253e634a89515a0ef5efa77eaebac\" returns successfully" Dec 13 14:27:46.716349 env[1204]: time="2024-12-13T14:27:46.716135724Z" level=info msg="StartContainer for \"1a79ad2328648c51a31d7732a78efd87d255d7a3e6de9046a293e2b7d3ee3f8a\" returns successfully" Dec 13 14:27:46.765678 kubelet[2019]: I1213 14:27:46.765636 2019 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:27:46.766559 kubelet[2019]: E1213 14:27:46.766395 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:47.034898 kubelet[2019]: E1213 14:27:47.034779 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:47.036498 kubelet[2019]: E1213 14:27:47.036454 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:47.036703 kubelet[2019]: E1213 14:27:47.036674 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:47.115496 kubelet[2019]: I1213 14:27:47.115423 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qrg4c" podStartSLOduration=29.11540654 podStartE2EDuration="29.11540654s" podCreationTimestamp="2024-12-13 14:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:47.096422797 +0000 UTC m=+44.253282863" watchObservedRunningTime="2024-12-13 14:27:47.11540654 +0000 UTC m=+44.272266606" Dec 13 14:27:48.038709 kubelet[2019]: E1213 14:27:48.038650 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:48.108492 kubelet[2019]: I1213 14:27:48.108431 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-829f8" podStartSLOduration=30.108409127 podStartE2EDuration="30.108409127s" podCreationTimestamp="2024-12-13 14:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:27:47.117136528 +0000 UTC m=+44.273996624" watchObservedRunningTime="2024-12-13 14:27:48.108409127 +0000 UTC m=+45.265269223" Dec 13 14:27:49.040797 kubelet[2019]: E1213 14:27:49.040753 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:49.506292 kubelet[2019]: E1213 14:27:49.506245 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:27:50.042726 kubelet[2019]: E1213 14:27:50.042674 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
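
Note: in the CoreDNS startup-latency entries above, the pull timestamps read "0001-01-01 00:00:00 +0000 UTC". That is Go's zero time.Time, meaning no image pull ever started, so podStartSLOduration and podStartE2EDuration come out identical (~29.1s here). A two-line demonstration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The zero value of time.Time prints exactly the sentinel seen
        // in the log, and IsZero is how such fields are usually tested.
        var never time.Time
        fmt.Println(never)          // 0001-01-01 00:00:00 +0000 UTC
        fmt.Println(never.IsZero()) // true
    }
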
Dec 13 14:27:50.365956 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:38096.service. Dec 13 14:27:50.406546 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 38096 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:27:50.407833 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:50.411105 systemd-logind[1191]: New session 9 of user core. Dec 13 14:27:50.411912 systemd[1]: Started session-9.scope. Dec 13 14:27:50.521588 sshd[3461]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:50.524110 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:38096.service: Deactivated successfully. Dec 13 14:27:50.524980 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:27:50.525499 systemd-logind[1191]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:27:50.526115 systemd-logind[1191]: Removed session 9. Dec 13 14:27:55.526055 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:38110.service. Dec 13 14:27:55.565896 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 38110 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:27:55.567257 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:55.571042 systemd-logind[1191]: New session 10 of user core. Dec 13 14:27:55.572039 systemd[1]: Started session-10.scope. Dec 13 14:27:55.700857 sshd[3475]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:55.703843 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:38110.service: Deactivated successfully. Dec 13 14:27:55.704364 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:27:55.704995 systemd-logind[1191]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:27:55.705977 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:38120.service. Dec 13 14:27:55.706643 systemd-logind[1191]: Removed session 10. Dec 13 14:27:55.746669 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 38120 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:27:55.747950 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:55.751393 systemd-logind[1191]: New session 11 of user core. Dec 13 14:27:55.752451 systemd[1]: Started session-11.scope. Dec 13 14:27:55.961048 sshd[3489]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:55.965889 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:38124.service. Dec 13 14:27:55.966646 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:38120.service: Deactivated successfully. Dec 13 14:27:55.968458 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:27:55.970726 systemd-logind[1191]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:27:55.976864 systemd-logind[1191]: Removed session 11. Dec 13 14:27:56.017766 sshd[3500]: Accepted publickey for core from 10.0.0.1 port 38124 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:27:56.019629 sshd[3500]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:27:56.024764 systemd-logind[1191]: New session 12 of user core. Dec 13 14:27:56.025820 systemd[1]: Started session-12.scope. Dec 13 14:27:56.145188 sshd[3500]: pam_unix(sshd:session): session closed for user core Dec 13 14:27:56.147310 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:38124.service: Deactivated successfully. Dec 13 14:27:56.148007 systemd[1]: session-12.scope: Deactivated successfully. 
Dec 13 14:27:56.148442 systemd-logind[1191]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:27:56.149090 systemd-logind[1191]: Removed session 12. Dec 13 14:28:01.150021 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:58294.service. Dec 13 14:28:01.188639 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 58294 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:01.190029 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:01.194019 systemd-logind[1191]: New session 13 of user core. Dec 13 14:28:01.194829 systemd[1]: Started session-13.scope. Dec 13 14:28:01.300413 sshd[3514]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:01.303552 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:58294.service: Deactivated successfully. Dec 13 14:28:01.304299 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:28:01.305127 systemd-logind[1191]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:28:01.305940 systemd-logind[1191]: Removed session 13. Dec 13 14:28:06.305377 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:58306.service. Dec 13 14:28:06.342694 sshd[3529]: Accepted publickey for core from 10.0.0.1 port 58306 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:06.343662 sshd[3529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:06.347210 systemd-logind[1191]: New session 14 of user core. Dec 13 14:28:06.348033 systemd[1]: Started session-14.scope. Dec 13 14:28:06.465269 sshd[3529]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:06.468266 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:58306.service: Deactivated successfully. Dec 13 14:28:06.469013 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:28:06.469635 systemd-logind[1191]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:28:06.470780 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:58316.service. Dec 13 14:28:06.472700 systemd-logind[1191]: Removed session 14. Dec 13 14:28:06.508353 sshd[3542]: Accepted publickey for core from 10.0.0.1 port 58316 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:06.509718 sshd[3542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:06.513088 systemd-logind[1191]: New session 15 of user core. Dec 13 14:28:06.513927 systemd[1]: Started session-15.scope. Dec 13 14:28:06.899447 sshd[3542]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:06.902652 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:58316.service: Deactivated successfully. Dec 13 14:28:06.903228 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:28:06.903937 systemd-logind[1191]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:28:06.905460 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:58330.service. Dec 13 14:28:06.906292 systemd-logind[1191]: Removed session 15. Dec 13 14:28:06.945487 sshd[3553]: Accepted publickey for core from 10.0.0.1 port 58330 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:06.946780 sshd[3553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:06.950033 systemd-logind[1191]: New session 16 of user core. Dec 13 14:28:06.950845 systemd[1]: Started session-16.scope. 
Dec 13 14:28:08.272172 sshd[3553]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:08.276184 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:39962.service. Dec 13 14:28:08.278093 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:58330.service: Deactivated successfully. Dec 13 14:28:08.279027 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:28:08.280319 systemd-logind[1191]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:28:08.281144 systemd-logind[1191]: Removed session 16. Dec 13 14:28:08.320364 sshd[3573]: Accepted publickey for core from 10.0.0.1 port 39962 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:08.321580 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:08.324914 systemd-logind[1191]: New session 17 of user core. Dec 13 14:28:08.325652 systemd[1]: Started session-17.scope. Dec 13 14:28:08.541619 sshd[3573]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:08.544776 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:39962.service: Deactivated successfully. Dec 13 14:28:08.545521 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:28:08.546828 systemd-logind[1191]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:28:08.548642 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:39972.service. Dec 13 14:28:08.550438 systemd-logind[1191]: Removed session 17. Dec 13 14:28:08.585548 sshd[3586]: Accepted publickey for core from 10.0.0.1 port 39972 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:08.586869 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:08.590390 systemd-logind[1191]: New session 18 of user core. Dec 13 14:28:08.591081 systemd[1]: Started session-18.scope. Dec 13 14:28:08.697486 sshd[3586]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:08.700092 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:39972.service: Deactivated successfully. Dec 13 14:28:08.700744 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:28:08.701608 systemd-logind[1191]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:28:08.702355 systemd-logind[1191]: Removed session 18. Dec 13 14:28:12.926860 kubelet[2019]: E1213 14:28:12.926805 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:28:13.702602 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:39974.service. Dec 13 14:28:13.742330 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 39974 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:13.743585 sshd[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:13.747497 systemd-logind[1191]: New session 19 of user core. Dec 13 14:28:13.748651 systemd[1]: Started session-19.scope. Dec 13 14:28:13.845883 sshd[3599]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:13.848260 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:39974.service: Deactivated successfully. Dec 13 14:28:13.849005 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:28:13.849584 systemd-logind[1191]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:28:13.850299 systemd-logind[1191]: Removed session 19. Dec 13 14:28:18.851368 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:38130.service. 
Dec 13 14:28:18.889313 sshd[3617]: Accepted publickey for core from 10.0.0.1 port 38130 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:18.890605 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:18.895049 systemd-logind[1191]: New session 20 of user core. Dec 13 14:28:18.895836 systemd[1]: Started session-20.scope. Dec 13 14:28:19.006072 sshd[3617]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:19.008811 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:38130.service: Deactivated successfully. Dec 13 14:28:19.009515 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:28:19.010303 systemd-logind[1191]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:28:19.011051 systemd-logind[1191]: Removed session 20. Dec 13 14:28:24.011973 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:38138.service. Dec 13 14:28:24.061474 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 38138 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:24.062736 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:24.066133 systemd-logind[1191]: New session 21 of user core. Dec 13 14:28:24.067095 systemd[1]: Started session-21.scope. Dec 13 14:28:24.177316 sshd[3631]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:24.180028 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:38138.service: Deactivated successfully. Dec 13 14:28:24.180756 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:28:24.181548 systemd-logind[1191]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:28:24.182202 systemd-logind[1191]: Removed session 21. Dec 13 14:28:27.927638 kubelet[2019]: E1213 14:28:27.927507 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:28:29.182209 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:53750.service. Dec 13 14:28:29.220404 sshd[3645]: Accepted publickey for core from 10.0.0.1 port 53750 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:29.221704 sshd[3645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:29.225515 systemd-logind[1191]: New session 22 of user core. Dec 13 14:28:29.226412 systemd[1]: Started session-22.scope. Dec 13 14:28:29.340843 sshd[3645]: pam_unix(sshd:session): session closed for user core Dec 13 14:28:29.344351 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:53750.service: Deactivated successfully. Dec 13 14:28:29.344900 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:28:29.346355 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:53760.service. Dec 13 14:28:29.347245 systemd-logind[1191]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:28:29.348205 systemd-logind[1191]: Removed session 22. Dec 13 14:28:29.384157 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 53760 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8 Dec 13 14:28:29.385408 sshd[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:28:29.388969 systemd-logind[1191]: New session 23 of user core. Dec 13 14:28:29.389723 systemd[1]: Started session-23.scope. 
Dec 13 14:28:31.080617 env[1204]: time="2024-12-13T14:28:31.080556425Z" level=info msg="StopContainer for \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\" with timeout 30 (s)" Dec 13 14:28:31.084071 env[1204]: time="2024-12-13T14:28:31.084015256Z" level=info msg="Stop container \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\" with signal terminated" Dec 13 14:28:31.097916 systemd[1]: cri-containerd-ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7.scope: Deactivated successfully. Dec 13 14:28:31.108598 env[1204]: time="2024-12-13T14:28:31.108504348Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:28:31.116139 env[1204]: time="2024-12-13T14:28:31.116086192Z" level=info msg="StopContainer for \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\" with timeout 2 (s)" Dec 13 14:28:31.116618 env[1204]: time="2024-12-13T14:28:31.116565483Z" level=info msg="Stop container \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\" with signal terminated" Dec 13 14:28:31.120744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7-rootfs.mount: Deactivated successfully. Dec 13 14:28:31.125313 systemd-networkd[1028]: lxc_health: Link DOWN Dec 13 14:28:31.125324 systemd-networkd[1028]: lxc_health: Lost carrier Dec 13 14:28:31.140543 env[1204]: time="2024-12-13T14:28:31.140475916Z" level=info msg="shim disconnected" id=ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7 Dec 13 14:28:31.140543 env[1204]: time="2024-12-13T14:28:31.140542834Z" level=warning msg="cleaning up after shim disconnected" id=ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7 namespace=k8s.io Dec 13 14:28:31.140849 env[1204]: time="2024-12-13T14:28:31.140555177Z" level=info msg="cleaning up dead shim" Dec 13 14:28:31.149148 env[1204]: time="2024-12-13T14:28:31.149070123Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n" Dec 13 14:28:31.155450 env[1204]: time="2024-12-13T14:28:31.155354903Z" level=info msg="StopContainer for \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\" returns successfully" Dec 13 14:28:31.156468 env[1204]: time="2024-12-13T14:28:31.156424875Z" level=info msg="StopPodSandbox for \"11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87\"" Dec 13 14:28:31.156571 env[1204]: time="2024-12-13T14:28:31.156494176Z" level=info msg="Container to stop \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:28:31.159202 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87-shm.mount: Deactivated successfully. Dec 13 14:28:31.166059 systemd[1]: cri-containerd-62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848.scope: Deactivated successfully. Dec 13 14:28:31.166422 systemd[1]: cri-containerd-62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848.scope: Consumed 6.522s CPU time. 
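
Note: the teardown above shows CRI stop semantics: StopContainer delivers SIGTERM and waits for the per-container grace period (30 s for the operator, 2 s for the agent) before escalating to SIGKILL, and the "failed to reload cni configuration" error is containerd reacting to Cilium's CNI config file being removed mid-teardown, not a separate fault. A minimal Go sketch of the term-then-kill pattern (illustrative, not containerd's code):

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopWithTimeout sends SIGTERM, waits up to the given timeout for
    // the process to exit, then falls back to SIGKILL.
    func stopWithTimeout(cmd *exec.Cmd, timeout time.Duration) {
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        cmd.Process.Signal(syscall.SIGTERM)
        select {
        case <-done:
            fmt.Println("exited after SIGTERM")
        case <-time.After(timeout):
            cmd.Process.Kill()
            <-done
            fmt.Println("killed after timeout")
        }
    }

    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        stopWithTimeout(cmd, 2*time.Second)
    }
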
Dec 13 14:28:31.169667 systemd[1]: cri-containerd-11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87.scope: Deactivated successfully.
Dec 13 14:28:31.190027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848-rootfs.mount: Deactivated successfully.
Dec 13 14:28:31.202416 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87-rootfs.mount: Deactivated successfully.
Dec 13 14:28:31.204392 env[1204]: time="2024-12-13T14:28:31.204337887Z" level=info msg="shim disconnected" id=62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848
Dec 13 14:28:31.204525 env[1204]: time="2024-12-13T14:28:31.204405365Z" level=warning msg="cleaning up after shim disconnected" id=62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848 namespace=k8s.io
Dec 13 14:28:31.204525 env[1204]: time="2024-12-13T14:28:31.204420374Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:31.204525 env[1204]: time="2024-12-13T14:28:31.204337857Z" level=info msg="shim disconnected" id=11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87
Dec 13 14:28:31.204525 env[1204]: time="2024-12-13T14:28:31.204498051Z" level=warning msg="cleaning up after shim disconnected" id=11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87 namespace=k8s.io
Dec 13 14:28:31.204641 env[1204]: time="2024-12-13T14:28:31.204524581Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:31.215328 env[1204]: time="2024-12-13T14:28:31.215270807Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3755 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:31.216143 env[1204]: time="2024-12-13T14:28:31.216103348Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3754 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:31.216351 env[1204]: time="2024-12-13T14:28:31.216289192Z" level=info msg="TearDown network for sandbox \"11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87\" successfully"
Dec 13 14:28:31.216351 env[1204]: time="2024-12-13T14:28:31.216326853Z" level=info msg="StopPodSandbox for \"11ff8c046aeecd52ca123bbaf1c9e1de04a1c6f09285ef7e72551197251a3e87\" returns successfully"
Dec 13 14:28:31.221409 env[1204]: time="2024-12-13T14:28:31.221366267Z" level=info msg="StopContainer for \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\" returns successfully"
Dec 13 14:28:31.222116 env[1204]: time="2024-12-13T14:28:31.222067529Z" level=info msg="StopPodSandbox for \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\""
Dec 13 14:28:31.222198 env[1204]: time="2024-12-13T14:28:31.222131710Z" level=info msg="Container to stop \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.222198 env[1204]: time="2024-12-13T14:28:31.222154995Z" level=info msg="Container to stop \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.222198 env[1204]: time="2024-12-13T14:28:31.222170554Z" level=info msg="Container to stop \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.222198 env[1204]: time="2024-12-13T14:28:31.222184671Z" level=info msg="Container to stop \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.222198 env[1204]: time="2024-12-13T14:28:31.222200321Z" level=info msg="Container to stop \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:28:31.230699 systemd[1]: cri-containerd-a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1.scope: Deactivated successfully.
Dec 13 14:28:31.257611 env[1204]: time="2024-12-13T14:28:31.257538093Z" level=info msg="shim disconnected" id=a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1
Dec 13 14:28:31.257611 env[1204]: time="2024-12-13T14:28:31.257603237Z" level=warning msg="cleaning up after shim disconnected" id=a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1 namespace=k8s.io
Dec 13 14:28:31.257611 env[1204]: time="2024-12-13T14:28:31.257617494Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:31.266456 env[1204]: time="2024-12-13T14:28:31.266360073Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3797 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:31.267088 env[1204]: time="2024-12-13T14:28:31.267051176Z" level=info msg="TearDown network for sandbox \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" successfully"
Dec 13 14:28:31.267088 env[1204]: time="2024-12-13T14:28:31.267078227Z" level=info msg="StopPodSandbox for \"a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1\" returns successfully"
Dec 13 14:28:31.304245 kubelet[2019]: I1213 14:28:31.304158 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk7m5\" (UniqueName: \"kubernetes.io/projected/411bb671-8fda-447c-b7bf-4ce1cad51aad-kube-api-access-hk7m5\") pod \"411bb671-8fda-447c-b7bf-4ce1cad51aad\" (UID: \"411bb671-8fda-447c-b7bf-4ce1cad51aad\") "
Dec 13 14:28:31.304753 kubelet[2019]: I1213 14:28:31.304281 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/411bb671-8fda-447c-b7bf-4ce1cad51aad-cilium-config-path\") pod \"411bb671-8fda-447c-b7bf-4ce1cad51aad\" (UID: \"411bb671-8fda-447c-b7bf-4ce1cad51aad\") "
Dec 13 14:28:31.306521 kubelet[2019]: I1213 14:28:31.306484 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411bb671-8fda-447c-b7bf-4ce1cad51aad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "411bb671-8fda-447c-b7bf-4ce1cad51aad" (UID: "411bb671-8fda-447c-b7bf-4ce1cad51aad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:28:31.307730 kubelet[2019]: I1213 14:28:31.307625 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411bb671-8fda-447c-b7bf-4ce1cad51aad-kube-api-access-hk7m5" (OuterVolumeSpecName: "kube-api-access-hk7m5") pod "411bb671-8fda-447c-b7bf-4ce1cad51aad" (UID: "411bb671-8fda-447c-b7bf-4ce1cad51aad"). InnerVolumeSpecName "kube-api-access-hk7m5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:28:31.405583 kubelet[2019]: I1213 14:28:31.405502 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjs9g\" (UniqueName: \"kubernetes.io/projected/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-kube-api-access-wjs9g\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.405583 kubelet[2019]: I1213 14:28:31.405564 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-bpf-maps\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.405583 kubelet[2019]: I1213 14:28:31.405589 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-lib-modules\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.405878 kubelet[2019]: I1213 14:28:31.405623 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-clustermesh-secrets\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.405878 kubelet[2019]: I1213 14:28:31.405642 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-run\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.405878 kubelet[2019]: I1213 14:28:31.405674 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-host-proc-sys-kernel\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.405878 kubelet[2019]: I1213 14:28:31.405691 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cni-path\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.405878 kubelet[2019]: I1213 14:28:31.405709 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-hostproc\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.405878 kubelet[2019]: I1213 14:28:31.405729 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-host-proc-sys-net\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.406025 kubelet[2019]: I1213 14:28:31.405748 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-xtables-lock\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.406025 kubelet[2019]: I1213 14:28:31.405763 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-etc-cni-netd\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.406025 kubelet[2019]: I1213 14:28:31.405783 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-hubble-tls\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.406025 kubelet[2019]: I1213 14:28:31.405799 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-cgroup\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.406025 kubelet[2019]: I1213 14:28:31.405832 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-config-path\") pod \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\" (UID: \"587a2f3f-83f7-4a9d-980a-aebaa9c8af99\") "
Dec 13 14:28:31.406025 kubelet[2019]: I1213 14:28:31.405872 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/411bb671-8fda-447c-b7bf-4ce1cad51aad-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.406304 kubelet[2019]: I1213 14:28:31.405887 2019 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hk7m5\" (UniqueName: \"kubernetes.io/projected/411bb671-8fda-447c-b7bf-4ce1cad51aad-kube-api-access-hk7m5\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.406304 kubelet[2019]: I1213 14:28:31.405703 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406304 kubelet[2019]: I1213 14:28:31.405723 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406304 kubelet[2019]: I1213 14:28:31.406146 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406304 kubelet[2019]: I1213 14:28:31.406174 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406467 kubelet[2019]: I1213 14:28:31.406188 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406467 kubelet[2019]: I1213 14:28:31.406204 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cni-path" (OuterVolumeSpecName: "cni-path") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406467 kubelet[2019]: I1213 14:28:31.406247 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-hostproc" (OuterVolumeSpecName: "hostproc") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406467 kubelet[2019]: I1213 14:28:31.406323 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406467 kubelet[2019]: I1213 14:28:31.406342 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.406592 kubelet[2019]: I1213 14:28:31.406496 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:31.408477 kubelet[2019]: I1213 14:28:31.408440 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:28:31.408910 kubelet[2019]: I1213 14:28:31.408853 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-kube-api-access-wjs9g" (OuterVolumeSpecName: "kube-api-access-wjs9g") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "kube-api-access-wjs9g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:28:31.409579 kubelet[2019]: I1213 14:28:31.409551 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:28:31.409664 kubelet[2019]: I1213 14:28:31.409593 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "587a2f3f-83f7-4a9d-980a-aebaa9c8af99" (UID: "587a2f3f-83f7-4a9d-980a-aebaa9c8af99"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:28:31.507037 kubelet[2019]: I1213 14:28:31.506979 2019 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507037 kubelet[2019]: I1213 14:28:31.507029 2019 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507037 kubelet[2019]: I1213 14:28:31.507060 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507313 kubelet[2019]: I1213 14:28:31.507073 2019 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507313 kubelet[2019]: I1213 14:28:31.507084 2019 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507313 kubelet[2019]: I1213 14:28:31.507096 2019 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507313 kubelet[2019]: I1213 14:28:31.507106 2019 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507313 kubelet[2019]: I1213 14:28:31.507116 2019 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507313 kubelet[2019]: I1213 14:28:31.507126 2019 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507313 kubelet[2019]: I1213 14:28:31.507136 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507313 kubelet[2019]: I1213 14:28:31.507146 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507575 kubelet[2019]: I1213 14:28:31.507155 2019 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507575 kubelet[2019]: I1213 14:28:31.507165 2019 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:31.507575 kubelet[2019]: I1213 14:28:31.507175 2019 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wjs9g\" (UniqueName: \"kubernetes.io/projected/587a2f3f-83f7-4a9d-980a-aebaa9c8af99-kube-api-access-wjs9g\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:32.082875 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1-rootfs.mount: Deactivated successfully.
Dec 13 14:28:32.082977 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a93d552cc13bc34b8db382dc0f641150edf43ed2bd0e8b2918a99caebbb66eb1-shm.mount: Deactivated successfully.
Dec 13 14:28:32.083031 systemd[1]: var-lib-kubelet-pods-411bb671\x2d8fda\x2d447c\x2db7bf\x2d4ce1cad51aad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhk7m5.mount: Deactivated successfully.
Dec 13 14:28:32.083099 systemd[1]: var-lib-kubelet-pods-587a2f3f\x2d83f7\x2d4a9d\x2d980a\x2daebaa9c8af99-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwjs9g.mount: Deactivated successfully.
Dec 13 14:28:32.083151 systemd[1]: var-lib-kubelet-pods-587a2f3f\x2d83f7\x2d4a9d\x2d980a\x2daebaa9c8af99-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:28:32.083207 systemd[1]: var-lib-kubelet-pods-587a2f3f\x2d83f7\x2d4a9d\x2d980a\x2daebaa9c8af99-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:28:32.119014 kubelet[2019]: I1213 14:28:32.118969 2019 scope.go:117] "RemoveContainer" containerID="62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848"
Dec 13 14:28:32.120143 env[1204]: time="2024-12-13T14:28:32.120090655Z" level=info msg="RemoveContainer for \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\""
Dec 13 14:28:32.124618 systemd[1]: Removed slice kubepods-besteffort-pod411bb671_8fda_447c_b7bf_4ce1cad51aad.slice.
Dec 13 14:28:32.126383 systemd[1]: Removed slice kubepods-burstable-pod587a2f3f_83f7_4a9d_980a_aebaa9c8af99.slice.
Dec 13 14:28:32.126462 systemd[1]: kubepods-burstable-pod587a2f3f_83f7_4a9d_980a_aebaa9c8af99.slice: Consumed 6.630s CPU time.
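The mount unit names above, such as var-lib-kubelet-pods-411bb671\x2d8fda\x2d...-kube\x2dapi\x2daccess\x2dhk7m5.mount, are systemd's escaped form of the underlying volume paths: "/" maps to "-" and bytes outside the safe set are hex-escaped, which is why the literal "-" in a pod UID becomes \x2d and the "~" in kubernetes.io~projected becomes \x7e. A simplified Go sketch of that escaping (compare systemd-escape(1); leading-dot and empty-path edge cases are ignored here):

    package main

    import (
        "fmt"
        "strings"
    )

    // systemdEscapePath approximates `systemd-escape --path`: strip the
    // surrounding slashes, map "/" to "-", and hex-escape anything outside
    // [a-zA-Z0-9:_.].
    func systemdEscapePath(path string) string {
        trimmed := strings.Trim(path, "/")
        var b strings.Builder
        for _, c := range []byte(trimmed) {
            switch {
            case c == '/':
                b.WriteByte('-')
            case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
                c >= '0' && c <= '9', c == ':', c == '_', c == '.':
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(systemdEscapePath("/var/lib/kubelet/pods/411bb671-8fda") + ".mount")
        // prints: var-lib-kubelet-pods-411bb671\x2d8fda.mount
    }
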
Dec 13 14:28:32.179981 env[1204]: time="2024-12-13T14:28:32.179908868Z" level=info msg="RemoveContainer for \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\" returns successfully"
Dec 13 14:28:32.180283 kubelet[2019]: I1213 14:28:32.180249 2019 scope.go:117] "RemoveContainer" containerID="847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a"
Dec 13 14:28:32.181292 env[1204]: time="2024-12-13T14:28:32.181241019Z" level=info msg="RemoveContainer for \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\""
Dec 13 14:28:32.266088 env[1204]: time="2024-12-13T14:28:32.266026248Z" level=info msg="RemoveContainer for \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\" returns successfully"
Dec 13 14:28:32.266304 kubelet[2019]: I1213 14:28:32.266276 2019 scope.go:117] "RemoveContainer" containerID="c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09"
Dec 13 14:28:32.267611 env[1204]: time="2024-12-13T14:28:32.267572054Z" level=info msg="RemoveContainer for \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\""
Dec 13 14:28:32.356832 env[1204]: time="2024-12-13T14:28:32.356739276Z" level=info msg="RemoveContainer for \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\" returns successfully"
Dec 13 14:28:32.357116 kubelet[2019]: I1213 14:28:32.357073 2019 scope.go:117] "RemoveContainer" containerID="38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d"
Dec 13 14:28:32.358449 env[1204]: time="2024-12-13T14:28:32.358401122Z" level=info msg="RemoveContainer for \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\""
Dec 13 14:28:32.400730 env[1204]: time="2024-12-13T14:28:32.400665880Z" level=info msg="RemoveContainer for \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\" returns successfully"
Dec 13 14:28:32.400999 kubelet[2019]: I1213 14:28:32.400943 2019 scope.go:117] "RemoveContainer" containerID="7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d"
Dec 13 14:28:32.401978 env[1204]: time="2024-12-13T14:28:32.401951090Z" level=info msg="RemoveContainer for \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\""
Dec 13 14:28:32.407748 env[1204]: time="2024-12-13T14:28:32.407433863Z" level=info msg="RemoveContainer for \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\" returns successfully"
Dec 13 14:28:32.408042 kubelet[2019]: I1213 14:28:32.408014 2019 scope.go:117] "RemoveContainer" containerID="62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848"
Dec 13 14:28:32.408377 env[1204]: time="2024-12-13T14:28:32.408291632Z" level=error msg="ContainerStatus for \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\": not found"
Dec 13 14:28:32.408495 kubelet[2019]: E1213 14:28:32.408468 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\": not found" containerID="62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848"
Dec 13 14:28:32.408583 kubelet[2019]: I1213 14:28:32.408499 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848"} err="failed to get container status \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\": rpc error: code = NotFound desc = an error occurred when try to find container \"62ad92fa3486d96535ab171b98725be0240a21ec3e004e97b7887791129dc848\": not found"
Dec 13 14:28:32.408637 kubelet[2019]: I1213 14:28:32.408585 2019 scope.go:117] "RemoveContainer" containerID="847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a"
Dec 13 14:28:32.408775 env[1204]: time="2024-12-13T14:28:32.408733130Z" level=error msg="ContainerStatus for \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\": not found"
Dec 13 14:28:32.408909 kubelet[2019]: E1213 14:28:32.408892 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\": not found" containerID="847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a"
Dec 13 14:28:32.408956 kubelet[2019]: I1213 14:28:32.408909 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a"} err="failed to get container status \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\": rpc error: code = NotFound desc = an error occurred when try to find container \"847646cf0f8fa9288ed93ae799bc1b92767a6d0e7db3f59eebd3d8119879689a\": not found"
Dec 13 14:28:32.408956 kubelet[2019]: I1213 14:28:32.408920 2019 scope.go:117] "RemoveContainer" containerID="c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09"
Dec 13 14:28:32.409124 env[1204]: time="2024-12-13T14:28:32.409071713Z" level=error msg="ContainerStatus for \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\": not found"
Dec 13 14:28:32.409258 kubelet[2019]: E1213 14:28:32.409200 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\": not found" containerID="c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09"
Dec 13 14:28:32.409258 kubelet[2019]: I1213 14:28:32.409242 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09"} err="failed to get container status \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\": rpc error: code = NotFound desc = an error occurred when try to find container \"c13e2975fd93306b9f7c3cc38d2cc9e46fa3db4e4c7ec02570a5d51fef5c6e09\": not found"
Dec 13 14:28:32.409348 kubelet[2019]: I1213 14:28:32.409262 2019 scope.go:117] "RemoveContainer" containerID="38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d"
Dec 13 14:28:32.409456 env[1204]: time="2024-12-13T14:28:32.409404505Z" level=error msg="ContainerStatus for \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\": not found"
Dec 13 14:28:32.409560 kubelet[2019]: E1213 14:28:32.409538 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\": not found" containerID="38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d"
Dec 13 14:28:32.409609 kubelet[2019]: I1213 14:28:32.409565 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d"} err="failed to get container status \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\": rpc error: code = NotFound desc = an error occurred when try to find container \"38b6bda66113bc4a6e98b47716533747c6854d9750a0d06af14a8ad20fbf195d\": not found"
Dec 13 14:28:32.409609 kubelet[2019]: I1213 14:28:32.409581 2019 scope.go:117] "RemoveContainer" containerID="7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d"
Dec 13 14:28:32.409789 env[1204]: time="2024-12-13T14:28:32.409739021Z" level=error msg="ContainerStatus for \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\": not found"
Dec 13 14:28:32.409914 kubelet[2019]: E1213 14:28:32.409893 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\": not found" containerID="7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d"
Dec 13 14:28:32.409973 kubelet[2019]: I1213 14:28:32.409928 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d"} err="failed to get container status \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"7632b9b6db3641a119dbf367d3a9028982e275071ea1785e38931a0119b46b9d\": not found"
Dec 13 14:28:32.409973 kubelet[2019]: I1213 14:28:32.409943 2019 scope.go:117] "RemoveContainer" containerID="ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7"
Dec 13 14:28:32.410733 env[1204]: time="2024-12-13T14:28:32.410714513Z" level=info msg="RemoveContainer for \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\""
Dec 13 14:28:32.414090 env[1204]: time="2024-12-13T14:28:32.414058184Z" level=info msg="RemoveContainer for \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\" returns successfully"
Dec 13 14:28:32.414256 kubelet[2019]: I1213 14:28:32.414207 2019 scope.go:117] "RemoveContainer" containerID="ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7"
Dec 13 14:28:32.414473 env[1204]: time="2024-12-13T14:28:32.414426253Z" level=error msg="ContainerStatus for \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\": not found"
Dec 13 14:28:32.414560 kubelet[2019]: E1213 14:28:32.414541 2019 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\": not found" containerID="ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7"
Dec 13 14:28:32.414614 kubelet[2019]: I1213 14:28:32.414560 2019 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7"} err="failed to get container status \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\": rpc error: code = NotFound desc = an error occurred when try to find container \"ad503a81fa9960435b8ac5639baf31b1eba9349f954df0da39447251e1e74de7\": not found"
Dec 13 14:28:32.928185 kubelet[2019]: I1213 14:28:32.928133 2019 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411bb671-8fda-447c-b7bf-4ce1cad51aad" path="/var/lib/kubelet/pods/411bb671-8fda-447c-b7bf-4ce1cad51aad/volumes"
Dec 13 14:28:32.928612 kubelet[2019]: I1213 14:28:32.928585 2019 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="587a2f3f-83f7-4a9d-980a-aebaa9c8af99" path="/var/lib/kubelet/pods/587a2f3f-83f7-4a9d-980a-aebaa9c8af99/volumes"
Dec 13 14:28:32.967224 kubelet[2019]: E1213 14:28:32.967148 2019 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:28:32.989274 sshd[3658]: pam_unix(sshd:session): session closed for user core
Dec 13 14:28:32.994306 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:53760.service: Deactivated successfully.
Dec 13 14:28:32.995066 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:28:32.995758 systemd-logind[1191]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:28:32.997309 systemd[1]: Started sshd@23-10.0.0.100:22-10.0.0.1:53772.service.
Dec 13 14:28:32.998575 systemd-logind[1191]: Removed session 23.
Dec 13 14:28:33.042437 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 53772 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:28:33.043956 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:28:33.047828 systemd-logind[1191]: New session 24 of user core.
Dec 13 14:28:33.048625 systemd[1]: Started session-24.scope.
Dec 13 14:28:33.508505 sshd[3816]: pam_unix(sshd:session): session closed for user core
Dec 13 14:28:33.511055 systemd[1]: Started sshd@24-10.0.0.100:22-10.0.0.1:53788.service.
Dec 13 14:28:33.513442 systemd-logind[1191]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:28:33.515133 systemd[1]: sshd@23-10.0.0.100:22-10.0.0.1:53772.service: Deactivated successfully.
Dec 13 14:28:33.516080 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:28:33.517872 systemd-logind[1191]: Removed session 24.
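The burst of NotFound errors above is benign: once RemoveContainer succeeds, later ContainerStatus lookups for the same IDs can only fail, and the kubelet treats NotFound as "already deleted". A small Go sketch of that idempotent-delete pattern using gRPC status codes (the CRI client itself is elided; removeIfPresent is a hypothetical helper):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent treats NotFound from the runtime as success, so
    // container deletion stays idempotent across retries.
    func removeIfPresent(remove func(id string) error, id string) error {
        if err := remove(id); err != nil {
            if status.Code(err) == codes.NotFound {
                fmt.Printf("container %q already gone, nothing to do\n", id)
                return nil
            }
            return err
        }
        return nil
    }

    func main() {
        // Stand-in for a CRI RemoveContainer call that reports NotFound.
        fake := func(id string) error {
            return status.Errorf(codes.NotFound, "container %q not found", id)
        }
        fmt.Println(removeIfPresent(fake, "62ad92fa3486d965"))
    }
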
Dec 13 14:28:33.535964 kubelet[2019]: I1213 14:28:33.531510 2019 topology_manager.go:215] "Topology Admit Handler" podUID="5197ab06-0896-4f77-a5c0-f1419c889ac0" podNamespace="kube-system" podName="cilium-jvvtg"
Dec 13 14:28:33.535964 kubelet[2019]: E1213 14:28:33.531575 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="587a2f3f-83f7-4a9d-980a-aebaa9c8af99" containerName="cilium-agent"
Dec 13 14:28:33.535964 kubelet[2019]: E1213 14:28:33.531587 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="587a2f3f-83f7-4a9d-980a-aebaa9c8af99" containerName="mount-cgroup"
Dec 13 14:28:33.535964 kubelet[2019]: E1213 14:28:33.531595 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="587a2f3f-83f7-4a9d-980a-aebaa9c8af99" containerName="apply-sysctl-overwrites"
Dec 13 14:28:33.535964 kubelet[2019]: E1213 14:28:33.531602 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="587a2f3f-83f7-4a9d-980a-aebaa9c8af99" containerName="clean-cilium-state"
Dec 13 14:28:33.535964 kubelet[2019]: E1213 14:28:33.531609 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="587a2f3f-83f7-4a9d-980a-aebaa9c8af99" containerName="mount-bpf-fs"
Dec 13 14:28:33.535964 kubelet[2019]: E1213 14:28:33.531616 2019 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="411bb671-8fda-447c-b7bf-4ce1cad51aad" containerName="cilium-operator"
Dec 13 14:28:33.535964 kubelet[2019]: I1213 14:28:33.531649 2019 memory_manager.go:354] "RemoveStaleState removing state" podUID="411bb671-8fda-447c-b7bf-4ce1cad51aad" containerName="cilium-operator"
Dec 13 14:28:33.535964 kubelet[2019]: I1213 14:28:33.531661 2019 memory_manager.go:354] "RemoveStaleState removing state" podUID="587a2f3f-83f7-4a9d-980a-aebaa9c8af99" containerName="cilium-agent"
Dec 13 14:28:33.550006 systemd[1]: Created slice kubepods-burstable-pod5197ab06_0896_4f77_a5c0_f1419c889ac0.slice.
Dec 13 14:28:33.558833 sshd[3827]: Accepted publickey for core from 10.0.0.1 port 53788 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:28:33.560277 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:28:33.564532 systemd-logind[1191]: New session 25 of user core.
Dec 13 14:28:33.565680 systemd[1]: Started session-25.scope.
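The slice name above is derived mechanically from the pod's QoS class and UID: dashes in the UID become underscores (systemd reserves "-" for slice nesting), giving kubepods-burstable-pod5197ab06_0896_4f77_a5c0_f1419c889ac0.slice under kubepods-burstable.slice. A one-function Go illustration of that convention, inferred from the log rather than taken from kubelet source:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the naming visible in the log: UID
    // 5197ab06-0896-4f77-a5c0-f1419c889ac0 with QoS "burstable" becomes
    // kubepods-burstable-pod5197ab06_0896_4f77_a5c0_f1419c889ac0.slice.
    func podSliceName(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(podSliceName("burstable", "5197ab06-0896-4f77-a5c0-f1419c889ac0"))
    }
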
Dec 13 14:28:33.619628 kubelet[2019]: I1213 14:28:33.619580 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-config-path\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.619628 kubelet[2019]: I1213 14:28:33.619627 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cni-path\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.619806 kubelet[2019]: I1213 14:28:33.619646 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-etc-cni-netd\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.619806 kubelet[2019]: I1213 14:28:33.619659 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-lib-modules\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.619806 kubelet[2019]: I1213 14:28:33.619672 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-ipsec-secrets\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.619806 kubelet[2019]: I1213 14:28:33.619688 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-run\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.619806 kubelet[2019]: I1213 14:28:33.619702 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-cgroup\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.619806 kubelet[2019]: I1213 14:28:33.619717 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-host-proc-sys-kernel\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.620105 kubelet[2019]: I1213 14:28:33.619729 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh7p2\" (UniqueName: \"kubernetes.io/projected/5197ab06-0896-4f77-a5c0-f1419c889ac0-kube-api-access-wh7p2\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.620105 kubelet[2019]: I1213 14:28:33.619742 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-hostproc\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.620105 kubelet[2019]: I1213 14:28:33.619756 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-xtables-lock\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.620105 kubelet[2019]: I1213 14:28:33.619768 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5197ab06-0896-4f77-a5c0-f1419c889ac0-clustermesh-secrets\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.620105 kubelet[2019]: I1213 14:28:33.619781 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-bpf-maps\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.620105 kubelet[2019]: I1213 14:28:33.619793 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-host-proc-sys-net\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.620364 kubelet[2019]: I1213 14:28:33.619805 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5197ab06-0896-4f77-a5c0-f1419c889ac0-hubble-tls\") pod \"cilium-jvvtg\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") " pod="kube-system/cilium-jvvtg"
Dec 13 14:28:33.693039 sshd[3827]: pam_unix(sshd:session): session closed for user core
Dec 13 14:28:33.697739 systemd[1]: Started sshd@25-10.0.0.100:22-10.0.0.1:53794.service.
Dec 13 14:28:33.698535 systemd[1]: sshd@24-10.0.0.100:22-10.0.0.1:53788.service: Deactivated successfully.
Dec 13 14:28:33.699503 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 14:28:33.700202 systemd-logind[1191]: Session 25 logged out. Waiting for processes to exit.
Dec 13 14:28:33.701563 systemd-logind[1191]: Removed session 25.
Dec 13 14:28:33.706267 kubelet[2019]: E1213 14:28:33.706184 2019 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-wh7p2 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-jvvtg" podUID="5197ab06-0896-4f77-a5c0-f1419c889ac0"
Dec 13 14:28:33.743439 sshd[3840]: Accepted publickey for core from 10.0.0.1 port 53794 ssh2: RSA SHA256:G6GGwH/f10E2j6mIu1+COWQkyppDOEetpcI3w1A8nX8
Dec 13 14:28:33.744972 sshd[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:28:33.749105 systemd-logind[1191]: New session 26 of user core.
Dec 13 14:28:33.750168 systemd[1]: Started session-26.scope.
Dec 13 14:28:34.223896 kubelet[2019]: I1213 14:28:34.223797 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh7p2\" (UniqueName: \"kubernetes.io/projected/5197ab06-0896-4f77-a5c0-f1419c889ac0-kube-api-access-wh7p2\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.223896 kubelet[2019]: I1213 14:28:34.223880 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-config-path\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.223896 kubelet[2019]: I1213 14:28:34.223906 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-etc-cni-netd\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224154 kubelet[2019]: I1213 14:28:34.223930 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-xtables-lock\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224154 kubelet[2019]: I1213 14:28:34.223954 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-bpf-maps\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224154 kubelet[2019]: I1213 14:28:34.223974 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-host-proc-sys-net\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224154 kubelet[2019]: I1213 14:28:34.223996 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5197ab06-0896-4f77-a5c0-f1419c889ac0-hubble-tls\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224154 kubelet[2019]: I1213 14:28:34.224015 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-hostproc\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224154 kubelet[2019]: I1213 14:28:34.224030 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.224359 kubelet[2019]: I1213 14:28:34.224040 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5197ab06-0896-4f77-a5c0-f1419c889ac0-clustermesh-secrets\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224359 kubelet[2019]: I1213 14:28:34.224079 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-cgroup\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224359 kubelet[2019]: I1213 14:28:34.224095 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-host-proc-sys-kernel\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224359 kubelet[2019]: I1213 14:28:34.224111 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cni-path\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224359 kubelet[2019]: I1213 14:28:34.224124 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-run\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224359 kubelet[2019]: I1213 14:28:34.224144 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-ipsec-secrets\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224504 kubelet[2019]: I1213 14:28:34.224158 2019 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-lib-modules\") pod \"5197ab06-0896-4f77-a5c0-f1419c889ac0\" (UID: \"5197ab06-0896-4f77-a5c0-f1419c889ac0\") "
Dec 13 14:28:34.224504 kubelet[2019]: I1213 14:28:34.224197 2019 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-xtables-lock\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.224504 kubelet[2019]: I1213 14:28:34.224249 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.224504 kubelet[2019]: I1213 14:28:34.224282 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.224504 kubelet[2019]: I1213 14:28:34.224299 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.224616 kubelet[2019]: I1213 14:28:34.224310 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.224616 kubelet[2019]: I1213 14:28:34.224322 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cni-path" (OuterVolumeSpecName: "cni-path") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.224616 kubelet[2019]: I1213 14:28:34.224333 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.226266 kubelet[2019]: I1213 14:28:34.226232 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:28:34.226318 kubelet[2019]: I1213 14:28:34.226282 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.226318 kubelet[2019]: I1213 14:28:34.226302 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:28:34.228373 systemd[1]: var-lib-kubelet-pods-5197ab06\x2d0896\x2d4f77\x2da5c0\x2df1419c889ac0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwh7p2.mount: Deactivated successfully.
Dec 13 14:28:34.228976 kubelet[2019]: I1213 14:28:34.228554 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:34.228976 kubelet[2019]: I1213 14:28:34.228588 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5197ab06-0896-4f77-a5c0-f1419c889ac0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:28:34.228976 kubelet[2019]: I1213 14:28:34.228600 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-hostproc" (OuterVolumeSpecName: "hostproc") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:28:34.228976 kubelet[2019]: I1213 14:28:34.228733 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5197ab06-0896-4f77-a5c0-f1419c889ac0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:34.229210 kubelet[2019]: I1213 14:28:34.229188 2019 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5197ab06-0896-4f77-a5c0-f1419c889ac0-kube-api-access-wh7p2" (OuterVolumeSpecName: "kube-api-access-wh7p2") pod "5197ab06-0896-4f77-a5c0-f1419c889ac0" (UID: "5197ab06-0896-4f77-a5c0-f1419c889ac0"). InnerVolumeSpecName "kube-api-access-wh7p2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:28:34.230726 systemd[1]: var-lib-kubelet-pods-5197ab06\x2d0896\x2d4f77\x2da5c0\x2df1419c889ac0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:28:34.230833 systemd[1]: var-lib-kubelet-pods-5197ab06\x2d0896\x2d4f77\x2da5c0\x2df1419c889ac0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:28:34.230910 systemd[1]: var-lib-kubelet-pods-5197ab06\x2d0896\x2d4f77\x2da5c0\x2df1419c889ac0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:28:34.324460 kubelet[2019]: I1213 14:28:34.324391 2019 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324460 kubelet[2019]: I1213 14:28:34.324437 2019 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5197ab06-0896-4f77-a5c0-f1419c889ac0-hubble-tls\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324460 kubelet[2019]: I1213 14:28:34.324449 2019 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-hostproc\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324460 kubelet[2019]: I1213 14:28:34.324467 2019 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5197ab06-0896-4f77-a5c0-f1419c889ac0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324460 kubelet[2019]: I1213 14:28:34.324477 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324755 kubelet[2019]: I1213 14:28:34.324487 2019 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324755 kubelet[2019]: I1213 14:28:34.324497 2019 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cni-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324755 kubelet[2019]: I1213 14:28:34.324505 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-run\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324755 kubelet[2019]: I1213 14:28:34.324513 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324755 kubelet[2019]: I1213 14:28:34.324521 2019 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-lib-modules\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324755 kubelet[2019]: I1213 14:28:34.324529 2019 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5197ab06-0896-4f77-a5c0-f1419c889ac0-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324755 kubelet[2019]: I1213 14:28:34.324536 2019 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324755 kubelet[2019]: I1213 14:28:34.324544 2019 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wh7p2\" (UniqueName: \"kubernetes.io/projected/5197ab06-0896-4f77-a5c0-f1419c889ac0-kube-api-access-wh7p2\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.324968 kubelet[2019]: I1213 14:28:34.324554 2019 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5197ab06-0896-4f77-a5c0-f1419c889ac0-bpf-maps\") on node \"localhost\" DevicePath \"\""
Dec 13 14:28:34.932965 systemd[1]: Removed slice kubepods-burstable-pod5197ab06_0896_4f77_a5c0_f1419c889ac0.slice.
Dec 13 14:28:35.130859 kubelet[2019]: I1213 14:28:35.130784 2019 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:28:35Z","lastTransitionTime":"2024-12-13T14:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 14:28:35.170247 kubelet[2019]: I1213 14:28:35.170177 2019 topology_manager.go:215] "Topology Admit Handler" podUID="cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073" podNamespace="kube-system" podName="cilium-cz7c6"
Dec 13 14:28:35.176420 systemd[1]: Created slice kubepods-burstable-podcbc8d7cc_f8a4_4bdd_95d2_63ceb9ee9073.slice.
Dec 13 14:28:35.228534 kubelet[2019]: I1213 14:28:35.228384 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-hostproc\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228534 kubelet[2019]: I1213 14:28:35.228438 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-clustermesh-secrets\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228534 kubelet[2019]: I1213 14:28:35.228468 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-xtables-lock\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228534 kubelet[2019]: I1213 14:28:35.228486 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-host-proc-sys-net\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228770 kubelet[2019]: I1213 14:28:35.228536 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-hubble-tls\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228770 kubelet[2019]: I1213 14:28:35.228648 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-cilium-cgroup\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228770 kubelet[2019]: I1213 14:28:35.228685 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-cilium-config-path\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228770 kubelet[2019]: I1213 14:28:35.228714 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-cilium-run\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228770 kubelet[2019]: I1213 14:28:35.228739 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-etc-cni-netd\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228961 kubelet[2019]: I1213 14:28:35.228771 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-lib-modules\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228961 kubelet[2019]: I1213 14:28:35.228791 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-cilium-ipsec-secrets\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228961 kubelet[2019]: I1213 14:28:35.228827 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-host-proc-sys-kernel\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228961 kubelet[2019]: I1213 14:28:35.228849 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-cni-path\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228961 kubelet[2019]: I1213 14:28:35.228868 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kkhs\" (UniqueName: \"kubernetes.io/projected/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-kube-api-access-9kkhs\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.228961 kubelet[2019]: I1213 14:28:35.228891 2019 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073-bpf-maps\") pod \"cilium-cz7c6\" (UID: \"cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073\") " pod="kube-system/cilium-cz7c6"
Dec 13 14:28:35.480455 kubelet[2019]: E1213 14:28:35.480281 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:35.481057 env[1204]: time="2024-12-13T14:28:35.480953871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cz7c6,Uid:cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073,Namespace:kube-system,Attempt:0,}"
Dec 13 14:28:35.501804 env[1204]: time="2024-12-13T14:28:35.501712732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:28:35.501804 env[1204]: time="2024-12-13T14:28:35.501760262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:28:35.502042 env[1204]: time="2024-12-13T14:28:35.501770741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:28:35.502248 env[1204]: time="2024-12-13T14:28:35.502156273Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c pid=3870 runtime=io.containerd.runc.v2
Dec 13 14:28:35.515288 systemd[1]: Started cri-containerd-ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c.scope.
Dec 13 14:28:35.539156 env[1204]: time="2024-12-13T14:28:35.539097860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cz7c6,Uid:cbc8d7cc-f8a4-4bdd-95d2-63ceb9ee9073,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\""
Dec 13 14:28:35.539868 kubelet[2019]: E1213 14:28:35.539829 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:35.543111 env[1204]: time="2024-12-13T14:28:35.542012100Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:28:35.560405 env[1204]: time="2024-12-13T14:28:35.560201404Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f2de6a963d272555e66d9f6d22fcaf05546e65885fec8686f30f0d1aa4e95d2\""
Dec 13 14:28:35.561096 env[1204]: time="2024-12-13T14:28:35.561048471Z" level=info msg="StartContainer for \"6f2de6a963d272555e66d9f6d22fcaf05546e65885fec8686f30f0d1aa4e95d2\""
Dec 13 14:28:35.575722 systemd[1]: Started cri-containerd-6f2de6a963d272555e66d9f6d22fcaf05546e65885fec8686f30f0d1aa4e95d2.scope.
Dec 13 14:28:35.608820 env[1204]: time="2024-12-13T14:28:35.607435401Z" level=info msg="StartContainer for \"6f2de6a963d272555e66d9f6d22fcaf05546e65885fec8686f30f0d1aa4e95d2\" returns successfully"
Dec 13 14:28:35.615713 systemd[1]: cri-containerd-6f2de6a963d272555e66d9f6d22fcaf05546e65885fec8686f30f0d1aa4e95d2.scope: Deactivated successfully.
Dec 13 14:28:35.649465 env[1204]: time="2024-12-13T14:28:35.649389950Z" level=info msg="shim disconnected" id=6f2de6a963d272555e66d9f6d22fcaf05546e65885fec8686f30f0d1aa4e95d2
Dec 13 14:28:35.649465 env[1204]: time="2024-12-13T14:28:35.649461155Z" level=warning msg="cleaning up after shim disconnected" id=6f2de6a963d272555e66d9f6d22fcaf05546e65885fec8686f30f0d1aa4e95d2 namespace=k8s.io
Dec 13 14:28:35.649465 env[1204]: time="2024-12-13T14:28:35.649471124Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:35.658540 env[1204]: time="2024-12-13T14:28:35.658476281Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3952 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:36.135123 kubelet[2019]: E1213 14:28:36.135042 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:36.137878 env[1204]: time="2024-12-13T14:28:36.137826751Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:28:36.153659 env[1204]: time="2024-12-13T14:28:36.153593521Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"42db370155a065515ad5708c48b2ba6ea298086ff5d20fba714491b81ebcfc61\""
Dec 13 14:28:36.154192 env[1204]: time="2024-12-13T14:28:36.154142693Z" level=info msg="StartContainer for \"42db370155a065515ad5708c48b2ba6ea298086ff5d20fba714491b81ebcfc61\""
Dec 13 14:28:36.168548 systemd[1]: Started cri-containerd-42db370155a065515ad5708c48b2ba6ea298086ff5d20fba714491b81ebcfc61.scope.
Dec 13 14:28:36.204896 systemd[1]: cri-containerd-42db370155a065515ad5708c48b2ba6ea298086ff5d20fba714491b81ebcfc61.scope: Deactivated successfully.
Dec 13 14:28:36.234650 env[1204]: time="2024-12-13T14:28:36.234255095Z" level=info msg="StartContainer for \"42db370155a065515ad5708c48b2ba6ea298086ff5d20fba714491b81ebcfc61\" returns successfully"
Dec 13 14:28:36.356030 env[1204]: time="2024-12-13T14:28:36.355958786Z" level=info msg="shim disconnected" id=42db370155a065515ad5708c48b2ba6ea298086ff5d20fba714491b81ebcfc61
Dec 13 14:28:36.356030 env[1204]: time="2024-12-13T14:28:36.356019691Z" level=warning msg="cleaning up after shim disconnected" id=42db370155a065515ad5708c48b2ba6ea298086ff5d20fba714491b81ebcfc61 namespace=k8s.io
Dec 13 14:28:36.356030 env[1204]: time="2024-12-13T14:28:36.356037975Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:36.362498 env[1204]: time="2024-12-13T14:28:36.362431524Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4014 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:36.926805 kubelet[2019]: E1213 14:28:36.926762 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:36.928668 kubelet[2019]: I1213 14:28:36.928626 2019 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5197ab06-0896-4f77-a5c0-f1419c889ac0" path="/var/lib/kubelet/pods/5197ab06-0896-4f77-a5c0-f1419c889ac0/volumes"
Dec 13 14:28:37.139345 kubelet[2019]: E1213 14:28:37.139295 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:37.141584 env[1204]: time="2024-12-13T14:28:37.141534476Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:28:37.154936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151726381.mount: Deactivated successfully.
Dec 13 14:28:37.157381 env[1204]: time="2024-12-13T14:28:37.157325954Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2ebac4b12810e5e028e1bb233692b985e47ba78690e65c27410985f3a6730dc0\""
Dec 13 14:28:37.157967 env[1204]: time="2024-12-13T14:28:37.157935400Z" level=info msg="StartContainer for \"2ebac4b12810e5e028e1bb233692b985e47ba78690e65c27410985f3a6730dc0\""
Dec 13 14:28:37.175561 systemd[1]: Started cri-containerd-2ebac4b12810e5e028e1bb233692b985e47ba78690e65c27410985f3a6730dc0.scope.
Dec 13 14:28:37.200906 env[1204]: time="2024-12-13T14:28:37.200789091Z" level=info msg="StartContainer for \"2ebac4b12810e5e028e1bb233692b985e47ba78690e65c27410985f3a6730dc0\" returns successfully"
Dec 13 14:28:37.206092 systemd[1]: cri-containerd-2ebac4b12810e5e028e1bb233692b985e47ba78690e65c27410985f3a6730dc0.scope: Deactivated successfully.
Dec 13 14:28:37.231504 env[1204]: time="2024-12-13T14:28:37.231447976Z" level=info msg="shim disconnected" id=2ebac4b12810e5e028e1bb233692b985e47ba78690e65c27410985f3a6730dc0
Dec 13 14:28:37.231504 env[1204]: time="2024-12-13T14:28:37.231495426Z" level=warning msg="cleaning up after shim disconnected" id=2ebac4b12810e5e028e1bb233692b985e47ba78690e65c27410985f3a6730dc0 namespace=k8s.io
Dec 13 14:28:37.231504 env[1204]: time="2024-12-13T14:28:37.231504022Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:37.241403 env[1204]: time="2024-12-13T14:28:37.241334045Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4071 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:37.336027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ebac4b12810e5e028e1bb233692b985e47ba78690e65c27410985f3a6730dc0-rootfs.mount: Deactivated successfully.
Dec 13 14:28:37.968249 kubelet[2019]: E1213 14:28:37.968185 2019 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:28:38.142789 kubelet[2019]: E1213 14:28:38.142739 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:38.144806 env[1204]: time="2024-12-13T14:28:38.144759415Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:28:38.170259 env[1204]: time="2024-12-13T14:28:38.162639175Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b69f80aeb9b03ae39d59f7a61ba12d7ce7f8311ee6c82678c50a511ed82201e2\""
Dec 13 14:28:38.170259 env[1204]: time="2024-12-13T14:28:38.163506730Z" level=info msg="StartContainer for \"b69f80aeb9b03ae39d59f7a61ba12d7ce7f8311ee6c82678c50a511ed82201e2\""
Dec 13 14:28:38.188440 systemd[1]: Started cri-containerd-b69f80aeb9b03ae39d59f7a61ba12d7ce7f8311ee6c82678c50a511ed82201e2.scope.
Dec 13 14:28:38.213429 systemd[1]: cri-containerd-b69f80aeb9b03ae39d59f7a61ba12d7ce7f8311ee6c82678c50a511ed82201e2.scope: Deactivated successfully.
Dec 13 14:28:38.215793 env[1204]: time="2024-12-13T14:28:38.215742780Z" level=info msg="StartContainer for \"b69f80aeb9b03ae39d59f7a61ba12d7ce7f8311ee6c82678c50a511ed82201e2\" returns successfully"
Dec 13 14:28:38.246125 env[1204]: time="2024-12-13T14:28:38.245968498Z" level=info msg="shim disconnected" id=b69f80aeb9b03ae39d59f7a61ba12d7ce7f8311ee6c82678c50a511ed82201e2
Dec 13 14:28:38.246125 env[1204]: time="2024-12-13T14:28:38.246024073Z" level=warning msg="cleaning up after shim disconnected" id=b69f80aeb9b03ae39d59f7a61ba12d7ce7f8311ee6c82678c50a511ed82201e2 namespace=k8s.io
Dec 13 14:28:38.246125 env[1204]: time="2024-12-13T14:28:38.246033741Z" level=info msg="cleaning up dead shim"
Dec 13 14:28:38.253443 env[1204]: time="2024-12-13T14:28:38.253383396Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:28:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4124 runtime=io.containerd.runc.v2\n"
Dec 13 14:28:38.336819 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b69f80aeb9b03ae39d59f7a61ba12d7ce7f8311ee6c82678c50a511ed82201e2-rootfs.mount: Deactivated successfully.
Dec 13 14:28:39.147148 kubelet[2019]: E1213 14:28:39.147110 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:39.149799 env[1204]: time="2024-12-13T14:28:39.149720932Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:28:39.185725 env[1204]: time="2024-12-13T14:28:39.185663573Z" level=info msg="CreateContainer within sandbox \"ea12137aff0be6b13abc6e83f34751a583f06006e52b7c5ddc15fe534979453c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"62c70c93dc3f200da8a03ecb5cf5935cbb6ea61de5f249e8f4f5a8af04dce49b\""
Dec 13 14:28:39.186392 env[1204]: time="2024-12-13T14:28:39.186329996Z" level=info msg="StartContainer for \"62c70c93dc3f200da8a03ecb5cf5935cbb6ea61de5f249e8f4f5a8af04dce49b\""
Dec 13 14:28:39.205615 systemd[1]: Started cri-containerd-62c70c93dc3f200da8a03ecb5cf5935cbb6ea61de5f249e8f4f5a8af04dce49b.scope.
Dec 13 14:28:39.233033 env[1204]: time="2024-12-13T14:28:39.232976276Z" level=info msg="StartContainer for \"62c70c93dc3f200da8a03ecb5cf5935cbb6ea61de5f249e8f4f5a8af04dce49b\" returns successfully"
Dec 13 14:28:39.576241 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:28:40.152140 kubelet[2019]: E1213 14:28:40.152093 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:41.482413 kubelet[2019]: E1213 14:28:41.482366 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:42.504071 systemd-networkd[1028]: lxc_health: Link UP
Dec 13 14:28:42.535774 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:28:42.536587 systemd-networkd[1028]: lxc_health: Gained carrier
Dec 13 14:28:43.482123 kubelet[2019]: E1213 14:28:43.482093 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:43.551895 kubelet[2019]: I1213 14:28:43.551823 2019 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cz7c6" podStartSLOduration=8.551791052 podStartE2EDuration="8.551791052s" podCreationTimestamp="2024-12-13 14:28:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:28:40.339570628 +0000 UTC m=+97.496430714" watchObservedRunningTime="2024-12-13 14:28:43.551791052 +0000 UTC m=+100.708651128"
Dec 13 14:28:43.697573 systemd-networkd[1028]: lxc_health: Gained IPv6LL
Dec 13 14:28:43.926582 kubelet[2019]: E1213 14:28:43.926540 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:44.159074 kubelet[2019]: E1213 14:28:44.159028 2019 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:28:46.658160 systemd[1]: run-containerd-runc-k8s.io-62c70c93dc3f200da8a03ecb5cf5935cbb6ea61de5f249e8f4f5a8af04dce49b-runc.TAMRPu.mount: Deactivated successfully.
Dec 13 14:28:48.787558 sshd[3840]: pam_unix(sshd:session): session closed for user core
Dec 13 14:28:48.789940 systemd[1]: sshd@25-10.0.0.100:22-10.0.0.1:53794.service: Deactivated successfully.
Dec 13 14:28:48.790786 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 14:28:48.791437 systemd-logind[1191]: Session 26 logged out. Waiting for processes to exit.
Dec 13 14:28:48.792302 systemd-logind[1191]: Removed session 26.